Hi all,
I'm struggling with resolving some timing issues I've encountered.
Basically:
I'm presenting a stimulus (stimA) at t = 0 ms and then erasing it with a blank stimulus (stimB) at t = 500 ms (this interval needs to be tight); the trial then continues for another 1000 ms.
Now, I want to log responses from the very start of the trial, so I set / beginresponsetime = 0.
However, if I do that, stimB.stimulusonset varies from 507 ms up to as high as 690 ms, which is unacceptable for this experiment.
If I instead set / beginresponsetime to 500 ms or higher, stimB.stimulusonset is always a tight 500 ms.
The original trial:

<trial lettertask_notarget>
/ skip = [values.run_exit || values.run_redoblock ]
/ pretrialpause = 250
/ stimulustimes = [0 = stimA; 500 = stimB]
/ timeout = 1750
/ beginresponsetime = 0
/ isvalidresponse = [if(values.resp == 0 && trial.lettertask_notarget.response != 0)
{values.resp = trial.lettertask_notarget.response}; values.resp == 999]
/ responseinterrupt = immediate
</trial>
I tried to narrow down what was causing the problem:
- removing the /isvalidresponse expression --> same problem
- giving no response at all --> same problem
- a different computer --> same problem (running on Windows 8.1, Core i7 Haswell, 16 GB RAM)
Is there really that much overhead from checking for a response (even when no response is given) that subsequent stimuli get delayed by as much as 190 ms?
And if so, is there any way to increase the timing precision?
P.S. I've thought about splitting the trial into two parts (present stimA & present stimB), but I think the problem would be the same:
- Either in the second trial I'd have to start logging responses before the stimB onset, which would introduce the exact same problem;
- Or I'd have to let the first trial run for 500 ms and then start the second trial, but the overhead from loading the stimuli would introduce unknown variability between the trials, and using a posttrialpause / pretrialpause would mean participants wouldn't be able to respond during that interval.
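For reference, the two-trial split I have in mind would look roughly like this (element names and attribute values are just a sketch, not tested; the inter-trial loading overhead I'm worried about would occur between the two elements):

<trial lettertask_part1>
/ stimulustimes = [0 = stimA]
/ beginresponsetime = 0
/ timeout = 500
/ branch = [trial.lettertask_part2]
</trial>

<trial lettertask_part2>
/ stimulustimes = [0 = stimB]
/ beginresponsetime = 0
/ timeout = 1250
</trial>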