Millisecond Forums

How to capture the presentation time between stimuli?

https://forums.millisecond.com/Topic27288.aspx

By zhaokoh - 6/14/2019

Hi there,

I am trying to capture the time from when a stimulus is presented until it is replaced by another stimulus, e.g. the stimulus gdg_67soa below.

<trial pic_67soa>
/ stimulustimes = [
0 = fixation_cross;
1000 = gdg_67soa;
1067 = mask1;
1127 = mask2;
1187 = mask3;
1247 = mask4;
1307 = mask5;
]
/ timeout = parameters.fixationduration + 67 + (5 * 60)
/branch = [surveypage.word_generation]
/ ontrialbegin = [
values.trial_start = script.elapsedtime
]
/ontrialend = [
values.img_num = picture.gdg_67soa.currentvalue;
values.img_file = picture.gdg_67soa.currentitem;
values.soa = 67;
values.est_soa = script.elapsedtime - values.trial_start - 1000 - (5 * 60)
]
/ recorddata = true
</trial>

Here I store the elapsed time when the trial begins and derive the stimulus presentation time (for gdg_67soa) using
script.elapsedtime - values.trial_start - 1000 - (5 * 60)

I compared the derived values with the intended stimulustimes (67ms), and they are way over: double or more.

However, when I set /audit = true, the displayed time is very close to 67ms. This leads me to think that my derivation above may be wrong and may include lag that I haven't factored in. I also tried /pretrialpause, but it does not seem to reduce the estimate.

Any comments would be much appreciated.

Thanks!


By Dave - 6/16/2019

#1: The time you record /ontrialbegin is the time the trial object began executing. This is not necessarily identical to the time the trial can begin displaying stimuli. The trial has to wait for the start of the next display refresh cycle to do so.

#2: Your (5 * 60) term is idealized and assumes that your display / graphics card can actually hit those 60ms intervals precisely. Whether it can or not will depend on the display's refresh rate. You're likely accumulating error in your idealized calculation.
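To illustrate the accumulating error numerically, here is a minimal Python sketch. It assumes a 60 Hz display (the exact refresh rate is an assumption; the same logic applies at other rates): a stimulus can only change at a refresh, so each 60ms interval gets rounded up to a whole number of ~16.67ms frames.

```python
import math

# Assumed refresh rate; real displays vary (e.g. 59.94, 75, 144 Hz).
FRAME_MS = 1000 / 60  # ~16.667 ms per refresh at 60 Hz

def quantize_to_frames(interval_ms):
    """Round an intended interval up to a whole number of frames,
    since a stimulus change can only happen at a display refresh."""
    return math.ceil(interval_ms / FRAME_MS - 1e-9) * FRAME_MS

# The idealized mask schedule from the trial: five 60 ms intervals.
ideal = 5 * 60                                          # 300 ms
actual = sum(quantize_to_frames(60) for _ in range(5))  # ~333 ms

print(round(ideal), round(actual), round(actual - ideal))
```

Under this assumption, 60ms is 3.6 frames, so each interval stretches to 4 frames (~66.7ms), and over five masks the idealized (5 * 60) term underestimates the real elapsed time by roughly 33ms; that error then shows up in the derived est_soa.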

#3: The proper method is to do something like this:

/ ontrialend = [values.est_soa = picture.mask1.stimulusonset.1 - picture.gdg_67soa.stimulusonset.1]
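The same pattern extends to the other inter-stimulus intervals if you need them. For example (values.mask1_dur is a hypothetical additional entry you would have to declare in your <values> element):

/ ontrialend = [
    values.est_soa = picture.mask1.stimulusonset.1 - picture.gdg_67soa.stimulusonset.1;
    values.mask1_dur = picture.mask2.stimulusonset.1 - picture.mask1.stimulusonset.1;
]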
By zhaokoh - 6/16/2019

Great! Thanks Dave - I will give it a try :)
By zhaokoh - 6/18/2019

Amazing, Dave, this works! Thanks for your help here. So far I am getting very small differences (approx. +/- 5ms) between actual and expected timing across different monitors, so I'm pretty happy with this; there was one outlier out of ~200 trials (negligible). Thanks again.
By Dave - 6/18/2019

Cool -- thanks for letting me know!