Millisecond Forums

eyetracker element

https://forums.millisecond.com/Topic14448.aspx

By sdeanda - 9/29/2014

Hi,

Where can I find the documentation for the "aoidurationthreshold" attribute of the eyetracker element? I see it used in the preferential looking example script, but I can't seem to find the documentation/syntax for the attribute. I'm trying to use the preferential looking script, but I want to record the total looking time to each image on each trial rather than having the trial time out after a threshold is reached. Any help is appreciated!

Thanks,

SD
By Dave - 9/29/2014

/ aoidurationthreshold determines how long a given participant must gaze within a specific area of interest before a hit is scored, i.e., before that gaze counts as a response.
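
For example (a minimal sketch; the 300 ms value is arbitrary, and any device or calibration attributes your setup needs are omitted):

<eyetracker>
/ aoidurationthreshold = 300
</eyetracker>

With this setting, a gaze would have to dwell within a stimulus's area of interest for 300 ms before it registers as a response to that stimulus.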

> I'm trying to use the preferential looking script but I want to record the total looking time to each image on each trial rather than
> having it time out after a threshold is reached.

The problem with this idea is: How is the given trial supposed to terminate, i.e., accept a gaze at an object as a response, if there is no threshold of any kind specified?

<trial puppypuppy>
/ ontrialbegin = [values.marker = (picture.puppyleft.currentindex * 100) + picture.puppyright.currentindex;]
/ ontrialbegin = [port.marker.setitem(values.marker, 1);]
/ stimulustimes = [1=puppyleft, puppyright, marker]
/ inputdevice = eyetracker
/ validresponse = (puppyleft, puppyright)
/ screencapture = true
/ draw = pen
/ showmousecursor = true
</trial>
By sdeanda - 9/29/2014

Hi,

The trial duration is set to 2500 milliseconds. A correct look is scored when the participant looks longer at the target image than at the distractor image on a given trial.

-SD
By Dave - 9/29/2014

Then -- unless I'm missing something -- you should simply be able to set /aoidurationthreshold to a value equal to or greater than /trialduration.
By sdeanda - 9/29/2014

Sorry for being unclear!

For the preferential looking task, every trial lasts 2500 ms. I want to record the looking time to each of the two images (a target image and a distractor image) presented during this time window. A correct response is awarded when the looking time to the target > distractor. I thought that I might be able to use Inquisit for this task, since it does seem to capture looking time to areas of interest.


Thanks!!
By Dave - 9/29/2014

I don't think that's possible. You would have to derive the cumulative looking time for both target and distractor from the collected gaze data *after data collection is complete* and then score correctness based on that.
By Dave - 9/29/2014

I am wondering, though, if something along the lines of

<eyetracker>
/ aoidurationthreshold = 1251
...
</eyetracker>

<trial mytrial>
/ stimulusframes = [1=target, distractor]
/ validresponse = (target, distractor)
/ correctresponse = (target)
/ inputdevice = eyetracker
/ trialduration = 2500
...
</trial>

would work... (1251 ms is just over half of the 2500 ms trial duration, so at most one of the two stimuli could ever reach the threshold within a trial.)
By sdeanda - 9/29/2014

I thought of the same thing, except that this assumes the participant will look at the screen for the entire trial length. Because we are working with children, they often look off screen. So, for example, you might have someone look at the screen for 2000 ms out of the total trial duration of 2500 ms. In this case, they could have looked at the target for 1100 ms and the distractor for 900 ms, and we would want to credit them for looking at the target longer than the distractor. There is a minimum looking time, of course, since we would not want gaze durations < 500 ms to be counted.

The /aoidurationthreshold attribute must be tracking gaze to each picture in order to evaluate whether a given threshold is reached; the question is whether I can record this.
By Dave - 9/29/2014

> The /aoidurationthreshold attribute must be tracking gaze to each picture to evaluate whether a given threshold is reached, the
> question is whether I can record this.

Nope, not as far as I am aware, at least. Calculating cumulative viewing time from the raw gaze data would be the way to go, then.
By seandr - 9/30/2014

This might be possible. On each trial, present a transparent <shape> stimulus sized to 100% of the screen width and height along with the target and distractor, and include the shape stimulus in the validresponse list:


<shape screen>
/ size = (100%, 100%)
/ color = transparent
</shape>

<trial test>
/ inputdevice = eyetracker
/ stimulusframes = [1=target, distractor, screen]
/ validresponse = (screen, target, distractor)
/ isvalidresponse = []
/ trialduration = 2500
</trial>


With each change in gaze point, /isvalidresponse is called. Within isvalidresponse, you can check the trial.test.response property to determine whether the infant is looking at either stimulus or at the screen. The code there could then note when this changes and tally up the total time for each stimulus. For example, if gaze transitions from the screen to the distractor, you'd store the onset time of the transition in a value, e.g., values.distractoronset = script.elapsedtime. If gaze then transitions back to the screen or to the target, you'd tally the total time, e.g., values.distractorgazetime += (script.elapsedtime - values.distractoronset). Your code would need to handle the various possible transitions and make sure the values are initialized properly, but it shouldn't be too complicated. Also, /isvalidresponse should always return false, or the trial will end.
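
To make that concrete, here's a rough sketch of the bookkeeping (untested; it assumes trial.test.response holds the name of the currently fixated stimulus, and the value names and the ontrialend scoring are illustrations, not part of the example above):

<values>
/ gazestim = ""
/ gazeonset = 0
/ targetgazetime = 0
/ distractorgazetime = 0
/ correct = 0
</values>

<trial test>
/ ontrialbegin = [
	values.gazestim = "";
	values.gazeonset = 0;
	values.targetgazetime = 0;
	values.distractorgazetime = 0;
]
/ inputdevice = eyetracker
/ stimulusframes = [1=target, distractor, screen]
/ validresponse = (screen, target, distractor)
/ isvalidresponse = [
	// on each gaze change, close out the previous gaze segment and open a new one
	if (trial.test.response != values.gazestim) {
		if (values.gazestim == "target") values.targetgazetime += script.elapsedtime - values.gazeonset;
		if (values.gazestim == "distractor") values.distractorgazetime += script.elapsedtime - values.gazeonset;
		values.gazestim = trial.test.response;
		values.gazeonset = script.elapsedtime;
	};
	// always return false so the trial runs its full duration
	false
]
/ ontrialend = [
	// close out whichever segment is still open when the trial times out
	if (values.gazestim == "target") values.targetgazetime += script.elapsedtime - values.gazeonset;
	if (values.gazestim == "distractor") values.distractorgazetime += script.elapsedtime - values.gazeonset;
	values.correct = (values.targetgazetime > values.distractorgazetime);
]
/ trialduration = 2500
</trial>

Closing the open segment in ontrialend also covers the case where gaze is still on one of the pictures when the trial times out.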

One potential issue: if the gaze point transitions from the target directly off screen, that transition would be missed. Not sure how likely that is.

-Sean
By sdeanda - 10/1/2014

Thanks for the response!

Because we are working with toddlers, they often look off screen. Would there be a way to call /isvalidresponse every millisecond or frame to evaluate the location of gaze instead of only evaluating when there is a change?
By seandr - 10/1/2014

Alas, no. Sounds like this would have to be computed after the fact from the gaze point data, as Dave suggested.

I've added this scenario to our list of features for a future release. It's obviously of general interest for preferential looking studies.

-Sean
By esummerell - 2/9/2022

Hi there, 
I am currently setting up a script for an eyetracking experiment where I want to be able to record the time that the participant spends looking at different stimuli (as detailed above). I'm wondering if there have been any updates that make this possible?
Thanks!
Liz