eyetracker element


esummerell
Associate Member (299 reputation)
Group: Forum Members
Posts: 9, Visits: 24
Hi there, 
I am currently setting up a script for an eyetracking experiment in which I want to record the time the participant spends looking at different stimuli (as discussed elsewhere in this thread). I'm wondering if there have been any updates that make this possible?
Thanks!
Liz
seandr
Supreme Being (144K reputation)
Group: Administrators
Posts: 1.3K, Visits: 5.6K
Alas, no. It sounds like this would have to be computed after the fact from the gaze point data, as Dave suggested.

I've added this scenario to our list of features for a future release. It's obviously of general interest for preferential looking studies.

-Sean
sdeanda
Esteemed Member (1.6K reputation)
Group: Forum Members
Posts: 9, Visits: 31
Thanks for the response!

Because we are working with toddlers, they often look off screen. Would there be a way to call /isvalidresponse every millisecond or frame to evaluate the gaze location, instead of only when there is a change?

seandr
Supreme Being (144K reputation)
Group: Administrators
Posts: 1.3K, Visits: 5.6K
This might be possible. On each trial, present a transparent <shape> stimulus sized to 100% of the screen width and height along with the target and distractor, and include the shape in the /validresponse list:


<shape screen>
/ size = (100%, 100%)
/ color = transparent
</shape>

<trial test>
/ inputdevice = eyetracker
/ stimulusframes = [1=target, distractor, screen]
/ validresponse = (target, distractor, screen)
/ isvalidresponse = []
/ trialduration = 2500
</trial>


With each change in gaze point, /isvalidresponse is called. Within /isvalidresponse, you can check the trial.test.response property to determine whether the infant is looking at the target, the distractor, or the screen. The code there can note when this changes and tally up the total time for each stimulus. For example, if gaze transitions from the screen to the distractor, you'd store the onset time of the transition in a value, e.g., values.distractoronset = script.elapsedtime. If gaze then transitions back to the screen or to the target, you'd tally the total time, e.g., values.distractorgazetime += (script.elapsedtime - values.distractoronset). Your code would need to handle the various possible transitions and make sure the values are initialized properly, but it shouldn't be too complicated. Also, /isvalidresponse should always return false, or the trial will end.
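
Something along these lines might serve as a starting point (untested; it assumes /isvalidresponse accepts the same multi-statement expressions as /ontrialbegin, with the last expression taken as the return value, and that trial.test.response can be compared against the stimulus names; the value names are just illustrative):

<values>
/ lastlook = ""
/ lookonset = 0
/ targetgazetime = 0
/ distractorgazetime = 0
</values>

<trial test>
/ inputdevice = eyetracker
/ stimulusframes = [1=target, distractor, screen]
/ validresponse = (target, distractor, screen)
/ ontrialbegin = [
    values.lastlook = "";
    values.targetgazetime = 0;
    values.distractorgazetime = 0;
]
/ isvalidresponse = [
    // close out the segment on whatever was fixated before this change
    if (values.lastlook == "target") values.targetgazetime += (script.elapsedtime - values.lookonset);
    if (values.lastlook == "distractor") values.distractorgazetime += (script.elapsedtime - values.lookonset);
    // open a new segment on the stimulus gaze just moved to
    values.lastlook = trial.test.response;
    values.lookonset = script.elapsedtime;
    // always return false so the trial runs its full duration
    false;
]
/ trialduration = 2500
</trial>

Note that the final gaze segment isn't closed out when the trial ends; /ontrialend could add (script.elapsedtime - values.lookonset) to the appropriate tally. The two gaze-time values can then be logged by adding them to /columns in the <data> element.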

One potential issue: if the gaze point transitions from the target directly off screen, that transition would be missed, since no further gaze response is registered while the child is looking off screen. Not sure how likely that is.

-Sean



Dave
Supreme Being (1M reputation)
Group: Administrators
Posts: 13K, Visits: 104K
> The /aoidurationthreshold attribute must be tracking gaze to each picture to evaluate whether a given threshold is reached; the
> question is whether I can record this.

Nope, not as far as I'm aware, at least. Calculating cumulative viewing time from the raw gaze data would be the way to go, then.

sdeanda
Esteemed Member (1.6K reputation)
Group: Forum Members
Posts: 9, Visits: 31
I thought of the same thing, except that this assumes the participant will look at the screen for the entire trial. Because we are working with children, they often look off screen. So, for example, a child might look at the screen for 2000 ms of a total trial duration of 2500 ms. In that case, they could have looked at the target for 1100 ms and the distractor for 900 ms, and we would want to credit them for looking at the target longer than the distractor. There is a minimum looking time, of course, since we would not want gaze durations under 500 ms to be counted.
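
To sketch what I mean, building on the /isvalidresponse tally above, the close-out step could simply skip any completed segment shorter than 500 ms (untested; same illustrative value names as above):

/ isvalidresponse = [
    // only credit a completed gaze segment if it lasted at least 500 ms
    if (values.lastlook == "target" && script.elapsedtime - values.lookonset >= 500) values.targetgazetime += (script.elapsedtime - values.lookonset);
    if (values.lastlook == "distractor" && script.elapsedtime - values.lookonset >= 500) values.distractorgazetime += (script.elapsedtime - values.lookonset);
    values.lastlook = trial.test.response;
    values.lookonset = script.elapsedtime;
    false;
]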

The /aoidurationthreshold attribute must be tracking gaze to each picture to evaluate whether a given threshold is reached; the question is whether I can record this.

Dave
Supreme Being (1M reputation)
Group: Administrators
Posts: 13K, Visits: 104K
I am wondering, though, if something along the lines of

<eyetracker>
/ aoidurationthreshold = 1251
...
</eyetracker>

<trial mytrial>
/ stimulusframes = [1=target, distractor]
/ validresponse = (target, distractor)
/ correctresponse = (target)
/ inputdevice = eyetracker
/ trialduration = 2500
...
</trial>

would work...

Dave
Supreme Being (1M reputation)
Group: Administrators
Posts: 13K, Visits: 104K
I don't think that's possible. You would have to derive the cumulative looking time for both target and distractor from the collected gaze data *after data collection is complete* and then score correctness based on that.

sdeanda
Esteemed Member (1.6K reputation)
Group: Forum Members
Posts: 9, Visits: 31
Sorry for being unclear!

For the preferential looking task, every trial lasts 2500 ms. I want to record the looking time to each of the two images (a target and a distractor) presented during this window. A correct response is awarded when looking time to the target exceeds looking time to the distractor. I thought I might be able to use Inquisit for this task, since it does seem to capture looking time to areas of interest.
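
If looking times were tallied into values during the trial, as in the /isvalidresponse sketch above, the target-versus-distractor comparison could presumably be computed at trial end (untested; value names hypothetical):

/ ontrialend = [
    // flag whether the target drew more total looking time than the distractor
    values.targetpreferred = (values.targetgazetime > values.distractorgazetime);
]

The flag could then be logged by adding values.targetpreferred to /columns in the <data> element.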


Thanks!!
Dave
Supreme Being (1M reputation)
Group: Administrators
Posts: 13K, Visits: 104K
Then -- unless I'm missing something -- you should simply be able to set /aoidurationthreshold to a value equal to or greater than /trialduration.
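
E.g., with /trialduration = 2500, something like:

<eyetracker>
/ aoidurationthreshold = 2500
...
</eyetracker>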
