Libet Clock


Psych_Josh
Guru (11K reputation)
Group: Forum Members
Posts: 85, Visits: 397
Hi Dave,

Thanks, that script info was very useful. I'm still having a bit of trouble calculating the exact position of the dot in the video from those values, given that each frame is 50ms and the video lasts 1100ms (and therefore contains 20 frames).

The question participants are asked has been modified: rather than estimating how long the sound took to transpire, they now report where on the clock it occurred (via the dot, hence the need to know the exact frame in which the sound occurred), given the 250ms delay.

Thus far I have:
(values.sound_starttime - values.responsetrial_starttime) / 20

This yields the correct response (at least, when compared to my own estimations several times), and I have created a score for this via:

<values>
/score = 0
</values>

/ontrialend = [values.score = (values.sound_starttime - values.responsetrial_starttime)/20]

Could you verify whether this is correct (sorry if that's a silly question)? The tricky thing, though, is that the video plays continuously and the dot keeps rotating, so the number of frames builds up over time. Even if the clock rotates just once before the participant responds, this yields an inaccurate result, with (values.sound_starttime - values.responsetrial_starttime) being 1100ms out per rotation. Is there a way to reset stimulus frames at the block level for background stimuli each time the clock loops (if that's a reasonable way of solving it)? I've attached the script in case it helps give a clearer picture.

Many thanks,
Josh

 



Attachments
BindingVideo_0903.iqx (638 views, 7.00 KB)
Dave
Supreme Being (1M reputation)
Group: Administrators
Posts: 13K, Visits: 104K
> [...] Could you verify if this correct (sorry if that's a silly question)?

This seems fine to me, but I would defer to your own judgment and testing results rather than my gut instincts.

> [...] the number of frames builds up over time [...] 
> Is there a way to reset stimulus frames at the block level for background stimuli each time it loops (if that's a reasonable way of solving it)?

There is neither a way nor a need to reset stimulus frames at the block level (I'm not sure what that would mean, to be perfectly honest). This seems easily solvable by simple modulo arithmetic to me. What am I missing?

EDIT: On the 2nd question: Perhaps you could clarify by giving a concrete numerical example. Contrast the situation where the response occurs during the initial rotation with, say, one where it occurs after 2 or 3 rotations, using actual (hypothetical) numbers.

Edited 8 Years Ago by Dave
Psych_Josh
Guru (11K reputation)
Group: Forum Members
Posts: 85, Visits: 397
Hi Dave,

For example, the calculation works perfectly fine for:

(705 (values.sound_starttime) - 105 (values.responsetrial_starttime)) / 20 (fps)

= 30, the correct position of the dot on the clock-face.

However, if the clock has already rotated once, for example, then it becomes:

(1805 - 105) / 20

= 85

This would be okay if it were only ever one rotation, since I could then use conditional branching to handle the arithmetic, but theoretically the participant could wait several clock rotations before responding. By resetting the stimulus frames, I meant that I was hoping the frame count would reset back to 0 each time the clock rotated, avoiding the accumulation. I apologise if the answer is quite simple and I'm missing it entirely.

Thanks again,
Josh
Dave
Supreme Being (1M reputation)
Group: Administrators
Posts: 13K, Visits: 104K
Okay, thanks. As indicated previously, modulo arithmetic to the rescue. Instead of calculating

(values.sound_starttime - values.responsetrial_starttime) / 20

you calculate values.sound_starttime *modulo* 1100 (the duration of one rotation):

(mod(values.sound_starttime,1100) - values.responsetrial_starttime) / 20

In numbers:
- 0 rotations completed
(mod(705,1100) - 105) / 20 = (705 - 105) / 20 = 30

- 1 rotation completed
(mod(1805,1100) - 105) / 20 = (705 - 105) / 20 = 30

- 2 rotations completed
(mod(2905,1100) - 105) / 20 = (705 - 105) / 20 = 30

Works for as many rotations as there may be.
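
In script terms, assuming the value names from your earlier post carry over to the attached script (adjust them if they don't), the scoring line would become something along these lines:

/ontrialend = [values.score = (mod(values.sound_starttime, 1100) - values.responsetrial_starttime) / 20]

with 1100 being the duration of a single rotation in milliseconds.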

Psych_Josh
Guru (11K reputation)
Group: Forum Members
Posts: 85, Visits: 397
Hi Dave,

Thanks for the info, and the subsequent maths lesson too! I'm glad such a thing exists to make it an easy fix.

I would like to calculate the difference between the participant's estimate and the actual position of the dot, originally performed with:

[values.difference = values.score - values.actionq]

However, if the two values fall on opposite sides of the clock's starting point (e.g. the dot is at 58, but the participant answers 2), the result is obviously skewed. Could this be resolved in a similar fashion? I apologise, again, if the answer is simpler than I am presuming.

The study is essentially ready, thanks to your help. However, I have noticed a small glitch in the video: when the clock rotates back around to its starting point, it makes a minor jump from 60 to 5, as if it's skipping those frames of the video. This occurs in formats other than .avi, which isn't exactly the most accessible format if I am to run this study online. Do you know whether this is purely to do with the specific codec parameters of the video, given that the other options I have tried tend to be compressed formats (e.g. .mpg), or whether there is a way around it so that more accessible formats can be used without this small glitch?

Many thanks (again, and again),
Josh
Dave
Supreme Being (1M reputation)
Group: Administrators
Posts: 13K, Visits: 104K
I'm not 100% clear on what you want values.difference to reflect in concrete terms. Can you elaborate, please? As so often, specific numerical examples illustrating a "normal" vs. a "problematic" case may be helpful. Thanks.

Re. the video "jumping": Yes, it's most likely a codec issue. Not every format can be forced to arbitrary frame rates; e.g. MPEG does (to the best of my knowledge) *not* support 20fps, only 25fps. You'll have to settle on some format / codec that supports the frame rate you need. I don't really see a perfect solution here with respect to running this online, since you just cannot know with 100% certainty what kinds of systems you will encounter in the wild and whether they happen to have a proper codec available.

EDIT: FWIW, I haven't tried or tested it in this particular case, but creating an animated GIF from your clock still images may be a suitable alternative to using an "actual" video / codec. Animated GIFs are handled by Inquisit's <video> element just like regular videos (AVI, MPG, WMV, etc.).

Note, though, that the initial (non-video) approach wasn't necessarily better in this regard; here, too, the varying performance characteristics of systems in the wild (different display refresh rates, keyboards with varying latency, etc.) would play a major role. On balance, I still believe the video-approach is preferable and will work better across a wider range of systems than the non-video approach.
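
To be concrete, the setup would look something along these lines (a minimal sketch; "clock.gif" is just a placeholder file name here):

<video clockanimation>
/items = ("clock.gif")
</video>

You'd then present it in the trial via /stimulusframes or /stimulustimes, just as with the current video stimulus.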

Edited 8 Years Ago by Dave
Psych_Josh
Guru (11K reputation)
Group: Forum Members
Posts: 85, Visits: 397
Hi Dave,

Sure - so, in a normal case we compare the participant's estimate of the dot's position [values.actionq] with its actual position [values.score].

For example, if values.actionq = 30 and values.score = 33, then values.difference (values.score - values.actionq) = 3, an estimate reflecting some form of anticipatory judgement of the dot's position (a negative score would mean a delayed judgement, in this instance). However, if values.actionq = 58 and values.score = 2, then values.difference = 56, which obviously would skew the average of values.difference significantly. I've tried modulo arithmetic similar to the previous solution ([values.difference = (mod(values.score,60) - values.actionq)]), but, as you can guess, this did not work.

Thanks for the info re: videos. I think, for the time being, I'll just have to restrict the participant pool to those able to run .avi files through Inquisit, via some form of instruction.

Many thanks,
Josh
Dave
Supreme Being (1M reputation)
Group: Administrators
Posts: 13K, Visits: 104K
> [...] if values.actionq = 58, and values.score = 2, then values.difference = 56

And what would be the "correct" result here? 4?

Psych_Josh
Guru (11K reputation)
Group: Forum Members
Posts: 85, Visits: 397
Hi Dave,

Yes, 4 would be the correct answer - sorry for not being clear.

Many thanks,
Josh
Dave
Supreme Being (1M reputation)
Group: Administrators
Posts: 13K, Visits: 104K
Okay, thanks for the clarification. I'll have to think about this for a while when I have some quiet time.

Another minor point of confusion for me from your previous post: You define

values.difference = (values.score - values.actionq)

as per 

> [...] For example, values.actionq = 30, values.score = 33, thus values.difference (values.score - values.actionq) = 3,
> an estimation reflecting some form of anticipatory judgement of the dot's position (a negative score would mean
> delayed judgement, in this instance) [...]

Yet you seem to reverse the terms directly afterwards in

> [...] However, if values.actionq = 58, and values.score = 2, then values.difference = 56 [...]

Just plugging in the numbers in the equation

values.difference = (values.score - values.actionq)

would give a result of -56 (i.e., 2 - 58 = -56), not +56 as stated in the example. So, I'm wondering whether it's a mistake in the description of the "problematic" case or a mistake in the description of the equation.
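
FWIW, once that's cleared up, a standard wrap-around difference may be what you're after. Something along these lines (untested, and assuming the clock face runs over 60 positions):

values.difference = mod(values.score - values.actionq + 90, 60) - 30

The +90 keeps the argument to mod() positive regardless of which value is larger, and the -30 maps the result back into a -30 to +29 range. Plugging in your two examples: mod(33 - 30 + 90, 60) - 30 = 3, and mod(2 - 58 + 90, 60) - 30 = 4.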

Edited 8 Years Ago by Dave