becgwin
Group: Forum Members, Posts: 53, Visits: 313
Thanks for explaining that Dave. I really appreciate your help. Have a great day.
Rebecca

Dave
Group: Administrators, Posts: 13K, Visits: 104K
Pixels are always indexed starting at 0; considering only the y-dimension, that means the top row of pixels on any display has a value of 0. Displays differ in their dimensions: on a display with a height of 1080 pixels, the bottom row of pixels has a value of 1079; on a display with a height of 1440 pixels, that bottom row has a value of 1439, and so forth.
It's not possible to say whether and how strongly this may affect RT. More pixels does not necessarily mean that the display has greater *physical* dimensions than a display with fewer pixels / lower resolution. Some phones with tiny 7'' screens have more pixels crammed into their displays than the 14'' laptop monitor I'm staring at as I write this. I.e., the number of pixels tells you little -- if anything -- about the physical distance one has to travel to fully zoom in / zoom out an image in the task.
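Incidentally, the values you listed (1439, 1079, 899, 767) are each exactly one less than a common display height (1440, 1080, 900, 768) -- i.e., they are the bottom pixel row of the respective screen, which is consistent with a completed 'pull' dragging the cursor all the way down. If you ever want pull distances that are comparable across differently sized displays, you could normalize mouse_y by the display height at analysis time. A minimal Python sketch, assuming you know (or log) each participant's display height; the function name is mine:

def normalized_pull(mouse_y: int, display_height: int) -> float:
    """Fraction of the screen height traversed; 1.0 means the bottom pixel row."""
    return mouse_y / (display_height - 1)

# the reported values are each display's bottom row:
for y, h in [(1439, 1440), (1079, 1080), (899, 900), (767, 768)]:
    print(f"mouse_y={y}, height={h} -> {normalized_pull(y, h):.2f}")  # 1.00 each time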

becgwin
Group: Forum Members, Posts: 53, Visits: 313
Hi Dave,
Thanks for that - the error now works perfectly. I'll keep you updated, if you like, on what happens in the next round of testing re the unusual results - hopefully it won't happen again. It is really strange because it is not just AAT1; it also happened to someone in AAT2 and someone in AAT7 (???). I also went back to check it wasn't specific to a sequence I had generated, but it doesn't seem to be. I don't like running the script again when it is producing some odd results, but I can't think of anything else to check... And sorry, just one other question: for values.mouse_y, while for 'push' it is always 0, for 'pull' the values vary, e.g. 1439, 1079, 899, 767. Can you tell me what this means and whether it is important? For example, is it pixels, which change with the size of the screen? Could this affect response times?
Thanks again,
Rebecca

Dave
Group: Administrators, Posts: 13K, Visits: 104K
Re. #1: When you insert those attributes into trial.increase and trial.decrease, make sure to adjust them so they actually insert the error text into those trials' frames, not the practice trials'. You'll also want to reset those trials' frames, as in the corresponding practice trials:
<trial decrease>
...
/ontrialbegin = [if (values.expcondition == 1 && values.targetformat == "p") trial.decrease.insertstimulustime(text.error, 0)]
/ontrialbegin = [if (values.expcondition == 2 && values.targetformat == "l") trial.decrease.insertstimulustime(text.error, 0)]
...
/ontrialend = [trial.decrease.resetstimulusframes()]
...
</trial>
Re. #2: The pixeltolerance ultimately determines how far (in terms of pixels) the mouse has to be moved from its initial position before Inquisit considers the movement a deliberate response (instead of inadvertent wiggling). Since pixels don't have a universal physical size (it differs from display to display), it's difficult to answer your question in general terms. My gut feeling is: I'd probably set the tolerance a little higher (e.g. 10 pixels).
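To make that concrete: how much physical distance a given pixel tolerance covers depends on the display's pixel density. A quick Python sketch (the DPI figures are illustrative assumptions, not measurements):

def tolerance_mm(pixels: int, dpi: float) -> float:
    """Physical span of a pixel tolerance at a given pixel density (1 inch = 25.4 mm)."""
    return pixels / dpi * 25.4

print(f"{tolerance_mm(5, 96):.2f} mm")   # ~1.32 mm on a typical ~96 DPI desktop monitor
print(f"{tolerance_mm(5, 450):.2f} mm")  # ~0.28 mm on a high-density phone display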
If I happen to come up with any new ideas re. the oddly low latencies in your AAT_1 condition, I will surely let you know. So far, though, nothing I can think of given the available data would satisfactorily explain those. It just doesn't seem to make any sense...

becgwin
Group: Forum Members, Posts: 53, Visits: 313
Hi Dave,
Thanks for your input - I went back and double-checked that the script I sent you was the same as the one on the Inquisit site, and it was. I may have reloaded it after subject 9 (as a consequence of changing the batch script), but I definitely did not for the two subjects who came later and had the same issue, so it does seem to be systematic, and like you I have no idea why it occurred. If you think of anything, could you let me know? I am about to start running the survey again and would like to avoid losing data over these strange results! Their specificity to one particular category (and not the same one each time) and to only three participants is what makes it so difficult to pinpoint. Also, could I ask you two more questions:
1. Inserting an error message for the real trials. I inserted the recommended error message, i.e. /errormessage = true(error, 0), in trials AAT1 - AAT8, as recommended when error feedback is required, but I find that while in the practice trials the error message stays on for the duration of the picture, in the real trials it only flashes. I went back and compared the script for the two trial types and found that there were extra lines in the practice trials (for increase and decrease) that were not in the real trials, as follows:
for practicedecrease:
/ontrialbegin = [if (values.expcondition == 1 && values.targetformat == "p") trial.practicedecrease.insertstimulustime(text.error, 0)]
/ontrialbegin = [if (values.expcondition == 2 && values.targetformat == "l") trial.practicedecrease.insertstimulustime(text.error, 0)]

for practiceincrease:
/ontrialbegin = [if (values.expcondition == 1 && values.targetformat == "l") trial.practiceincrease.insertstimulustime(text.error, 0)]
/ontrialbegin = [if (values.expcondition == 2 && values.targetformat == "p") trial.practiceincrease.insertstimulustime(text.error, 0)]
I inserted these into trialincrease and trialdecrease for the real trials but this did not seem to fix it. Have you any suggestions as to why this occurs and how to lengthen the appearance of the error message for the real trials?
2. The pixeltolerance is set to 5 - does this seem too low to you? I have looked in help but can't find what I need there and it is difficult to test just by changing the number.
Thanks very much again Dave for your time and your help.
Rebecca

Dave
Group: Administrators, Posts: 13K, Visits: 104K
Thanks for attaching the files. I agree that the issue looks systematic in that (1) only one condition shows such low values in values.RT (AAT_1) and (2) all 24 AAT_1 trials seem to be more or less affected. None of the other conditions show anything like that as far as I can see (at least not in the specific data file you attached). Beyond that, I don't see any particular pattern.
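If you want a quick way to screen future data files for the same pattern, something like the following would do. A Python sketch; the file name is a placeholder, and the column names ('blockcode', 'values.RT') are guesses at what your script writes out:

import pandas as pd

# Inquisit data files are typically tab-delimited text
df = pd.read_csv("participant.iqdat", sep="\t")
# per-condition latency summary; suspiciously low minima / means stand out here
print(df.groupby("blockcode")["values.RT"].describe())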
I also don't see anything in the script you attached [1] that could possibly cause this -- for all intents and purposes, there is no difference in the setup of the AAT_1 vs. AAT_2, etc. trials. I.e., a hypothetical mistake in the script should affect all conditions, not just AAT_1. Similarly, a hypothetical bug in Inquisit should affect all conditions, perhaps randomly, but it should not affect only one set of trials consistently.
Likewise, a bug in the script or in Inquisit should have affected a greater portion of your participants, not just three (?). And if there were something special about those three participants' computers (say, a buggy mouse or touchpad driver), one would not expect the issue to surface only in the AAT_1 condition.
Long story short: It's weird and definitely looks systematic -- as for the source / cause of that systematicity, I unfortunately have no idea.
[1] The assumption here is that said script is identical to the one those data were collected with. If data were collected with a different version / revision of the script, all bets are off (e.g. deploy some script online, notice a mistake in it at a later point, fix mistake and deploy fixed version). Whether something like that occurred here, I have no way of knowing.

becgwin
Group: Forum Members, Posts: 53, Visits: 313
Hi Dave,
Thanks so much for looking over it. In the script, I have actually used four categories - chocolate, healthy snacks, neutral and control - had 192 trials (not including practice trials) and used tilting as the irrelevant feature stimulus. I also had to generate a number of sequences (no more than 3 formats or 3 categories in a row) as I increased the number of trials beyond what the sequence generation script could handle. I also added an error signal for the real trials.
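In case it matters, the sequence constraint amounts to rejection-sampling trial orders until no category (or format) repeats more than three times in a row -- roughly like this Python sketch (not the actual generator script, and the trial labels are made up):

import random

def constrained_shuffle(items, key, max_run=3, seed=None):
    """Reshuffle until no more than max_run consecutive items share the same key."""
    rng = random.Random(seed)
    while True:
        seq = items[:]
        rng.shuffle(seq)
        # a run longer than max_run exists iff some window of max_run + 1
        # consecutive items all share the same key
        if all(len({key(x) for x in seq[i:i + max_run + 1]}) > 1
               for i in range(len(seq) - max_run)):
            return seq

# hypothetical (category, format) pairs; apply the same check per feature as needed
trials = [(c, f) for c in ("chocolate", "healthy", "neutral", "control")
          for f in ("p", "l") for _ in range(6)]
order = constrained_shuffle(trials, key=lambda t: t[0], seed=1)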
Cheers,
Rebecca

Dave
Group: Administrators, Posts: 13K, Visits: 104K
Provide the script (and perhaps one of the data files illustrating the issue) and I'll give it a look. You can attach both to this thread by clicking +Insert when posting a reply.

becgwin
Group: Forum Members, Posts: 53, Visits: 313
Thanks, Dave. It is just a bit strange because, for each of the three participants who had the same pattern (i.e. very quick responses for just one group of pictures and one condition, such as chocolate and push), all their other responses were normal. I would have thought that if they were moving their hand before the picture appeared, it would have been more random across all the picture groups/conditions? What do you think? Could there be a glitch in my script - although it seems to affect only three participants out of 80? Would it be possible for you to glance over the script and make sure there is nothing wrong with it?
Thanks again,
Rebecca

Dave
Group: Administrators, Posts: 13K, Visits: 104K
Theoretically, those times are possible if the participant already starts moving the mouse in the "start"-trial (the one where you click the center of the screen). While possible, such values are clearly outliers and I would either remove them or otherwise treat them as such during data analysis.
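In practical terms, that could be as simple as dropping latencies below some cutoff during analysis -- a minimal Python sketch (the 150 ms cutoff, the file name, and the column name are illustrative assumptions, not recommendations):

import pandas as pd

df = pd.read_csv("participant.iqdat", sep="\t")  # Inquisit data files are typically tab-delimited
too_fast = df["values.RT"] < 150                 # the cutoff is a judgment call
df = df[~too_fast]                               # drop the outliers (or flag them instead)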