Hi there,
I ran a 3-part study over roughly a 1.5-year period, mainly using Inquisit 5 (we upgraded from Inquisit 4 part-way through, which had no negative effects that I can see). The key tasks I want to ask about here are the Cued Go/No-Go, Single N-Back, Wisconsin Card Sorting Task, CRTT, and PSAP. I used the scripts from the Millisecond website and did not edit the parameters.
Most of the data saved appropriately. I wanted both the raw and summary data from each task, with the summary data being the most important. In some cases, however, data for certain participants did not record at all for some of these tasks, even though data recorded fine for their other tasks (whether those were run through other programs, by starting Inquisit again for a different task, or by using Inquisit to call MediaLab).
I've had a look through similar forum questions but haven't managed to get to the bottom of it yet. Points that might be relevant are:
- I ran the tasks off the C-drive of multiple computers (which is what we are supposed to do in our research lab) and backed up the data throughout. I've checked all of these locations for the missing data
- There were no network/computer issues during the time of recording
- I think we used the most up-to-date version of Inquisit 5; however, I can check this
- For some of the participants, I noted that the data log said the results were saved to the O-drive of the computer (even though the tasks were run off the C-drive). However, there was nothing saved on the O-drive.
Another issue is that at times the raw data saved for certain tasks (seemingly randomly across participants) but the summary data did not (e.g., with the cued GNG task). This may be another broad question, but is there a way to calculate the relevant summary data (/meanRT_verticalcue_gotarget, /meanRT_horizontalcue_gotarget, /inhibitionerror_v)?
Please let me know if you need further info or copies of the scripts.
Thanks,
Jo
Here are some things that come to mind:
#1: Summary files might not have been created if a script was terminated at a point where there was no data to record yet. E.g. during instruction pages at the start, or during a practice block that is set to not record any data (per /recorddata = false).
#2: If you did not specify /separatefiles = true in the <summarydata> element, then additional participants' data would have been appended to an already existing summary data file on that computer. I.e. you might have summary files that contain data for more than a single participant. This seems worth checking.
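To check that quickly, here is a minimal sketch in Python/pandas. The file name and the subject-identifier column ("subjectid") are assumptions on my part; substitute whatever your summary files and your script's /columns setting actually use (it may be "subject" instead):

import pandas as pd

# Assumed file and column names -- substitute the actual summary file name and
# the subject-identifier column your script writes (e.g. "subject" or "subjectid").
summary = pd.read_csv("cuedgonogo_summary.iqdat", sep="\t")

# List the distinct participants whose rows ended up in this single file.
participants = summary["subjectid"].unique()
print(len(participants), "participant(s) in this file:", list(participants))

If that prints more than one participant, the "missing" summary data may simply be sitting in a shared file rather than in per-participant files.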
#3: Things can also go wrong when some other process has an existing data file locked. Suppose your script is instructed to write to a general (multi-participant) summary data file as in #2 above. If you open that file in some other application, say Excel, to inspect it, but don't close it before running the next participant, no new data can be appended to the summary file.
#4: Regarding "For some of the participants, I noted that the data log said that the results were saved to the O-drive of the computer [...]": Inquisit wouldn't just decide to do that by itself, i.e. it is highly likely that in those cases the script was run from the O-drive, contrary to the established lab protocol (run from the C-drive). I have no way of knowing how that drive was configured, but if Inquisit / the executing user account had no write permissions there, or the network drive failed in some other way, it's possible that those data files could not have been written.
#5: Expanding on #4 more generally, it's always possible that you or someone else involved with the data collection forgot to back up, accidentally deleted, or misplaced a few files. I cannot judge how (un)likely that is in your particular case, but in my experience things like this happen (more often than outright technical failures).
#6: Finally, if you have complete raw data files, you should be able to calculate all relevant summary statistics from the data in those raw files. This is merely a matter of processing them with your preferred statistical analysis package and aggregating the raw latency and correctness data accordingly.
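To make #6 concrete for the cued GNG variables you asked about, here is a rough sketch in Python/pandas. The file name, the column names ("cue", "targettype", "correct", "latency"), and the value codings are assumptions on my part -- check them against the header of your actual raw files and the summary-variable definitions in the Millisecond script before relying on the numbers:

import pandas as pd

# Assumed file and column names -- verify against your actual raw data file
# and the script's variable definitions before trusting the output.
raw = pd.read_csv("cuedgonogo_raw.iqdat", sep="\t")

# Mean RT on correct go-target trials, split by cue orientation
# (analogous to /meanRT_verticalcue_gotarget and /meanRT_horizontalcue_gotarget).
go_correct = raw[(raw["targettype"] == "go") & (raw["correct"] == 1)]
print(go_correct.groupby("cue")["latency"].mean())

# Proportion of (incorrect) responses on no-go trials following a vertical cue
# (analogous to /inhibitionerror_v).
nogo_vertical = raw[(raw["targettype"] == "nogo") & (raw["cue"] == "vertical")]
print((nogo_vertical["correct"] == 0).mean())

You will also want to restrict these calculations to the relevant test blocks (e.g. via the blockcode column) so that practice trials don't enter the averages.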