Reliability and Validity of Inquisit 4 Web


Kbcrowe1
Respected Member (313 reputation)
Group: Forum Members
Posts: 2, Visits: 5
Excellent, thanks very much Dave!
Dave
Supreme Being (1M reputation)
Group: Administrators
Posts: 13K, Visits: 103K
- For most tasks, differences in the processing speed of participants' computers will not matter much. Modern computers are all quite powerful, and even at the low end, the vast majority of tasks will not exhaust a given computer's resources (see, e.g., http://link.springer.com/article/10.3758%2Fs13428-014-0471-1 ).
- Technical factors *may* come into play if a task requires precise technical parameters in order to be valid. For example, a task in which participants judge absolute color values would not be suitable for administration on their home computers, which are not under your control. The validity of such a task would hinge on all displays being calibrated identically, which you cannot ensure outside of a lab environment where the measurement apparatus is fully under your control.
- The technical variability between participants' home computers is unlikely to be greater than the variability between computers in, say, different labs across the world.
- Other, non-technical factors are likely to play a larger role, such as the fact that you have no control over the participant's environment (noise level, other distractions, no way to clarify instructions, etc.). How susceptible a specific task is to such variations is not something that can be answered generally; it will likely depend on the specific task demands (e.g., tasks that require sustained attention or complex operations over a long period vs. tasks that require relatively simple judgments or decisions).
- The published literature comparing classical paper-and-pencil or lab tests with their online counterparts largely shows that they produce comparable results.

A lot has been written and published about the advantages as well as the potential pitfalls of online experimentation by authors such as Reips. There are also quite a number of studies comparing online vs. lab administrations of a given experiment (a fairly recent example is http://www.ppgia.pucpr.br/ismir2013/wp-content/uploads/2013/09/59_Paper.pdf ).

Hope this helps.

Kbcrowe1
Respected Member (313 reputation)
Group: Forum Members
Posts: 2, Visits: 5
Our team is interested in publishing some findings that we gathered using the Inquisit 4 Web software. One concern is that the validity of cognitive tests administered on participants' home computers may not be adequate or comparable to that achieved in a controlled laboratory, given differences in the participants' home computers' processing speeds, etc. Can anyone comment on how the Inquisit 4 Web software stacks up against such concerns? Or does anyone know of published research using Inquisit 4 Web, or similar web-based administration of neurocognitive tests on participants' home computers, that speaks to the validity of these sorts of measures?