Verbal version of the Automated Operation Span task (Automated OSPAN)


s.laborde

Hi all, 



I'm trying to adapt the Automated OSPAN task from the Inquisit library to a verbal response mode, and I've only been partially successful. My main concern is whether the speech recognition engine is able to recognise letters.


Here are the steps I followed.

First, I changed the response mode to voice input:


<defaults>
/ canvassize = (100%,100%)
/ canvasaspectratio = (4,3)
/ screencolor = (white)
/ txcolor = (black)
/ txbgcolor = (transparent)
/ fontstyle = ("Verdana", 4.0%, true)
/ inputdevice = voice
/ minimumversion = "4.0.0.0"
/ halign = center
/ valign = center
</defaults>


Then I changed the way to advance past each instruction screen from a left mouse click to the voice command "Next":


<trial instructions>
/ pretrialpause = 250
/ posttrialpause = 250
/ stimulustimes = [1=instructions]
/ validresponse = ("next")
/ responsetime = 2000
/ recorddata = false
</trial>


So far everything works: I can move through the first three instruction screens by saying "Next", which means the voice input is correctly recognised by Inquisit.


The problems start when I begin the practice block and need to name the letters. Inquisit doesn't seem to recognise them, nothing happens, and even the words "Blank", "Clear" and "Exit" don't appear to be recognised:


<trial recall_letter>
/ ontrialbegin = [values.recallcount+=1]
/ ontrialbegin = [if(values.recallcount==1)values.recalldelay=500 else values.recalldelay=0]
/ pretrialpause = values.recalldelay
/ stimulusframes = [1=WhiteScreen, F, H, J, K, L, N, P, Q, R, S, T, Y, _, clear, exit, recalledletters, recallprompt, letterstrings]
/ validresponse = (F, H, J, K, L, N, P, Q, R, S, T, Y, blank, clear, exit)
/ monkeyresponse = ("F", "H", "J", "K", "exit")
/ ontrialend = [if(trial.recall_letter.response!="exit" && trial.recall_letter.response!="clear")
     {item.RecalledLetters.item=trial.recall_letter.response; values.recalledletters=concat(values.recalledletters, trial.recall_letter.response)}]
/ ontrialend = [if(trial.recall_letter.response=="clear")
     {clear(item.RecalledLetters); values.recalledletters=""}]
/ responsemessage = (F, clickF, 150)
/ responsemessage = (H, clickH, 150)
/ responsemessage = (J, clickJ, 150)
/ responsemessage = (K, clickK, 150)
/ responsemessage = (L, clickL, 150)
/ responsemessage = (N, clickN, 150)
/ responsemessage = (P, clickP, 150)
/ responsemessage = (Q, clickQ, 150)
/ responsemessage = (R, clickR, 150)
/ responsemessage = (S, clickS, 150)
/ responsemessage = (T, clickT, 150)
/ responsemessage = (Y, clickY, 150)
/ responsemessage = (clear, clickclear, 150)
/ responsemessage = (exit, clickexit, 150)
/ responsemessage = (_, click_, 150)
/ branch = [if(trial.recall_letter.response=="exit")trial.letter_feedback else trial.recall_letter]
/ trialdata = [recalledletters]
/ recorddata = true
</trial>
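
Since saying "Next" works in the instruction trials, where the valid response is a quoted phrase, I'm wondering whether with inputdevice = voice every valid response has to be declared as a quoted phrase rather than as a stimulus name. That would mean something along these lines (just a guess on my part, not something I've verified against the documentation):

/ validresponse = ("F", "H", "J", "K", "L", "N", "P", "Q", "R", "S", "T", "Y", "blank", "clear", "exit")

If that is the right direction, I'm not sure how the responsemessage attributes above, which point at the on-screen letter stimuli, would then need to change.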



Any ideas how I could make Inquisit recognise the letters when a person says them? I even tried to spell them "phonetically", the way I imagined the speech recognition engine might hear them, but nothing changed.
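
The phonetic attempt looked roughly like this (I tried a few different spellings, so the exact strings below are only an example of the pattern, not the precise code I ran):

/ validresponse = ("ef", "aitch", "jay", "kay", "el", "en", "pee", "cue", "ar", "ess", "tee", "why", "blank", "clear", "exit")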


Many thanks in advance for your help, 


Sylvain 
