Does anyone know how I go about presenting text and audio together? I have a number of sound items for which I have created trials depending on the number of stressed syllables, e.g.:
<item sounditems1>
/ 1 = "Peter1.wav"
/ 2 = "picked a1.wav"
</item>

<item sounditems2>
/ 1 = "Peterpiper.wav"
/ 2 = "twisted neck.wav"
/ 3 = "picked tiger2.wav"
</item>
All I am hoping to do is present what is spoken in the audio files on the screen as text at the same time. To do this, I tried to create separate text stimuli with the written versions of the audio, assuming I could link them and create trials that present both at the same time, but this is where I'm running into difficulties.
Any guidance on the most efficient way to do this would be greatly appreciated.
That's exactly the right approach. Link the text element's item selection to the sound element like this:
<trial mytrial>
/ stimulusframes = [1=mysound, mytext]
[...]
</trial>

<sound mysound>
/ items = mysounditems
/ select = noreplace
</sound>

<text mytext>
/ items = mytextitems
/ select = current(mysound)
</text>

<item mysounditems>
/ 1 = "a.wav"
/ 2 = "b.wav"
[...]
</item>

<item mytextitems>
/ 1 = "written contents of a.wav"
/ 2 = "written contents of b.wav"
[...]
</item>
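For instance, adapted to your one-stress items, a minimal sketch might look like the following. The element names (onestress_sound, onestress_text, textitems1) and the written text contents are just placeholders; substitute whatever is actually spoken in your files.

<sound onestress_sound>
/ items = sounditems1
/ select = noreplace
</sound>

<text onestress_text>
/ items = textitems1
/ select = current(onestress_sound)
</text>

<item textitems1>
/ 1 = "Peter"
/ 2 = "picked a"
</item>

<trial onestress>
/ stimulusframes = [1 = onestress_sound, onestress_text]
/ trialduration = 2000
</trial>

Because onestress_text selects via current(onestress_sound), whichever .wav is chosen on a given trial is always paired with the matching written version.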
See "How to present stimulus pairs" topic in the Inquisit documentation for further details and examples.
Yeah, got it! Thanks.
On a related note, do you know how I could move a line underneath the text stimulus in time with the audio?
There's no straightforward way to do this. Your best option is to simply create videos (i.e., including your sounds and the text/animation) and display them via the <video> element instead of using separate <text> and <sound> elements.
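If you go the video route, the script side becomes very simple. A rough sketch, assuming you have rendered one video file per sound/text pairing (the file names below are placeholders):

<video myvideo>
/ items = ("Peter1.mp4", "picked_a1.mp4")
/ select = noreplace
</video>

<trial mytrial>
/ stimulusframes = [1 = myvideo]
/ trialduration = 3000
</trial>

The timing of the moving line would then be baked into each video file rather than handled by the script.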