Millisecond Forums

Checkboxes and know when pages change forward & backward?

https://forums.millisecond.com/Topic22305.aspx

By beaslera - 8/19/2017

During part of my experiment, I would like to use checkboxes/radiobuttons and have participants be able to move forward & backward through question pages.
I also want to know when the participant started looking at a new page or went back to look at a previous page.
Surveys can do the former natively, and blocks can do the latter through the cumulative latency of each trial.

Unfortunately, surveys only record one row of data.  So if a participant moves from page 1 to 2 and then back to 1, I wouldn't know the times of those transitions.
Meanwhile, blocks can allow forward & backward movement through trial branching (e.g., having the participant click a picture in a trial that looks like a button and then branch if they did), but checkboxes would be tedious to code the same way (and difficult to edit after coding).
I could put a surveypage into a block, but that allows neither back buttons nor pictures of buttons that I could use for branching.

Is there a way to get both of the things I want?
Perhaps there's a way to continuously record a datapoint and I can record some value that corresponds to the page?
Or some way to record the times that surveypages are shown or changed?

Thank you.
By Dave - 8/21/2017

What you can do, potentially, is run your pages via a <survey> (i.e., you get back-and-forth navigation between pages) and log, in <values>, the times a given page is displayed. You can then write those values out to a <summarydata> file. In a nutshell:

<values>
/ page_a_times = ""
/ page_b_times = ""
/ page_c_times = ""
</values>

<survey mysurvey>
/ pages = [1=a; 2=b; 3=c]
</survey>

<surveypage a>
/ ontrialbegin = [
    values.page_a_times=concat(concat(values.page_a_times, script.elapsedtime),",");
]
/ questions = [1=a_question]
</surveypage>

<surveypage b>
/ ontrialbegin = [
    values.page_b_times=concat(concat(values.page_b_times, script.elapsedtime),",");
]
/ questions = [1=b_question]
</surveypage>

<surveypage c>
/ ontrialbegin = [
    values.page_c_times=concat(concat(values.page_c_times, script.elapsedtime),",");
]
/ questions = [1=c_question]
</surveypage>

<radiobuttons a_question>
/ options = ("A1", "A2", "A3")
</radiobuttons>

<radiobuttons b_question>
/ options = ("B1", "B2", "B3")
</radiobuttons>

<radiobuttons c_question>
/ options = ("C1", "C2", "C3")
</radiobuttons>

<summarydata>
/ columns = (script.subjectid script.groupid script.startdate script.starttime values.page_a_times values.page_b_times values.page_c_times)
/ separatefiles = true
</summarydata>
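
For completeness, a minimal wrapper to actually run the survey, assuming it constitutes the whole session (mysurvey as defined above):

<expt>
/ blocks = [1=mysurvey]
</expt>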

Hope this helps!

By beaslera - 8/21/2017

Thank you, Dave.  That looks like it will work out for me.  I appreciate that you provided a script.
By beaslera - 8/26/2017

I decided to also record the stop times, like this:
<surveypage a>
/ ontrialbegin = [ values.page_a_start_times=concat(concat(values.page_a_start_times, script.elapsedtime),","); ]
/ ontrialend = [ values.page_a_stop_times=concat(concat(values.page_a_stop_times, script.elapsedtime),","); ]
/ questions = [1=a_question]
</surveypage>

That's working fine for me.
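
For reference, that snippet assumes matching <values> entries along these lines:

<values>
/ page_a_start_times = ""
/ page_a_stop_times = ""
</values>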

Then I put the same start & stop time recording on a Trial displaying an image and the numbers don't quite match the Trial's recorded latency.  Here are some example numbers I'm seeing:
start: 12011, stop: 12361, latency: 306, (stop-start=350)
start: 12378, stop: 12711, latency: 300, (stop-start=333)
start: 12728, stop: 13079, latency: 318, (stop-start=351)
start: 13095, stop: 13429, latency: 304, (stop-start=334)

How should I interpret these three values and the difference between latency and stop-start?

My guess is that the difference is caused by loading times, so the start time is when the stimuli for the Trial begin loading, but the latency doesn't start accruing until the loading is complete and the display is ready to go to the monitor.
If that's the case, then I would be able to tell when the Trial's stimuli were finished loading by subtracting the latency from the stop time.  If that's right, based on the above sample I could be as much as 44 ms closer to the correct stimulus time than just going by the start time.
Even then I am guessing I wouldn't know exactly when the loaded display would be shown on-screen since that's presumably based on video card / display refresh rate.

Thank you.
By Dave - 8/28/2017

/ontrialbegin is executed before any stimuli are displayed, i.e., before the trial's /stimulusframes or /stimulustimes presentation sequence is processed. To be able to draw a stimulus to the screen, Inquisit has to wait for the start of a display refresh cycle (if it were to start drawing in the middle of a cycle, you'd get typical "screen tearing" artifacts). Latency is measured relative to the stimulus display sequence, _not_ relative to when /ontrialbegin was processed, so that's where most, if not all, of the discrepancy comes from. /ontrialend, in turn, is executed after response collection / latency measurement has been completed, i.e., it doesn't necessarily perfectly coincide with the end of the latency period either.

And yes, you are perfectly correct: how large the discrepancy is will largely depend on system performance, particularly the graphics card and the display refresh rate.
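
To make the arithmetic concrete with your first sample row (a sketch; the split of the 44 ms overhead between the pre-display wait and the post-response processing isn't directly observable from these numbers alone):

$$
\underbrace{\text{stop} - \text{start}}_{12361 - 12011 = 350\ \text{ms}} \;=\; w_{\text{pre}} + \underbrace{\text{latency}}_{306\ \text{ms}} + w_{\text{post}}
\quad\Rightarrow\quad w_{\text{pre}} + w_{\text{post}} = 44\ \text{ms}
$$

where w_pre is the wait for the stimulus display sequence to start (refresh-cycle alignment) and w_post is the processing between the end of latency measurement and /ontrialend.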

Hope this helps.
By beaslera - 8/28/2017

Ah, ok.  That makes sense.
Unfortunately it means that neither the start time nor the stop time minus latency is going to tell me when the stimulus is actually on-screen. I think that's ok for this experiment, but I wish I had access to the stimulus onset/offset times.  There's no way to get access to that graphics refresh clock/signal, is there?

Can I make any assumptions about what is on-screen after ontrialend triggers?

For example, I have several back-to-back surveypages with stimulusframes = [1=greybackground, picX] where picX is some picture, and 
<shape greybackground>
/ color = (84, 84, 84)
/ shape = rectangle
/ size = (100%, 100%)
</shape>
While running the study, after clicking the Continue button to change surveypages, I see a flash that seems to be the entire screen going white.  So when does the previous display get replaced with a white screen?  I'm guessing from your response that it happens some time after ontrialend, with the delay being a function of the screen refresh...and the next surveypage's ontrialbegin may fire either before or after the screen blank.
Oh, and is there a way to avoid that white flash?

Thanks for your help.
By Dave - 8/28/2017

You can get the onset times by accessing and logging the respective stimulus element's timestamp property:

http://www.millisecond.com/support/docs/v5/html/language/properties/timestamp.htm
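
For instance, a minimal sketch of logging a picture's onset times across repeated page visits (reusing your picX and greybackground names; the values entry picX_onset_times is an addition of mine, analogous to the page-time logging above):

<values>
/ picX_onset_times = ""
</values>

<surveypage example_page>
/ stimulusframes = [1=greybackground, picX]
/ ontrialend = [
    values.picX_onset_times = concat(concat(values.picX_onset_times, picture.picX.timestamp), ",");
]
</surveypage>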

In addition, the standard stimulusonset columns as well as the stimulusonset property will tell you a stimulus's onset relative to the start of the given trial's stimulus presentation sequence:

http://www.millisecond.com/support/docs/v5/html/language/properties/stimulusonset.htm
http://www.millisecond.com/support/docs/v5/html/language/attributes/columns.htm

Regarding your <surveypage> question: when a survey page terminates, its stimuli are erased, i.e., overwritten with the standard screen color. This will usually take a single refresh cycle.
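
As for avoiding the white flash: if what you're seeing is the standard screen color showing through between pages, setting the script's screen color to match your grey background should in principle hide it. A minimal sketch (assuming nothing else in your script relies on the default white):

<defaults>
/ screencolor = (84, 84, 84)
</defaults>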
By beaslera - 8/28/2017

Oh, I see.
Thanks.