Millisecond Forums

Fault in Probabilistic Reversal Learning Task?

https://forums.millisecond.com/Topic27612.aspx

By SabRV - 8/15/2019

Dear all,

I have a question regarding the PRL:
We noticed that in some cases the "point counter" of the task stays at the same number of points collected when a participant chooses the incorrect pattern after a reversal has taken place. Even though this counts as an incorrect choice, the points do not go down (see the data attachment: starting at trial number 833, the point counter stays at 88). This problem seems to happen only if someone consistently gives an incorrect answer after a reversal has taken place, so the response category is always RE (reversal error).
Am I perhaps misunderstanding the task? I thought that whenever you get the feedback "correct" (even if the choice is actually incorrect) you gain a point, and vice versa: whenever you get the feedback "incorrect", you lose a point.
I am attaching the script as well.

Thank you so much in advance!

Sabrina
By Dave - 8/15/2019

SabRV - 8/16/2019

As far as I know, this is a deliberate task feature, but I will double-check.
By Throughput - 8/16/2019

Thank you for pointing this out. I will be interested in the diagnosis of this potential PRL issue. I need to bring up another issue I have with the PRL: we present the PRL as the first task in a batch file with a total of eight tasks, and the batch file randomly aborts after the PRL only. If I had to estimate, I would say this occurs in one out of fifteen runs of the batch file, with no apparent consistency. The only viable explanation I could come up with is a cache overload; we purposefully close and re-open Inquisit 5 before running the batch script to clear the cache. Any thoughts would be appreciated. Thanks, Rich
By Dave - 8/16/2019

DickyBoy - 8/16/2019

It's not overload; there is a script.abort() condition in the PRL (by design), and that will by default terminate the entire batch of scripts.

<block abortTask>
/ trials = [1 = finish]
/ onblockend = [
    values.abort = 1;
    script.abort();
]
</block>

If you DON'T want to terminate the entire batch, change the above to

<block abortTask>
/ trials = [1 = finish]
/ onblockend = [
    values.abort = 1;
    script.abort(false);
]
</block>

(Also see https://www.millisecond.com/forums/Topic26941.aspx )
By Throughput - 8/16/2019

Dave - 8/16/2019

Thank you so much Dave!
By SabRV - 8/16/2019

Dave - 8/16/2019

Thanks Dave, I'll await your answer then :)
By Dave - 8/18/2019

SabRV - 8/17/2019

Katja, who wrote the PRL implementation, went back through her development notes and now believes that the lack of point deduction after incorrect responses following a reversal ("reversal error", RE) was in fact a mistake. We'll be updating the library script shortly, after some additional checks, and then I will gladly assist with transferring the changes to your own, translated script.
By SabRV - 8/18/2019

Dave - 8/19/2019

Thank you so much for your answer, Dave! Let me know when you have adapted the script; we would like to start data collection soon :)
By Dave - 8/29/2019

SabRV - 8/19/2019


The updated script is available in the library now ( https://www.millisecond.com/download/library/reversallearning/probabilisticreversallearningtask/ ). The relevant change regarding the treatment of response category RE (reversal error) is below, in <trial choice>:

Previous (faulty) version:

<trial choice>
/ ontrialbegin = [
    if (values.countConsecutiveCorrect == values.maxCorrectChoices){
        list.ICFeedback.appenditem(values.countICFeedback);
        values.reversal = 1;
        values.countConsecutiveCorrect = 0;
        values.helper = values.index_correctChoice;
        values.index_correctChoice = values.index_incorrectChoice;
        values.index_incorrectChoice = values.helper;
        values.maxCorrectChoices = list.reversals.nextvalue;
        if (monkey.monkeymode){
            values.maxCorrectChoices = 4;
        };
        values.countReversals += 1;
        values.relearned = 0;
        values.countICFeedback = 0;
    } else {
        values.reversal = 0;
    };

    values.counttrials += 1;
    picture.correctStim.hposition = list.hpositions.nextvalue;
    if (picture.correctStim.hposition == "25pct"){
        values.correctChoicePosition = 1;
        values.correctKey = parameters.leftKey;
        values.incorrectKey = parameters.rightKey;
    } else {
        values.correctChoicePosition = 2;
        values.correctKey = parameters.rightKey;
        values.incorrectKey = parameters.leftKey;
    };
    picture.incorrectStim.hposition = list.hpositions.nextvalue;

    picture.correctStim_practice.hposition = picture.correctStim.hposition;
    picture.incorrectStim_practice.hposition = picture.incorrectStim.hposition;
]
/ stimulustimes = [0 = correctStim, correctStim_practice, incorrectStim, incorrectStim_practice, total]
/ timeout = parameters.maxStimDuration
/ validresponse = (parameters.leftKey, parameters.rightKey)
/ correctresponse = (values.correctKey)
/ ontrialend = [
    if (trial.choice.correct){
        values.feedback = list.correctChoiceFeedback.nextvalue;

        if (values.reversal == 1){
            values.respCategory = "lucky guess";
            values.countLG += 1;
        } else if (values.relearned == 0){
            values.relearned = 1;
            values.respCategory = "C-RE";
            values.countC += 1;
        } else {
            values.respCategory = "C";
            values.countC += 1;
        };

        if (values.feedback == 1){
            values.countICFeedback += 1;
            if (values.respCategory == "lucky guess"){
                values.respCategory = "lucky guess (PE)";
            } else if (values.respCategory == "C-RE"){
                values.respCategory = "C-RE (PE)";
            } else {
                values.respCategory = "PE";
            };
            values.totalPoints -= 1;
        } else {
            values.totalPoints += 1;
        };

        values.countConsecutiveCorrect += 1;

    } else if (trial.choice.response == values.incorrectKey){
        values.feedback = list.incorrectChoiceFeedback.nextvalue;

        if (values.relearned == 0){
            values.respCategory = "RE";
            values.countRE += 1;
        } else {
            values.countE += 1;
            if (values.feedback == 2){
                values.respCategory = "E-PE";
                values.totalPoints += 1;
            } else {
                values.respCategory = "E";
                values.totalPoints -= 1;
            };
        };

        values.countConsecutiveCorrect = 0;
    } else {
        values.feedback = 1;
        values.respCategory = "NR";
        values.countConsecutiveCorrect = 0;
        values.countNR += 1;
        values.totalPoints -= 1;
    };

    values.iti = parameters.SOA - trial.choice.latency - parameters.feedbackDuration;

    if (values.countConsecutiveCorrect == values.maxCorrectChoices){
        if (values.countReversals == values.numberReversals){
            values.stop = 1;
        };
    };
]
/ branch = [
    trial.feedback;
]
</trial>

Current (updated) version:

<trial choice>
/ ontrialbegin = [
    if (values.countConsecutiveCorrect == values.maxCorrectChoices){
        list.ICFeedback.appenditem(values.countICFeedback);
        values.reversal = 1;
        values.countConsecutiveCorrect = 0;
        values.helper = values.index_correctChoice;
        values.index_correctChoice = values.index_incorrectChoice;
        values.index_incorrectChoice = values.helper;
        values.maxCorrectChoices = list.reversals.nextvalue;
        if (monkey.monkeymode){
            values.maxCorrectChoices = 4;
        };
        values.countReversals += 1;
        values.relearned = 0;
        values.countICFeedback = 0;
    } else {
        values.reversal = 0;
    };

    values.counttrials += 1;
    picture.correctStim.hposition = list.hpositions.nextvalue;
    if (picture.correctStim.hposition == "25pct"){
        values.correctChoicePosition = 1;
        values.correctKey = parameters.leftKey;
        values.incorrectKey = parameters.rightKey;
    } else {
        values.correctChoicePosition = 2;
        values.correctKey = parameters.rightKey;
        values.incorrectKey = parameters.leftKey;
    };
    picture.incorrectStim.hposition = list.hpositions.nextvalue;

    picture.correctStim_practice.hposition = picture.correctStim.hposition;
    picture.incorrectStim_practice.hposition = picture.incorrectStim.hposition;
]
/ stimulustimes = [0 = correctStim, correctStim_practice, incorrectStim, incorrectStim_practice, total]
/ timeout = parameters.maxStimDuration
/ validresponse = (parameters.leftKey, parameters.rightKey)
/ correctresponse = (values.correctKey)
/ ontrialend = [
    if (trial.choice.correct){
        values.feedback = list.correctChoiceFeedback.nextvalue;

        if (values.reversal == 1){
            values.respCategory = "lucky guess";
            values.countLG += 1;
        } else if (values.relearned == 0){
            values.relearned = 1;
            values.respCategory = "C-RE";
            values.countC += 1;
        } else {
            values.respCategory = "C";
            values.countC += 1;
        };

        if (values.feedback == 1){
            values.countICFeedback += 1;
            if (values.respCategory == "lucky guess"){
                values.respCategory = "lucky guess (PE)";
            } else if (values.respCategory == "C-RE"){
                values.respCategory = "C-RE (PE)";
            } else {
                values.respCategory = "PE";
            };
            values.totalPoints -= 1;
        } else {
            values.totalPoints += 1;
        };

        values.countConsecutiveCorrect += 1;

    } else if (trial.choice.response == values.incorrectKey){
        values.feedback = list.incorrectChoiceFeedback.nextvalue;

        if (values.relearned == 0){
            values.respCategory = "RE";
            values.countRE += 1;
            if (values.feedback == 2){
                values.totalPoints += 1;
            } else {
                values.totalPoints -= 1;
            };

        } else {
            values.countE += 1;
            if (values.feedback == 2){
                values.respCategory = "E-PE";
                values.totalPoints += 1;
            } else {
                values.respCategory = "E";
                values.totalPoints -= 1;
            };
        };

        values.countConsecutiveCorrect = 0;
    } else {
        values.feedback = 1;
        values.respCategory = "NR";
        values.countConsecutiveCorrect = 0;
        values.countNR += 1;
        values.totalPoints -= 1;
    };

    values.iti = parameters.SOA - trial.choice.latency - parameters.feedbackDuration;

    if (values.countConsecutiveCorrect == values.maxCorrectChoices){
        if (values.countReversals == values.numberReversals){
            values.stop = 1;
        };
    };
]
/ branch = [
    trial.feedback;
]
</trial>
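
For readers who don't work in Inquisit: the corrected point-update rule can be sketched in plain Python. This is a minimal sketch of the logic in the ontrialend block above; the function and argument names are hypothetical and not part of the script.

```python
def update_points(points, choice_correct, feedback, responded=True):
    """Return the new point total after one choice trial.

    Points track the feedback *shown*, not the objective correctness of
    the choice: feedback code 1 means "incorrect" feedback, code 2 means
    "correct" feedback (mirroring the values.feedback codes above).
    """
    if not responded:
        return points - 1  # no response (NR): always lose a point
    if choice_correct:
        # Correct choice shown "incorrect" feedback is a probabilistic error (PE).
        return points - 1 if feedback == 1 else points + 1
    # Incorrect choice: in the fixed script this branch now also applies to
    # reversal errors (RE), which the faulty version left unchanged.
    return points + 1 if feedback == 2 else points - 1


# A reversal error shown "incorrect" feedback now costs a point:
print(update_points(88, choice_correct=False, feedback=1))  # 87
```

The key change is simply that the RE branch no longer bypasses the point update: an incorrect post-reversal choice now gains or loses a point according to the probabilistic feedback, just like every other response category.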