Dave
Group: Administrators
Posts: 13K, Visits: 104K

A modified script is attached to my previous reply. You can use it to test the changes.

DSaraqini
Group: Forum Members
Posts: 21, Visits: 59

I just added all the materials to this Google folder and made the link shareable. There are 600+ pictures and I couldn't add them here. There are also a few scripts in the folder, but the one that needs the changes is the one named KM_Full_EmotionTask_v40: https://drive.google.com/drive/folders/1rJGsELdo02UgyiXCJ-T3oobzIJCKuX3f?usp=sharing

Dave
Group: Administrators
Posts: 13K, Visits: 104K

OK. If you want me to program this for you, please provide the images and any other files the script requires to run.

Edited: Since I don't have the files and cannot run the script, the attached version is untested.

DSaraqini
Group: Forum Members
Posts: 21, Visits: 59

I think the first structure is more of what we're looking for, as long as the setup doesn't interfere with the randomization of the items.
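On the randomization worry: the recording can be done entirely in ontrialbegin / ontrialend expressions that only assign to values, which does not change how items are selected, so the randomization should be unaffected. Below is a minimal, untested sketch of that kind of hook, assuming (per the discussion further down in this thread) that the target trials are named MissingPerson_V1 and MissingPerson_V2; all value names here are made up for illustration:

<values>
/ targetcount = 0
/ v1_order = 0
/ v1_acc = 0
/ v2_order = 0
/ v2_acc = 0
</values>

// attributes to add to the existing <trial MissingPerson_V1> element (V2 analogous)
/ ontrialbegin = [
	values.targetcount = values.targetcount + 1;
	values.v1_order = values.targetcount;
]
/ ontrialend = [
	// 1 if 'Q' was pressed, assuming 'Q' is this trial's correctresponse
	values.v1_acc = trial.MissingPerson_V1.correct;
]

The resulting values can then be written to one row per participant with a summarydata element (a sketch follows the next post below).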

Dave
Group: Administrators
Posts: 13K, Visits: 104K

Here's the thing. There are at least two different ways to read your explanation, and two different data structures result. Suppose, for the sake of illustration, that your targets are A to D, as in

<item targets>
/ 1 = "A"
/ 2 = "B"
/ 3 = "C"
/ 4 = "D"
</item>

One possible data structure is to have the items as columns, with one variable per item (order) indicating the ordinal position in which the item appeared (1 if it was the 1st target seen by this participant, 2 if it was the 2nd target seen, etc.) and another variable per item (acc) indicating accuracy (1 if recognized, 0 if not). Another possible data structure is to order by targets 1 to 4, with one variable indicating the item seen as 1st target, 2nd target, etc., and one accuracy variable for each. Both versions are based on the exact same data and contain the same information, but the structure is totally different; what's also totally different is the programming required to produce one type of structure versus the other. That is why details and specificity matter.
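For concreteness, here is a rough, untested sketch of the first structure in Inquisit terms, using the hypothetical A-to-D targets above; every name in it is illustrative and would have to be mapped onto the actual trials and items in the script:

<values>
/ targetsseen = 0
/ A_order = 0
/ A_acc = 0
/ B_order = 0
/ B_acc = 0
/ C_order = 0
/ C_acc = 0
/ D_order = 0
/ D_acc = 0
</values>

// each target trial stamps its own order and accuracy when it runs, e.g. in the trial showing A:
// / ontrialbegin = [values.targetsseen = values.targetsseen + 1; values.A_order = values.targetsseen;]
// / ontrialend = [values.A_acc = trial.showA.correct;]

// one row per participant, items as columns
<summarydata>
/ columns = (script.subjectid, values.A_order, values.A_acc, values.B_order, values.B_acc,
	values.C_order, values.C_acc, values.D_order, values.D_acc)
</summarydata>

The second structure would instead use values keyed to ordinal position (e.g. first_item, first_acc, second_item, second_acc, and so on), with each target trial writing its item name and accuracy into whichever slot its presentation position dictates.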

Dave
Group: Administrators
Posts: 13K, Visits: 104K

> I am not quite sure on your question regarding the "V1" trial presenting the "V2" stimulus (and vice versa) because it's a part of the script that another researcher worked on. I just know that it's supposed to be that way because of some counterbalancing.

I'd recommend double-checking with the other researcher. It looks like a mistake and not related to counterbalancing.

Your lengthy explanation re. accuracy helps some, but it does not actually answer my questions. Again, please specifically respond to:

> Finally (for now), I'm having trouble parsing "performance on the first target seen in summary data, first two targets seen, and last two targets seen." Would that mean that performance on target 1 is in effect considered twice? Target 1 performance and the average (?) of Target 1 + Target 2 performance?

and

> And, what, exactly, do you even consider the "1st target seen"? Is it supposed to be the 1st stimulus shown during the "MissingPerson_Posters" block, or is the "1st target seen" the stimulus shown by the 1st instance of the "MissingPerson_Vx" trial during the "MainTaskVx" block?

If it's easier for you, take the two examples you've given in your last response and give the exact data output you'd want for each.

DSaraqini
Group: Forum Members
Posts: 21, Visits: 59

The targets are not supposed to include the practices, just the ones from the main task (so each of MissingPerson_V1 and MissingPerson_V2).

I am not quite sure about your question regarding the "V1" trial presenting the "V2" stimulus (and vice versa) because it's a part of the script that another researcher worked on. I just know that it's supposed to be that way because of some counterbalancing.

By performance we mean how accurate participants were at recognizing (indicated by them pressing 'Q') the targets on the main task. We are repeating targets just because we aren't sure which we want to look at. We want to know if we can record accuracy for each target by where it was presented (1st presented, 2nd presented, etc.) in the main task.

E.g., let's say that participant 1 got to the main task and is now looking at the pictures, trying to judge them by emotion while also being on the lookout for the targets. While doing the task, the participant noticed the first target, which happened to be /1 = "043_m_f_h_a (2).jpg" (since they noticed and pressed 'Q', they were accurate for this target); a few pictures later the second target appeared, /3 = "176_o_m_n_a (2).jpg", but they didn't notice it, so their performance on this target is not as good. Then we want to see the performance of participant 2, who happened to get target /3 = "176_o_m_n_a (2).jpg" first and target /1 = "043_m_f_h_a (2).jpg" later on. This way we can check the level of accuracy on the targets depending on the order in which they were presented in the main task. In other words, are participants only noticing target /1 = "043_m_f_h_a (2).jpg" when it's presented as the first picture, before the other targets, or are they fairly accurate at that target regardless of the order?
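To make those two cases concrete under an items-as-columns layout (one order column and one accuracy column per target), the rows for these two participants could look something like this; the column names are purely illustrative, and participant 2's accuracy was not specified above, so it is left open:

subject   order_043_m_f_h_a   acc_043_m_f_h_a   order_176_o_m_n_a   acc_176_o_m_n_a
1         1                   1                 2                   0
2         2                   ?                 1                   ?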

Dave
Group: Administrators
Posts: 13K, Visits: 104K

It is not, I'm afraid. In particular, it's not clear to me what you consider "targets", specifically whether that is supposed to include the practice missing person or not. I'll note, too, that it seems like in some conditions ("Practice_outWM"), participants will not be exposed to the practice missing person again, so there cannot be any performance attached to it.

Other parts are confusing as well, such as why this "V1" trial presents the "V2" stimulus

<trial MissingPerson_Poster V1>
/ stimulustimes = [0=ready; 500=MissingPerson V2]
/ trialduration = 15500
/ validresponse = ()
/ correctresponse = ()
/ beginresponsetime = 15500
</trial>

and this "V2" trial presents the "V1" one:

<trial MissingPerson_Poster V2>
/ stimulustimes = [0=ready; 500=MissingPerson V1]
/ trialduration = 15500
/ validresponse = ()
/ correctresponse = ()
/ beginresponsetime = 15500
</trial>

With respect to my question about what you consider "performance," you stated: "Performance would be how accurate they were at recognizing the first target, the first two targets, etc..." But that is ambiguous. You have two parts in the procedure that can be construed as requiring "recognizing the target": (1) the main task, where participants have to choose if the face is neutral (N) or happy (Y) and press 'Q' if it's any of the missing persons, and (2) the face recognition task for the missing persons. Are you referring to one, the other, or both?

Finally (for now), I'm having trouble parsing "performance on the first target seen in summary data, first two targets seen, and last two targets seen." Would that mean that performance on target 1 is in effect considered twice? Target 1 performance and the average (?) of Target 1 + Target 2 performance? And, what, exactly, do you even consider the "1st target seen"? Is it supposed to be the 1st stimulus shown during the "MissingPerson_Posters" block, or is the "1st target seen" the stimulus shown by the 1st instance of the "MissingPerson_Vx" trial during the "MainTaskVx" block?

I really need you to be way more precise (and please refer to things by the names they appear under in the actual script / code).

DSaraqini
Group: Forum Members
Posts: 21, Visits: 59

Sorry, my bad. Performance would be how accurate they were at recognizing the first target, the first two targets, etc. Is that helpful enough?

Dave
Group: Administrators
Posts: 13K, Visits: 104K

> In our study we:
> - show 5 target missing persons,
> - have some practice trials and the main task, where participants have to choose if the face is neutral (N) or happy (Y) and press 'Q' if it's any of the missing persons,
> - then a face recognition task for the missing persons.
> We were wondering if there's a way to get performance on the first target seen in the summary data, the first two targets seen, and the last two targets seen. So can we record accuracy data from the individual missing-person trials, and in order? I have attached all of our code to this post.

And by "performance" you mean what exactly?