After data collection, you can download your ANTI-Vea data file (CSV format) from “Get your data”. Each row contains the information from a single trial of the task, and the description of each column can be found here. Starting from this raw dataset, the analysis procedure begins with a preprocessing phase, in which practice trials are removed and participants with incomplete experimental blocks, minimizations of the task window, or poor performance are identified. Note that the raw data allow the exclusion thresholds to be chosen according to the characteristics of each particular study (type of participants, design, resource constraints, etc.). For community adult samples, we recommend excluding participants with incomplete blocks or more than 25% of errors in ANTI trials, following Luna et al. (2021).
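Although the ANTI-Vea analysis code itself is written in R, the exclusion logic just described can be sketched in a few lines. The Python sketch below applies the recommended criteria for community adult samples; the column names (`participant`, `block`, `trial_type`, `correct`) and the trials-per-block count are illustrative assumptions, not the actual headers of the raw file, so adapt them to your own dataset.

```python
import pandas as pd

def preprocess(raw: pd.DataFrame,
               trials_per_block: int = 80,  # hypothetical block length
               max_anti_errors: float = 0.25) -> pd.DataFrame:
    """Drop practice trials, then keep only participants with complete
    experimental blocks and at most 25% errors in ANTI trials.
    Column names are illustrative, not the real ANTI-Vea headers."""
    data = raw[raw["block"] != "practice"].copy()

    def keep(g: pd.DataFrame) -> bool:
        complete = (g.groupby("block").size() == trials_per_block).all()
        anti = g[g["trial_type"] == "ANTI"]
        error_rate = 1 - anti["correct"].mean()
        return complete and error_rate <= max_anti_errors

    kept = [pid for pid, g in data.groupby("participant") if keep(g)]
    return data[data["participant"].isin(kept)]
```

Because the thresholds are arguments, the same function can be reused with stricter or looser criteria depending on the study.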
Once the data have been processed, the main analysis consists of computing each participant's score on the different indexes of the task. The complex structure and multiple manipulations of the ANTI-Vea allow a wide variety of indices of attentional functioning to be obtained. The ANTI-Vea core indexes encompass 8 attentional-network scores (ANTI) and 10 vigilance scores.
The ANTI scores include both the mean reaction time (RT) and the percentage of errors for the overall ANTI trials, as well as the alerting, orienting, and congruency (executive) effects. For RT in ANTI trials, incorrect trials and RTs below 200 ms or above 1500 ms are filtered out, following Luna et al. (2021). The vigilance scores include both the overall performance indexes and their slope of decrement across the blocks of the task. The measures of executive vigilance (EV) are the percentages of hits and false alarms, whereas the arousal vigilance (AV) scores are the mean RT, the standard deviation (SD) of RT, and the percentage of lapses. Note that for false alarms, only difficult ANTI trials (i.e., ANTI trials with more than 2 pixels of random noise from the target to at least one of its two adjacent flankers) are computed. By avoiding a floor effect, this makes it possible to observe a decreasing trend of false alarms across blocks (Luna et al., 2021).
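As an illustration of how the ANTI network effects are derived (the actual toolkit is in R), the sketch below applies the accuracy and 200–1500 ms RT filters and then computes the three mean-RT difference scores. The column names (`rt`, `correct`, `tone`, `cue`, `congruency`) and condition labels are assumptions for the example, not the real ANTI-Vea headers.

```python
import pandas as pd

def anti_network_effects(anti: pd.DataFrame) -> dict:
    """Mean-RT attentional-network effects for one participant's
    ANTI trials. Column names are illustrative assumptions."""
    # Keep correct trials with RTs in the 200-1500 ms window
    ok = anti[(anti["correct"] == 1) & anti["rt"].between(200, 1500)]
    m = lambda mask: ok.loc[mask, "rt"].mean()
    return {
        # Each effect is a difference between two condition means (ms)
        "alerting":   m(ok["tone"] == "no_tone") - m(ok["tone"] == "tone"),
        "orienting":  m(ok["cue"] == "invalid") - m(ok["cue"] == "valid"),
        "congruency": m(ok["congruency"] == "incongruent")
                      - m(ok["congruency"] == "congruent"),
    }
```

The same filter-then-aggregate pattern extends naturally to the error percentages and to the EV and AV vigilance scores.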
To support the analysis of ANTI-Vea data, we have developed R code. This code is embedded in a Shiny app, so that even people with no programming knowledge can easily transform their raw data file into a processed file with the scores of the different indexes of the task for all their participants. For example, you can use the following sample raw file, which has been downloaded from the ANTI-Vea database by writing “music” in the Experiment Code field. After setting the parameters of the Shiny app (for our sample file, simply leaving the default values), the application returns two CSV files: Data Trial (each row represents a task trial) and Data Participant (each row represents a session). It also returns a Technical Report in PDF format with details about the preprocessing, as well as the main tables of the results obtained. For those with some programming knowledge, a simple R script based on the previous example is openly available here. This latter format can be useful for a better understanding of the code and for modifying the analysis flow (e.g., different filters, new indexes). The variables/columns of the participant data output are described here.
Beyond the ANTI-Vea core indexes, several other outcomes of the task are worth considering. First, the conditions that are manipulated to obtain the effects of the three attentional networks and the slope of decrement in vigilance can be analyzed separately for a more detailed analysis (e.g., comparing the congruent and incongruent conditions between two groups via a 2x2 mixed ANOVA). Keeping the conditions separate also makes it possible to check whether the task manipulation worked correctly, although this can also be checked with a one-sample t-test on the difference scores or slopes from the ANTI-Vea core indexes. Second, examples of new indexes that have been or may be derived from the core indices are the slope of cognitive control (Luna et al., 2022), the mean and variability of RT in EV trials (Sanchís et al., 2020), scores from signal detection theory (i.e., sensitivity and response criterion; Luna et al., 2018), sequential effects such as post-error slowing and the Gratton effect (Román-Caballero et al., 2020), scores from psychometric-curve analysis (i.e., scale, shift, and lapse rate; Román-Caballero et al., 2022), the between-blocks variability of vigilance scores, and scores from the diffusion decision model (i.e., drift rate, boundary separation, starting point, and non-decision component).
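As a concrete example of one such derived index, the signal-detection scores can be computed from the EV hit and false-alarm rates. The Python sketch below uses the common log-linear correction to avoid infinite z-scores at rates of 0 or 1; this correction is one choice among several and is not necessarily the exact procedure of Luna et al. (2018).

```python
from statistics import NormalDist

def sdt_scores(hit_rate: float, fa_rate: float,
               n_signal: int, n_noise: int) -> tuple:
    """Sensitivity (d') and response criterion (c) from hit and
    false-alarm rates, with the log-linear correction applied.
    A sketch; rate-correction conventions vary across studies."""
    # Log-linear correction: shrink rates away from 0 and 1
    h = (hit_rate * n_signal + 0.5) / (n_signal + 1)
    f = (fa_rate * n_noise + 0.5) / (n_noise + 1)
    z = NormalDist().inv_cdf
    d_prime = z(h) - z(f)            # sensitivity
    criterion = -0.5 * (z(h) + z(f))  # response criterion
    return d_prime, criterion
```

Here `n_signal` would be the number of EV trials and `n_noise` the number of difficult ANTI trials used for false alarms (both labels are assumptions of this example).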
We are in the process of implementing these extra scores in the R code. Suggestions for new additions to the code are welcome.