dc.description.abstract | fMRI data can help identify areas of the brain that are activated due to external stimuli.
Comparing activated brain regions in participants with and without a disease can help researchers understand how certain brain activation patterns relate to that disease. Before group analysis can be performed, fMRI data must be preprocessed to reduce artifacts and to standardize brain regions across subjects. In many fMRI studies, a general linear model is fit to enable hypothesis testing at each voxel. With thousands of voxels, it is necessary to control for multiple comparisons. Random field theory (RFT) and methods controlling the false discovery rate (FDR) are the most commonly employed approaches. However, applying RFT or FDR to thousands of p-values may inflate the Type II error rate at each voxel, which hinders scientifically meaningful findings.
In this thesis, traditional approaches to analyzing fMRI data were compared to an approach based on the likelihood paradigm, using multi-subject data. Under the likelihood paradigm, the family-wise error rate stays small as the number of comparisons increases, an advantage over the traditional approaches. We found the likelihood paradigm approach to be very conservative, as no voxels were found to be active; this results in false positive and false negative rates all equal to 0. Similar results were found in the simulation. Using a larger effect size in the simulation brought the results closer to what we expect: false negative rates are not close to 1, and false positive rates hover around 0. In both the data analysis and the simulation study, similar trends were seen for the RFT and FDR methods. The small amount of activation in the likelihood paradigm approach can possibly be attributed to the alternatives we defined for the likelihood ratio.
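The multiple-comparisons step described in the abstract — correcting thousands of voxelwise p-values at an FDR level — can be sketched as below. This is an illustrative sketch only, not code from the thesis: the Benjamini-Hochberg procedure is one standard way to control the FDR, and the simulated p-values and all names here are assumptions for demonstration.

```python
# Sketch (not from the thesis): Benjamini-Hochberg FDR control applied to
# voxelwise p-values, as commonly done after fitting a GLM at each voxel.
# The simulated "voxel" p-values below are purely illustrative.
import numpy as np

def benjamini_hochberg(p_values, alpha=0.05):
    """Return a boolean mask of hypotheses rejected at FDR level alpha."""
    p = np.asarray(p_values, dtype=float)
    m = p.size
    order = np.argsort(p)
    # Step-up thresholds: alpha * k / m for the k-th smallest p-value.
    thresholds = alpha * np.arange(1, m + 1) / m
    below = p[order] <= thresholds
    rejected = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])  # largest rank passing its threshold
        rejected[order[: k + 1]] = True   # reject all smaller p-values too
    return rejected

rng = np.random.default_rng(0)
# 10,000 hypothetical voxels: 9,500 null p-values and 500 small p-values
# standing in for truly active voxels with a large effect size.
null_p = rng.uniform(size=9500)
active_p = rng.uniform(0.0, 0.001, size=500)
p_values = np.concatenate([null_p, active_p])

mask = benjamini_hochberg(p_values, alpha=0.05)
print(mask.sum(), "voxels declared active")
```

With a large effect size (very small p-values for the 500 active voxels), the procedure recovers most of them while keeping the expected proportion of false discoveries below alpha, which mirrors the pattern the abstract reports for the larger-effect-size simulation.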