neuroelf_methods
Whenever a program is used for data analysis, it is important for the community at large to understand what algorithms were used in the analysis. And while NeuroElf is mostly written to make algorithms accessible (user friendliness aspect), it is equally relevant to ascertain that the methods implemented in any program have been accepted by the scientific community as valid.
An example of such validation is the **//alphasim//** cluster size threshold estimation described below.

===== List of methods (overview) =====
The following list gives an overview of the methods of analysis and parameter estimation implemented in NeuroElf (as far as they exceed basic operations, such as plain averaging across a dimension, or auxiliary functions used for string manipulation, etc.).
+ | |||
+ | ==== Cluster size threshold estimation (alphasim) ==== | ||
+ | [[alphasim|Cluster size threshold estimation]] is a method that can be used to account for the fact that a regular whole-brain map is made up of multiple (partially) independent tests. One common way is to simply adapt the statistical threshold by dividing the desired false-positive rate (i.e. typically 5 per cent = 0.05) by (an estimate of) the number of independent tests. However, this can be too stringent in some cases where larger swaths of cortex (neurocomputational network nodes) respond to an experimental manipulation below the then required detection threshold. Instead of ensuring significance of results solely by applying a voxel-wise corrected statistical threshold it is possible to estimate **how large clusters are, given the smoothness of the residual, that appear in a given search space at random**. I.e. the alpha-rate (false positives among performed tests) can be estimated by simulating statistical maps of the desired kind and then selecting the appropriate cluster size threshold to ensure that at most 5 per cent of maps (with the residual exhibiting the same smoothness) would show a false positive cluster. The resulting **pair of uncorrected statistical threshold and cluster size threshold together** then correct a whole-brain map to a family-wise-error corrected threshold of desired strength (again usually 0.05). This algorithm is | ||
+ | |||
+ | * implemented in function **'' | ||
+ | * accepts a mask (sub-space specification) | ||
+ | * can be applied to surface statistics (given the mesh vertices and topology, as well as an estimate of the smoothness) | ||
+ | * allows to estimate the cluster size threshold for fully independent components of a conjunction analysis | ||
+ | * as a still experimental feature allows to apply a shift in the Z-distribution to account for shifts in the observed distribution of a statistical map (e.g. by-chance global " | ||
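The core simulation logic can be sketched in a few lines of Python. This is an illustration of the Monte Carlo approach only, not NeuroElf's MATLAB implementation; the function name, the Gaussian smoothing model, and all parameter defaults are assumptions for this sketch.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, label
from scipy.stats import norm

def cluster_size_threshold(shape=(24, 24, 24), fwhm_vox=2.0, voxel_p=0.01,
                           alpha=0.05, n_sim=1000, seed=0):
    """Simulate smooth Gaussian noise maps and return the cluster size k
    such that at most `alpha` of the maps contain a supra-threshold
    cluster of k or more voxels (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    sigma = fwhm_vox / (2.0 * np.sqrt(2.0 * np.log(2.0)))  # FWHM -> sigma
    z = norm.ppf(1.0 - voxel_p)          # one-tailed uncorrected threshold
    maxes = []
    for _ in range(n_sim):
        noise = gaussian_filter(rng.standard_normal(shape), sigma)
        noise /= noise.std()             # restore unit variance after smoothing
        labels, n = label(noise > z)     # face-connectivity clustering
        maxes.append(int(np.bincount(labels.ravel())[1:].max()) if n else 0)
    maxes.sort()
    idx = min(len(maxes) - 1, int(np.ceil((1.0 - alpha) * len(maxes))))
    return int(maxes[idx]) + 1           # smallest size exceeding the quantile
```

A mask would restrict both the simulated volume and the clustering to the search space; greater smoothness yields larger by-chance clusters and thus a larger threshold.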
+ | |||
+ | ==== Cluster table generation ==== | ||
+ | [[Cluster table|Cluster tables]] are often presented in publications describing analyses where whole-brain mapping was performed, i.e. the attempt in localizing the spatial nodes within cortex that subserve a specific function. This function is | ||
+ | |||
+ | * implemented in a combination of an M-file, **'' | ||
+ | * whereas the M-file provides a command-line interface with rich options for output formatting, converting coordinates, | ||
+ | * and the C/MEX-file provides the actual clustering of the binary (thresholded and masked) volume into separate spatial nodes | ||
+ | |||
+ | Once a (thresholded) map has been segregated into separate volumes (such that voxels of different clusters do not " | ||
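The clustering-and-tabulation step can be sketched as follows. This is a plain Python illustration, not the compiled C/MEX code NeuroElf uses; the function name and the returned row format are invented for the example.

```python
import numpy as np
from scipy.ndimage import label

def cluster_table(stat_map, threshold, min_size=1):
    """Cluster a thresholded map and list (size, peak value, peak coordinate)
    per cluster, largest first (illustrative sketch)."""
    labels, n_clusters = label(stat_map >= threshold)  # face-connectivity
    rows = []
    for c in range(1, n_clusters + 1):
        mask = labels == c
        size = int(mask.sum())
        if size < min_size:                  # drop sub-threshold-size clusters
            continue
        masked = np.where(mask, stat_map, -np.inf)   # peak within this cluster
        peak = np.unravel_index(int(np.argmax(masked)), stat_map.shape)
        rows.append((size, float(stat_map[peak]), tuple(int(i) for i in peak)))
    return sorted(rows, key=lambda r: -r[0])
```

A real implementation would additionally convert peak voxel indices to a coordinate system (e.g. Talairach/MNI) and format the rows for publication.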
+ | |||
+ | ==== Conjunction analysis (minimum t-statistic) ==== | ||
+ | A [[Conjunction analysis|conjunction analysis]] can be informative when, across the brain, the overlap of two statistical tests is of interest. The most stringent test that can be applied is that of requiring that, in each considered voxel, both tests must be significant at the desired level. This functionality is | ||
+ | |||
+ | * implemented in the function **'' | ||
+ | * implemented in the function **'' | ||
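The minimum-statistic rule can be illustrated with a short Python sketch (not NeuroElf code; the sign-consistency handling shown here is a common convention and an assumption for this example):

```python
import numpy as np

def conjunction_min_t(*t_maps):
    """Minimum-statistic conjunction: a voxel survives only if every map
    exceeds the threshold applied afterwards, so the conjoint statistic is
    the smallest absolute value; maps disagreeing in sign yield 0."""
    t = np.stack(t_maps)
    same_sign = np.all(t > 0, axis=0) | np.all(t < 0, axis=0)
    min_abs = np.min(np.abs(t), axis=0)
    return np.where(same_sign, np.sign(t[0]) * min_abs, 0.0)
```

Thresholding the returned map at a t-value then guarantees that each contributing map individually exceeds that t-value wherever the conjunction is supra-threshold.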
+ | |||
+ | ==== Mediation analysis ==== | ||
+ | [[Mediation analysis]] as a whole can be described as the estimation (and test) of separate path coefficients, | ||
+ | |||
+ | * implemented in function **'' | ||
+ | * options are: a*b product testing via bootstrapping or Sobel test, and robust regression | ||
+ | * supports multi-dimensionaging) data for X, M, and Y | ||
+ | |||
+ | An example would be, on the level of a between-subject effect, that a randomly assigned condition (X, e.g. strategy to apply to stimuli) has an effect on outcome (Y, e.g. appetite to a specific type of stimulus or difference in appetite to two kinds of stimuli) via a specific brain region (or network of regions) that work/s as a mediator/s (Mi, e.g. pre-frontal control regions). For a within-subjects design, a test could be whether, on any given trial, the response in pre-frontal cortex during an instructional cue (strategy stimulus) has an effect on outcome (self-reported craving for depicted food) via another brain region. In that case, either X (which brain regions has an influence on the " | ||
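The a*b product test named above (Sobel test plus a bootstrap confidence interval) can be sketched for the single-mediator, non-robust case. This is a plain Python illustration with hypothetical names, not NeuroElf's implementation:

```python
import numpy as np

def _ols(y, X):
    """OLS betas and standard errors; X must include an intercept column."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    dof = X.shape[0] - X.shape[1]
    sigma2 = resid @ resid / dof
    se = np.sqrt(sigma2 * np.diag(np.linalg.inv(X.T @ X)))
    return beta, se

def sobel_mediation(x, m, y, n_boot=2000, seed=0):
    """Indirect effect a*b, its Sobel z, and a bootstrap 95% CI (sketch)."""
    n = len(x)
    Xa = np.column_stack([np.ones(n), x])       # M = i + a*X
    Xb = np.column_stack([np.ones(n), x, m])    # Y = i + c'*X + b*M
    (_, a), (_, se_a) = _ols(m, Xa)
    beta_b, se_b_vec = _ols(y, Xb)
    b, se_b = beta_b[2], se_b_vec[2]
    z = a * b / np.sqrt(a ** 2 * se_b ** 2 + b ** 2 * se_a ** 2)  # Sobel
    rng = np.random.default_rng(seed)
    ab = []
    for _ in range(n_boot):                     # resample cases with replacement
        idx = rng.integers(0, n, n)
        a_s = _ols(m[idx], Xa[idx])[0][1]
        b_s = _ols(y[idx], Xb[idx])[0][2]
        ab.append(a_s * b_s)
    lo, hi = np.percentile(ab, [2.5, 97.5])
    return a * b, z, (lo, hi)
```

For robust mediation, each internal OLS fit would be replaced by a robust regression (see the robust regression section below).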
+ | |||
+ | ==== Multi-level kernel density analysis (MKDA / meta analysis) ==== | ||
+ | [[MKDA|Multi-level kernel density analysis]] is trying to determine whether reported "peak coordinates" | ||
+ | |||
+ | * implemented as a method for [[xff - PLP format|PLP objects]], [[plp.MKDA|PLP:: | ||
+ | * available via the [[NeuroElf GUI - MKDA UI|Meta Analysis interface]] | ||
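The density computation at the heart of the method can be sketched as follows. The "multi-level" aspect is that each study contributes at most once per voxel, regardless of how many of its peaks fall nearby; this is a simplified, unweighted Python illustration, not NeuroElf's implementation.

```python
import numpy as np

def mkda_density(peak_lists, shape, radius):
    """Proportion of studies reporting a peak within `radius` voxels of
    each voxel. peak_lists holds one list of (x, y, z) coordinates per
    study (illustrative sketch; real MKDA weights studies, e.g. by
    sample size, and compares the map against a permutation null)."""
    grid = np.indices(shape).reshape(3, -1).T      # all voxel coordinates
    density = np.zeros(shape)
    for peaks in peak_lists:                       # one indicator map per study
        hit = np.zeros(len(grid), dtype=bool)
        for p in peaks:                            # sphere around each peak
            hit |= np.sum((grid - np.asarray(p)) ** 2, axis=1) <= radius ** 2
        density += hit.reshape(shape)              # study counts at most once
    return density / len(peak_lists)
```

A significance threshold would then be obtained by recomputing the density map many times with randomly relocated peaks and comparing the observed values against that null distribution.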
+ | |||
+ | ==== Ordinary least-squares (OLS) regression ==== | ||
+ | [[OLS|Ordinary least-squares (OLS) regression]] is the most generic way of applying the General Linear Model (GLM) so as to estimate " | ||
+ | |||
+ | * the most general implementation is done in the **'' | ||
+ | * to assess the significance of the regression (single beta or computed contrasts), the **'' | ||
+ | * a special implementation is contained in the **'' | ||
+ | |||
+ | An additional small number of function files also perform some flavor of linear regression, but those are not applied to functional imaging data (e.g. the function **'' | ||
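The mass-univariate GLM fit and the contrast t-statistic can be sketched in Python (an illustration of the standard OLS formulas, not NeuroElf's MATLAB code; function names are invented):

```python
import numpy as np

def glm_fit(X, Y):
    """Mass-univariate OLS: X is (time x regressors), Y is (time x voxels).
    Returns betas (regressors x voxels), residual variance per voxel, dof."""
    B = np.linalg.pinv(X) @ Y                  # beta = pinv(X) * Y
    resid = Y - X @ B
    dof = X.shape[0] - np.linalg.matrix_rank(X)
    sigma2 = (resid ** 2).sum(axis=0) / dof    # unbiased residual variance
    return B, sigma2, dof

def contrast_t(X, B, sigma2, c):
    """t-map for contrast vector c over the fitted betas."""
    c = np.asarray(c, float)
    var_c = c @ np.linalg.pinv(X.T @ X) @ c    # design-dependent variance factor
    return (c @ B) / np.sqrt(sigma2 * var_c)
```

A single-beta test is the special case of a contrast vector with one 1 and zeros elsewhere.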
+ | |||
+ | ==== Robust regression ==== | ||
+ | [[Robust regression]], | ||
+ | |||
+ | * implemented for a single univariate regression (e.g. a time course, T, regressed on a design matrix, X) in function **'' | ||
+ | * implemented for a common design matrix (X) on mass-univariate data (e.g. fMRI imaging on the first or second level) in function **'' | ||
+ | * is used in the GLM computation routine for first-level data (MDM:: | ||
+ | * implemented for individual design matrices (Xi, where the third dimension is the number of cases) on data (e.g. for a whole-brain robust mediation) in function **'' | ||
+ | * this is used by the NeuroElf GUI's mediation interface when robust regression is selected | ||
+ | * after the regression, to compute t-statistics from the output (beta values and sample weights), the function **'' | ||
+ | * a special case is for when, in a correlation, | ||
+ | * and to compare means between groups, the two simplified functions **'' |
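The iterative re-weighting can be sketched with Tukey's bisquare weight function, a common choice for this type of regression (whether NeuroElf uses exactly this weight function is an assumption here; the code is a plain Python illustration, not NeuroElf's implementation):

```python
import numpy as np

def robust_fit(X, y, tune=4.685, n_iter=50):
    """Iteratively reweighted least squares with bisquare weights (sketch).
    Returns the robust beta estimates and the final per-sample weights."""
    w = np.ones(len(y))
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        # weighted least squares with the current weights
        beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
        resid = y - X @ beta
        # robust scale estimate from the median absolute deviation
        scale = np.median(np.abs(resid - np.median(resid))) / 0.6745
        if scale <= np.finfo(float).eps * np.abs(y).max():
            break                                  # (near-)perfect fit reached
        u = resid / (tune * scale)
        # bisquare: smooth down-weighting, zero beyond the tuning cutoff
        w = np.where(np.abs(u) < 1.0, (1.0 - u ** 2) ** 2, 0.0)
    return beta, w
```

The returned weights are what a downstream t-statistic computation would use to adjust the effective degrees of freedom, matching the beta-plus-weights output described above.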
neuroelf_methods.txt · Last modified: 2013/02/02 00:38 by jochen