====== MKDA ======
  * creation of a compound table containing all found coordinates (possibly matching further selection criteria)
  * performing the MKDA (or a similar algorithm, such as found in the [[http://
  * interpreting the results of the MKDA (making statistical inferences based on areas where the null hypothesis of interest can be safely rejected, followed by bringing those inferences into context of the literature and existing models)

===== Motivation =====
For various reasons, the results of a single neuroimaging study should be interpreted with caution:

  * without a clear model that underlies and fits the observed spatial representation (networks subserving the experimentally manipulated function), the results do not represent "accepted knowledge"
  * the choice of subjects, stimuli, experimentation design, etc. could have biased the results to make them less informative for the more general population case (false-positive and false-negative findings)
  * noise components in the data (on all levels) could have masked important aspects (locations, false-negative identification)

One possible way to address (some of) these issues is to perform a meta analysis across the results of several studies.

However, there are some additional problems that are only partially addressable with meta analyses of any kind, such as:
And it must be noted that even meta analyses cannot, per se, create "accepted knowledge".

===== Practical outline =====
The following steps, in detail, have to be performed to run an MKDA in NeuroElf:

  * creation of a database (tabular format, one row per coordinate, with identifying columns/fields)
  * possibly saving the database in a text-based format (e.g. when using Microsoft Excel for the first step, you should save the database as a CSV file)
  * importing the database into NeuroElf (either using the command line or the MKDA UI)
  * deciding on settings for the MKDA (e.g. smoothness of underlying indicator maps representing each statistical unit)
  * if necessary, configuring one or several contrasts of interest
  * running the analysis
  * thresholding the resulting maps
  * drawing inferences

===== Requirements =====

==== Creation of database ====
Following the introduction, the first requirement is a compound table (database) that contains all coordinates of interest, one coordinate per row, for example:
Lieberman_et_al_2010;
If you wish to use this table in Tor Wager's MKDA scripts as well, additional fields are required, for instance:
<code CSV MKDA_sample_with_fields.txt>
Lieberman_et_al_2010;
</code>
This first step can be performed in a variety of programs, with Microsoft Excel being very suitable for this task. Usually it seems most appropriate to first set up the columns (field names), followed by copying and pasting the coordinates into the table and setting all desired columns to their appropriate values. In the end, the table must be available as a text-based (ASCII) file with a row of field names at the top, followed by the actual data, one coordinate per row.
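
If you prefer to stay within MATLAB for this step, a semicolon-separated text file can also be written with basic file I/O. The following is only a minimal sketch: the column names, coordinate values, and the output filename ''MKDA_database.txt'' are purely illustrative and not prescribed by NeuroElf.

<code matlab write_mkda_table.m>
% illustrative column names and data; adapt to your own database layout
fields = {'Study', 'x', 'y', 'z'};
rows = { ...
    'Lieberman_et_al_2010', -38,  14,  -8; ...
    'Lieberman_et_al_2010',  40,  16, -10};

% write a semicolon-separated text file with a header row
fid = fopen('MKDA_database.txt', 'w');
fprintf(fid, '%s;%s;%s;%s\n', fields{:});
for rc = 1:size(rows, 1)
    fprintf(fid, '%s;%d;%d;%d\n', rows{rc, :});
end
fclose(fid);
</code>

The resulting file can then be imported as described in the next section.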

==== Importing the database into NeuroElf ====
In case you wish to perform this step on the command line (which might be particularly helpful for pinpointing the problem if an error occurs), you can use the following syntax:

<code matlab importplp.m>
% the filename is an example; use the text file created in the previous step
plp = importplpfrommkdadb('MKDA_database.txt');
</code>

This will create a [[xff - PLP format|PLP object]] containing the coordinates as well as all other columns in a numeric representation. **Each non-numeric string will be converted to a unique number** such that, for instance, each unique study label will be stored by its numeric index into the list of labels kept with the object.
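
This string-to-number conversion is conceptually similar to what MATLAB's ''unique'' function provides; the snippet below only illustrates the idea with made-up labels and does not reflect the PLP object's internal implementation:

<code matlab labels_to_numbers.m>
% illustrative study column (strings) as it might appear in the database
studies = {'Lieberman_et_al_2010'; 'Lieberman_et_al_2010'; 'Another_study_2011'};

% unique labels and, for each row, the numeric index into that list
[labels, firstidx, labelindex] = unique(studies);

% each row now refers to its study by the number stored in labelindex
disp(labels);
disp(labelindex);
</code>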

To then save the PLP object, please use the following syntax:

<code matlab saveplp.m>
% the target filename is an example; any valid path ending in .plp will do
plp.SaveAs('MKDA_database.plp');
% or simply plp.SaveAs;
</code>

Alternatively, the database can be imported (and saved) directly from within the [[neuroelf_gui - MKDA UI|MKDA UI]].

===== Running the analysis =====
For the actual procedure of running the MKDA, please refer to the [[neuroelf_gui - MKDA UI|MKDA UI]] article.

===== Algorithm description =====
The general algorithm works as follows (a simplified MATLAB sketch follows the list):

  * potentially, the coordinates are first filtered according to the selection criteria of the analysis (e.g. only rows belonging to a given contrast or condition)
  * for each study or contrast (whatever is used as statistical unit) included in a given analysis a weight is computed (i.e. normally this weight is equal across peaks within a study/contrast)
  * next, a voxel based volume is initialized (filled with zeros) for each of the statistical units, and for each point in any given study a blob (with configurable size and value distribution, e.g. a spherical indicator kernel) is added to that unit's volume at the reported coordinate
  * finally, these volumes are combined using the weights for each of the statistical units (weighted sum along the dimension of the statistical unit, resulting in a 3-dimensional spatial map)
  * to draw inferences, an empirical null distribution is derived by either spatially scrambling coordinates within a reasonable mask, such as a grey matter volume in the same space (in which case the null hypothesis tests whether the observed summary statistic of the weighted sum of blobs in any given location is higher than warranted by chance for that particular location if the reported peaks didn't carry any information on the actual spatial location/distribution of effects), or by permuting the assignment of statistical units across the conditions of a contrast
  * to allow fMRI-typical inference (uncorrected thresholding as well as corrections for multiple comparisons), the observed map is then compared against the respective null distribution
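
To make this description more concrete, here is a strongly simplified, self-contained MATLAB sketch of the core computation: a weighted sum of spherical indicator maps plus a Monte Carlo null distribution based on relocated peaks. It is written from the list above; all variable names, grid and kernel settings, and input values are made up for illustration, and it is not the NeuroElf implementation (which, among other things, works within a proper mask and offers the weighting and thresholding options mentioned above).

<code matlab mkda_sketch.m>
function mkda_sketch
% strongly simplified MKDA-style computation on a small voxel grid;
% all variable names, grid/kernel settings, and input data are illustrative
dim    = [40, 48, 40];   % size of the (hypothetical) analysis grid
radius = 3;              % spherical kernel radius in voxels
nnull  = 200;            % number of Monte Carlo iterations for the null

% illustrative peak coordinates, their statistical unit, and per-unit weights
peaks  = [20, 24, 20; 21, 25, 19; 10, 12, 30; 20, 23, 21];
unit   = [1; 1; 2; 2];
weight = [1; 1];

% precompute the voxel offsets of a spherical indicator kernel
[ox, oy, oz] = ndgrid(-radius:radius);
insphere = sqrt(ox .^ 2 + oy .^ 2 + oz .^ 2) <= radius;
offsets  = [ox(insphere), oy(insphere), oz(insphere)];

% observed map: weighted sum of per-unit indicator maps
obsmap = mkda_map(peaks, unit, weight, dim, offsets);

% Monte Carlo null: randomly relocate peaks within the grid
% (a real analysis would restrict this to a mask, e.g. grey matter)
maxstat = zeros(nnull, 1);
for nc = 1:nnull
    rpeaks = [randi(dim(1), size(peaks, 1), 1), ...
              randi(dim(2), size(peaks, 1), 1), ...
              randi(dim(3), size(peaks, 1), 1)];
    nullmap = mkda_map(rpeaks, unit, weight, dim, offsets);
    maxstat(nc) = max(nullmap(:));
end

% whole-map threshold at the 95th percentile of the maximum statistic
sorted    = sort(maxstat);
threshold = sorted(ceil(0.95 * nnull));
fprintf('threshold: %.3f, supra-threshold voxels: %d\n', ...
    threshold, sum(obsmap(:) > threshold));
end

function m = mkda_map(peaks, unit, weight, dim, offsets)
% normalized weighted sum of per-unit indicator maps
% (each unit's map is 1 within the kernel around any of its peaks)
m = zeros(dim);
for uc = 1:numel(weight)
    umap = zeros(dim);
    upeaks = peaks(unit == uc, :);
    for pc = 1:size(upeaks, 1)
        vox = bsxfun(@plus, upeaks(pc, :), offsets);
        vox = max(min(vox, repmat(dim, size(vox, 1), 1)), 1);   % keep inside grid
        umap(sub2ind(dim, vox(:, 1), vox(:, 2), vox(:, 3))) = 1;
    end
    m = m + weight(uc) .* umap;
end
m = m ./ sum(weight);
end
</code>

Note that in this sketch each unit's map is a binary indicator, so a unit contributes at most its weight to any voxel regardless of how many nearby peaks it reports.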