Help requested: how to use Diffusion Toolkit 2.5.2

What’s new in Diffusion Toolkit 0.6 – track_merge
March 15th, 2010 at 11:52 pm | by Ruopeng
Diffusion Toolkit 0.6 includes a new command-line tool called track_merge, which merges multiple track files into one so that one can view them in TrackVis at the same time. Usage is quite simple:
track_merge file1 file2 ... fileN output_file
The requirement for this to work is obvious: all input track files must have the same geometry information in their headers.
A nice thing track_merge does behind the scenes: when it merges multiple track files, it does not simply concatenate them. An id-type property tag is added to each track in the newly merged file, with each unique id representing the file the track originally came from. When the merged file is loaded in TrackVis, a property filter shows up in the Track Property panel, which users can adjust to distinguish and sub-group tracks by id (origin).
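For example, here is a hedged sketch of merging per-bundle results into one file for joint viewing (the .trk file names are illustrative; any track files with matching header geometry will do):
$ track_merge cst_left.trk cst_right.trk arcuate_left.trk merged.trk
Loading merged.trk in TrackVis then exposes the id property filter described above, so each of the three original bundles can be toggled or colored separately.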
Automated DTI preprocessing with FSL, Diffusion Toolkit, and LONI Pipeline
This page is to collect information on our current DTI preprocessing workflow. We use a combination of FSL tools (for dtifit tensor fitting and for bedpostx/probtrackx), as well as Diffusion Toolkit (for streamline tractography). The preprocessing for both of these routes consists of running the raw diffusion weighted image volumes through a handful of modular unix-style command line tools. This makes using the LONI Pipeline very appealing, since it can automate the whole process in a way that is parallelized and reproducible.
Underlying tools
Directory layout
To use these scripts as-is, the following directory structure should be used (the tweaks needed to adapt this to other layouts are, of course, simple). For an expanded listing that includes the files too, see .
exptDir - The base experiment directory.
PIPELINE - The Pipeline input/output lists get generated in here.
grad - Put the gradient tables in here.
SCRIPTS - After I edit the .pipe and setup script for a particular set of data, I save them in here.
SUBJECTS - Contains all of the individual subject folders.
20037 - An example subject folder.
1avg, 2avg, etc. - Folders generated with the output of Pipeline runs.
RAW - Contains the raw 4D DWI series. These are usually just symbolic links to the real files in a centralized repository area that contains all of our raw data.
reject - Any rejected files get moved in here with a note.
Here is the layout of folders:
/path/to/exptDir/
|-- ANALYSIS
|   |-- slicesdir_1avg
|   `-- slicesdir_2avg
|-- PIPELINE
|-- grad
|-- SCRIPTS
`-- SUBJECTS
    |-- 20037
    |   |-- 1avg
    |   |   |-- diffusion_toolkit
    |   |   `-- dtifit
    |   |-- 2avg
    |   |   |-- diffusion_toolkit
    |   |   `-- dtifit
    |   |-- RAW
    |   `-- reject
    `-- ...
Gradient tables (bvecs/bvals)
FSL
Wide format
bvals go in a separate file
Must explicitly include entries for all b0s and scan repetitions
i.e. you'll need separate bvecs/bvals files when using 4D files with differing numbers of scan repetitions stacked together.
Diffusion Toolkit
Long format
bvals get specified at the command line or GUI
Do not include rows for b0s
Automatically expands the bvecs if it senses multiple scans (a sketch of both layouts follows this list)
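As a hedged illustration of the two layouts (the gradient_table.txt name and the gx/gy/gz placeholders are made up; bvecs/bvals follow FSL's usual naming, with one column per volume and one row per direction respectively):
$ cat grad/bvecs                # FSL wide format: 3 rows x N volumes, b0 columns included
0  gx1  gx2  ...  gxN
0  gy1  gy2  ...  gyN
0  gz1  gz2  ...  gzN
$ cat grad/bvals                # FSL: 1 row x N volumes
0  1000  1000  ...  1000
$ cat grad/gradient_table.txt   # Diffusion Toolkit long format: one row per direction, no b0 rows
gx1, gy1, gz1
gx2, gy2, gz2
...
The b value for the Diffusion Toolkit route is then supplied separately, at the dti_recon command line or in the GUI.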
MATLAB Pipeline setup script
I like to create Pipeline workflows that use lists for inputs and outputs. This has 2 advantages: 1) Once the .pipe file is created, it can be locked to prevent accidental editing, and 2) A separate script can be used to impose more complex logic on the subjects that get fed into Pipeline. A MATLAB script called make_DTI_lists.m is included for this purpose.
This script is set up as a function that takes the following input parameters:
exptDir - Path of the base experiment directory from above.
outName - Desired output directory name to go inside each subject folder (ex: 1avg)
nScans - The number of scans to include in the run. Only subjects with at least this many scans will be included.
idStr - A string that identifies raw DWI series from other files that may be in the RAW folder.
subIDs - A list of subjects to potentially process.
The function can then be called interactively with MATLAB.
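A hedged example call, assuming the function signature matches the parameter list above (the argument values are illustrative, and the optional subIDs argument is omitted):
$ matlab -nodisplay -r "make_DTI_lists('/path/to/exptDir', '2avg', 2, '30DIR'); exit"
The same call can of course be typed at an interactive MATLAB prompt instead.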
Python Pipeline setup CLI
A make_DTI_lists.py Python command line tool is also included. This works the same way as the make_DTI_lists.m setup script, but doesn't rely on MATLAB, and can be executed straight from the UNIX command line.
$ ./make_DTI_lists.py -h
usage: make_DTI_lists.py [-h] [-s SUBIDS [SUBIDS ...]] exptDir outName nScans idStr

For a given experiment directory, this script checks to see how many DTI scan
repetitions are present, and generates the appropriate input lists for the
DTI_preprocessing.pipe workflow in the <exptDir>/PIPELINE directory.

positional arguments:
  exptDir               Path to base experiment directory. Should contain
                        SUBJECTS and PIPELINE directories.
  outName               Desired output directory name that will be created in
                        each <exptDir>/SUBJECTS/<subID>/ directory.
  nScans                Choose the number of scans to average.
  idStr                 Specify a string to identify raw DWI series.

optional arguments:
  -h, --help            show this help message and exit
  -s SUBIDS [SUBIDS ...]
                        Subset of subject ID directories in <exptDir>/SUBJECTS
                        to process. Default is to process all subjects.
Example: make_DTI_lists.py /path/to/exptDir 2avg 2 30DIR -s `ls /path/to/exptDir/SUBJECTS | grep "some pattern goes here"`
Pipeline workflow
For a given subject list, this workflow:
Stacks the repeated scans together with fslmerge.
Performs affine registration to correct for eddy current induced geometric distortions with eddy_correct.
Performs skull-stripping with BET.
Sets up a track folder with the appropriate inputs for the bedpostx/probtrackx probabilistic tractography tools.
Performs tensor fitting with dtifit.
Performs streamline tractography with Diffusion Toolkit (see the command sketch after this list), including
Tensor fitting with dti_recon.
Fiber tracing with dti_tracker.
Streamline filtering with spline_filter.
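A hedged sketch of what the Diffusion Toolkit leg boils down to on the command line (file names are illustrative, the stacked and eddy-corrected 4D volume is assumed to be data.nii.gz, and the thresholds are placeholders; the parameters actually used live in the .pipe modules, and each tool's -h lists the full options):
$ dti_recon data.nii.gz dti -gm ../../grad/gradient_table.txt -b 1000 -ot nii   # tensor fit; writes dti_fa.nii, dti_dwi.nii, etc.
$ dti_tracker dti tracks_raw.trk -it nii -at 35 -m dti_dwi.nii                  # streamline tracking, 35 degree angle threshold, masked by the DWI image
$ spline_filter tracks_raw.trk 1 tracks.trk                                     # smooth/filter the streamlines with step length 1
The resulting tracks.trk file is what gets loaded (or merged with track_merge) in TrackVis.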
Select Window → Variables, and change the exptDir variable to the same path as above.
Make sure that the wrapper scripts point to the right executables by changing /path/to/fsl according to your setup:
$ cat wrappers/dtifit.sh
#! /bin/bash
FSLDIR="/path/to/fsl"
Make sure the Pipeline modules point to the right wrapper scripts. It would be nice to do this path setup automatically with some setup_paths.sh script. TODO
Inspect/change the other module options (ex: BET thresholds, FA & fiber turning angle limits, etc.)
Make sure properly formatted bvecs/bvals are in the grad folder.
Wrapper scripts
Most FSL tools assume that you've sourced the main FSL environment setup script. Since the pipeline is executed by a different user, this typically won't be true. Therefore, for pipelining FSL tools it's usually a good idea to make a wrapper script that will make sure this gets done.
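A minimal sketch of what such a wrapper could look like, assuming a standard FSL layout (the path and the pass-through to dtifit are illustrative; the actual wrappers/*.sh shipped with the workflow may differ):
#! /bin/bash
# Source the FSL environment, then hand all arguments straight through to dtifit.
FSLDIR="/path/to/fsl"
. ${FSLDIR}/etc/fslconf/fsl.sh
PATH=${FSLDIR}/bin:${PATH}
export FSLDIR PATH
exec dtifit "$@"
One such wrapper per FSL module keeps the Pipeline modules themselves free of environment setup.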
The slicesdir script in FSL is a nice way to quickly generate a QC report, as it will let you pull together files spread out across many directories. For example, if I just ran the Pipeline to do the 2avg processing, I could execute the following from the exptDir:
$ slicesdir SUBJECTS/*/2avg/diffusion_toolkit/dti_fa.nii.gz
Provenance
One really neat thing about putting a preprocessing workflow like this in Pipeline is that it will automatically save a copy of the workflow - including all parameter settings - as a .prov file next to any output files that are generated. This means that there will never be any confusion over which settings were used to generate a given file.
MATLAB batch mode
If you are on a machine that doesn't allow interactive jobs, you'll need to modify make_DTI_lists.m so that it can be executed as a script. To do this:
Remove the function definition at the top.
Add explicit definitions for the variables that would normally be taken in as input arguments.
Execute the script from the Unix command line like matlab -nodisplay < make_DTI_lists.m.
Alternatively, submit this job to the cluster with qsub or the fsl_sub wrapper (see the example after this list).
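A hedged example of both routes (the run_make_lists.sh name and the qsub flags are illustrative; adapt them to your scheduler):
$ matlab -nodisplay < make_DTI_lists.m
$ echo 'matlab -nodisplay < make_DTI_lists.m' > run_make_lists.sh
$ chmod +x run_make_lists.sh
$ qsub -cwd run_make_lists.sh        # or: fsl_sub ./run_make_lists.sh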
neuroimaging/dti-preprocessing.txt · Last modified: 10:11 am PDT by John Colby
[OP] 1. After processing with Diffusion Toolkit, the whole-brain fiber tracking result I get in TrackVis shows fibers that are all very short. Why are my tracked fibers so short? Which setting might be the problem?
2. The DTI data I got from the scanner includes a dedicated FA data file for each subject. If I want to compare FA differences between two groups of subjects, can I run the statistics directly on these images? How should these images be processed?
[ncu6096] The tracking indeed doesn't look good, but I'm not familiar with how to fix it either. Roughly what parameters did you use?
[OP] I used Diffusion Toolkit. I just fed in the raw DTI data and basically didn't change anything, since this was my first time using the software. That's how the result came out, and I don't know why. Could someone more experienced please advise? And could you also answer my second question? Thanks very much!
[ncu6096] Only 6 directions?
[OP] No, that was from an earlier processing mistake; after I fixed it, it is 30 directions. May I ask whether I could add you on QQ to chat privately?
[ncu6096] On the second question: FA can also be computed with the software above. You still need spatial normalization and smoothing before you can run a whole-brain voxel-based analysis (VBA) between the two groups.
[OP] So if I get the FA data directly from the scanner, do I only need to spatially normalize and smooth those images before doing the statistical analysis? Do I still need slice-timing and head-motion correction?
[ncu6096] Slice-timing correction is for BOLD data, which has multiple time points.
[OP] In your experience, where might the problem with my tracking result above lie? Which parameters could I change to tune it?
[ncu6096] Is your data 6 directions? Was it acquired on a 1.5T scanner?