Users can explore the various UNet-inspired architectures integrated into the nnUNet framework and compare the performance of architectural changes in a consistent and transparent way, with the network architecture as the only independent variable in comparisons on a single dataset.
nnUNet was developed by Isensee et al.; further details on the original framework can be found in the original publication:
Isensee, F., Jaeger, P. F., Kohl, S. A., Petersen, J., & Maier-Hein, K. H. (2020). nnU-Net: a self-configuring method
for deep learning-based biomedical image segmentation. Nature Methods, 1-9.
The files with the main changes relative to the original nnUNet are:
- experiment_planner_dense3DUNet_v21.py
- experiment_planner_inception3DUNet_v21.py
- experiment_planner_residual3DUNet_v21.py
- experiment_planner_SpatialMultiAttention_3DUNet_v21.py
- experiment_planner_SpatialSingleAttention_3DUNet_v21.py
- experiment_planner_ChannelSpatialAttention_3DUNet_v21.py
- conv_blocks.py
- generic_modular_custom_UNet.py
- generic_modular_UNet.py
- nnUNetTrainerV2_DenseUNet.py
- nnUNetTrainerV2_InceptionUNet.py
- nnUNetTrainerV2_ResidualUNet.py
- nnUNetTrainerV2_GenericSpatialMultiAttentionUNet.py
- nnUNetTrainerV2_GenericSpatialSingleAttentionUNet.py
- nnUNetTrainerV2_GenericChannelSpatialAttentionUNet.py
Before installing, make sure you have cloned the Advanced_nnUNet repository and that your PyTorch installation is up to date:
git clone https://github.com/niccolo246/Advanced_nnUNet
cd Advanced_nnUNet
pip install -e .
Make sure your dataset follows the nnUNet format; for details, refer to the original nnUNet page. For environment variable setup, see the original nnUNet documentation; alternatively, you can set the paths directly in nnunet/paths.py.
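For convenience, the three environment variables that the original nnUNet expects can be exported in your shell. A minimal sketch follows; the directory paths below are placeholders and should be adjusted to your machine:

```shell
# Placeholder paths -- point these at your own data directories.
export nnUNet_raw_data_base="/home/data2/user/nnUNet_raw_data_base"
export nnUNet_preprocessed="/home/data2/user/nnUNet_preprocessed"
export RESULTS_FOLDER="/home/data2/user/nnUNet_trained_models"
```

Adding these lines to `~/.bashrc` makes them persistent across sessions.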
Preprocessing (e.g., cropping to the nonzero region, normalization, and format conversion) writes the generated data to /home/data2/user/nnUNet_preprocessed/Task255_prostate and also produces a corresponding plan.pkl file, which stores the hyperparameters used during training.
How the plan.pkl file is used can be seen in nnUNetTrainerV2_InceptionUNet:
stage_plans = self.plans['plans_per_stage'][self.stage]
conv_kernel_sizes = stage_plans['conv_kernel_sizes']
blocks_per_stage_encoder = stage_plans['num_blocks_encoder']
blocks_per_stage_decoder = stage_plans['num_blocks_decoder']
pool_op_kernel_sizes = stage_plans['pool_op_kernel_sizes']
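Since the plan file is an ordinary pickled dictionary, it can be inspected directly. The sketch below builds a minimal dummy plans dict containing only the keys the trainer snippet above reads (a real plan.pkl has many more fields, and the values here are illustrative), round-trips it through pickle, and prints the per-stage settings:

```python
import os
import pickle
import tempfile

# Dummy plans dict mirroring the keys read by the trainer snippet above.
# Values are illustrative only; a real plan.pkl contains many more entries.
plans = {
    "plans_per_stage": {
        0: {
            "conv_kernel_sizes": [[3, 3, 3]] * 5,
            "num_blocks_encoder": (1, 2, 3, 4, 4),
            "num_blocks_decoder": (1, 1, 1, 1),
            "pool_op_kernel_sizes": [[2, 2, 2]] * 4,
        }
    }
}

path = os.path.join(tempfile.mkdtemp(), "plan.pkl")
with open(path, "wb") as f:
    pickle.dump(plans, f)

with open(path, "rb") as f:
    loaded = pickle.load(f)

stage_plans = loaded["plans_per_stage"][0]
for key in ("conv_kernel_sizes", "num_blocks_encoder",
            "num_blocks_decoder", "pool_op_kernel_sizes"):
    print(key, "->", stage_plans[key])
```

The same pattern works for inspecting the plan files that the experiment planners actually generate.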
To commence experiment planning, perform the following steps:
nnUNet_plan_and_preprocess -t TASK_ID
nnUNet_plan_and_preprocess -t TASK_ID -pl3d ExperimentPlanner3DResidualUNet_v21
nnUNet_plan_and_preprocess -t TASK_ID -pl3d ExperimentPlanner3DInceptionUNet_v21
nnUNet_plan_and_preprocess -t TASK_ID -pl3d ExperimentPlanner3DDenseUNet_v21
nnUNet_plan_and_preprocess -t TASK_ID -pl3d ExperimentPlanner3DSpatialSingleAttentionUNet_v21
nnUNet_plan_and_preprocess -t TASK_ID -pl3d ExperimentPlanner3DSpatialMultiAttentionUNet_v21
nnUNet_plan_and_preprocess -t TASK_ID -pl3d ExperimentPlanner3DChannelSpatialAttentionUNet_v21
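Among other things, these planners choose the pooling schedule (the pool_op_kernel_sizes stored in the plan file). The toy sketch below, which is not code from this repository, shows the basic arithmetic: how a pooling schedule maps a patch size to the feature-map size at each encoder stage:

```python
def downsampled_shapes(patch_size, pool_op_kernel_sizes):
    """Return the feature-map shape after each pooling stage."""
    shape = list(patch_size)
    shapes = [tuple(shape)]
    for kernel in pool_op_kernel_sizes:
        # Each pooling op divides every spatial axis by its kernel size.
        shape = [s // k for s, k in zip(shape, kernel)]
        shapes.append(tuple(shape))
    return shapes

# e.g. a 128x128x128 patch with four 2x2x2 pooling stages
for s in downsampled_shapes((128, 128, 128), [[2, 2, 2]] * 4):
    print(s)  # (128,128,128), (64,64,64), (32,32,32), (16,16,16), (8,8,8)
```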
All of the following networks are modified from, and trained on, the 3d_fullres configuration.
Run the following, depending on which architecture you wish to train:
For FOLD in [0, 1, 2, 3, 4], run:
nnUNet_train 3d_fullres nnUNetTrainerV2_ResidualUNet TASK_NAME_OR_ID FOLD -p nnUNetPlans_ResidualUNet_v2.1
For FOLD in [0, 1, 2, 3, 4], run:
nnUNet_train 3d_fullres nnUNetTrainerV2_InceptionUNet TASK_NAME_OR_ID FOLD -p nnUNetPlans_InceptionUNet_v2.1
For FOLD in [0, 1, 2, 3, 4], run:
nnUNet_train 3d_fullres nnUNetTrainerV2_DenseUNet TASK_NAME_OR_ID FOLD -p nnUNetPlans_DenseUNet_v2.1
For FOLD in [0, 1, 2, 3, 4], run:
nnUNet_train 3d_fullres nnUNetTrainerV2_SpatialSingleAttentionUNet TASK_NAME_OR_ID FOLD -p nnUNetPlans_SpatialSingleAttentionUNet_v2.1
For FOLD in [0, 1, 2, 3, 4], run:
nnUNet_train 3d_fullres nnUNetTrainerV2_SpatialMultiAttentionUNet TASK_NAME_OR_ID FOLD -p nnUNetPlans_SpatialMultiAttentionUNet_v2.1
For FOLD in [0, 1, 2, 3, 4], run:
nnUNet_train 3d_fullres nnUNetTrainerV2_ChannelSpatialAttentionUNet TASK_NAME_OR_ID FOLD -p nnUNetPlans_ChannelSpatialAttentionUNet_v2.1
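The five per-fold invocations above follow a single pattern, so a small shell loop avoids retyping them. This dry-run sketch only echoes each command (TASK_NAME_OR_ID is the usual placeholder; remove the `echo` to actually launch training):

```shell
# Dry run: prints one nnUNet_train command per fold.
for FOLD in 0 1 2 3 4; do
  echo nnUNet_train 3d_fullres nnUNetTrainerV2_ResidualUNet TASK_NAME_OR_ID \
      "$FOLD" -p nnUNetPlans_ResidualUNet_v2.1
done
```

The same loop works for the other trainers by swapping the trainer and plans identifiers.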
Note: as discussed in the original nnUNet repository, one does not have to run training on all folds for inference to run (running full training on one fold only is sufficient).
Here we focus on inference with the 3D full-resolution configuration of each UNet architecture variant.
Run the following, depending on which architecture you wish to run inference with:
nnUNet_predict -i INPUT_FOLDER -o OUTPUT_FOLDER -t TASK_NAME_OR_ID -m 3d_fullres -p nnUNetPlans_ResidualUNet_v2.1 -tr nnUNetTrainerV2_ResidualUNet
nnUNet_predict -i INPUT_FOLDER -o OUTPUT_FOLDER -t TASK_NAME_OR_ID -m 3d_fullres -p nnUNetPlans_InceptionUNet_v2.1 -tr nnUNetTrainerV2_InceptionUNet
nnUNet_predict -i INPUT_FOLDER -o OUTPUT_FOLDER -t TASK_NAME_OR_ID -m 3d_fullres -p nnUNetPlans_DenseUNet_v2.1 -tr nnUNetTrainerV2_DenseUNet
nnUNet_predict -i INPUT_FOLDER -o OUTPUT_FOLDER -t TASK_NAME_OR_ID -m 3d_fullres -p nnUNetPlans_SpatialSingleAttentionUNet_v2.1 -tr nnUNetTrainerV2_SpatialSingleAttentionUNet
nnUNet_predict -i INPUT_FOLDER -o OUTPUT_FOLDER -t TASK_NAME_OR_ID -m 3d_fullres -p nnUNetPlans_SpatialMultiAttentionUNet_v2.1 -tr nnUNetTrainerV2_SpatialMultiAttentionUNet
nnUNet_predict -i INPUT_FOLDER -o OUTPUT_FOLDER -t TASK_NAME_OR_ID -m 3d_fullres -p nnUNetPlans_ChannelSpatialAttentionUNet_v2.1 -tr nnUNetTrainerV2_ChannelSpatialAttentionUNet
Note: For information on network ensembling refer to original nnUNet repository.
To add a custom architecture:
- In conv_blocks.py, implement the block code for each network stage.
- In nnunet/network_architecture/generic_modular_custom_UNet.py, define your custom Encoder and Decoder and create the network class.
- Add a trainer at nnunet/training/network_training/nnUNet_variants/architectural_variants/nnUNetTrainerV2_yourUNet.py.
- Add an experiment planner (see nnunet/experiment_planning/alternative_experiment_planning/experiment_planner_inception3DUNet_v21.py for an example); it generates the plan.pkl file used before training.
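As a rough starting point for the conv_blocks.py step, here is a toy residual stage in PyTorch. It is not this repository's implementation; the channel counts, normalization choice, and block structure are illustrative only:

```python
import torch
from torch import nn

class SimpleResidualStage(nn.Module):
    """Toy residual stage; illustrative only, not the repo's conv_blocks.py."""

    def __init__(self, in_channels, out_channels, num_blocks=2):
        super().__init__()
        # 1x1x1 projection so the residual addition has matching channels.
        self.proj = nn.Conv3d(in_channels, out_channels, kernel_size=1)
        self.blocks = nn.ModuleList(
            nn.Sequential(
                nn.Conv3d(out_channels, out_channels, kernel_size=3, padding=1),
                nn.InstanceNorm3d(out_channels),
                nn.LeakyReLU(inplace=True),
            )
            for _ in range(num_blocks)
        )

    def forward(self, x):
        x = self.proj(x)
        for block in self.blocks:
            x = x + block(x)  # residual connection
        return x

x = torch.randn(1, 1, 16, 16, 16)
y = SimpleResidualStage(1, 8)(x)
print(y.shape)  # torch.Size([1, 8, 16, 16, 16])
```

A real stage would also take the kernel and block-count values from the plan file, as in the trainer snippet shown earlier.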