Multi-atlas labeling has proven to be an effective paradigm for creating segmentation algorithms from training data. These approaches have been extraordinarily successful for brain and cranial structures (e.g., our prior MICCAI workshops: MLSF’11, MAL’12, SATA’13). After the original challenges closed, the data continue to drive scientific innovation; 144 groups have registered for the 2012 challenge (brain only) and 115 groups for the 2013 challenge (brain/heart/canine leg). However, innovation in application outside of the head and to soft tissues has been more limited. This workshop will provide a snapshot of the current progress in the field through extended discussions and provide researchers an opportunity to characterize their methods on a newly created and released standardized dataset of abdominal anatomy on clinically acquired CT. The datasets will be freely available both during and after the challenge.
Under Institutional Review Board (IRB) supervision, 50 abdominal CT scans were randomly selected from a combination of an ongoing colorectal cancer chemotherapy trial and a retrospective ventral hernia study. The 50 scans were captured during the portal venous contrast phase with variable volume sizes (512 x 512 x 85 to 512 x 512 x 198) and fields of view (approx. 280 x 280 x 280 mm3 to 500 x 500 x 650 mm3). The in-plane resolution varies from 0.54 x 0.54 mm2 to 0.98 x 0.98 mm2, while the slice thickness ranges from 2.5 mm to 5.0 mm. The standard registration data was generated by NiftyReg.
Thirteen abdominal organs were manually labeled by two experienced undergraduate students and verified by a radiologist on a volumetric basis using the MIPAV software: spleen, right kidney, left kidney, gallbladder, esophagus, liver, stomach, aorta, inferior vena cava, portal vein and splenic vein, pancreas, right adrenal gland, and left adrenal gland.
In the TransUnet model, the labels are reduced to 8 organs: aorta, gallbladder, left kidney, right kidney, liver, pancreas, spleen, and stomach.
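For illustration only, the following NumPy sketch shows one way such a label remapping could be implemented; the exact label IDs and the mapping actually used by tools/prepare_abdomen.py are assumptions here and should be checked against the script.

# Hypothetical remapping sketch, NOT the actual prepare_abdomen.py logic.
# Assumes the original BTCV label IDs (1 spleen, 2 right kidney, 3 left kidney,
# 4 gallbladder, 6 liver, 7 stomach, 8 aorta, 11 pancreas); all other labels become background.
import numpy as np

BTCV_TO_TRANSUNET = {8: 1, 4: 2, 3: 3, 2: 4, 6: 5, 11: 6, 1: 7, 7: 8}

def remap_labels(label_volume):
    # Look-up table covering the 14 original label values (0-13).
    lut = np.zeros(14, dtype=label_volume.dtype)
    for src, dst in BTCV_TO_TRANSUNET.items():
        lut[src] = dst
    return lut[label_volume]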
To preprocess the Synapse data, first download RawData.zip from https://www.synapse.org/#!Synapse:syn3193805/files/ and put it in the MedicalSeg/data/abdomen directory, then run the tools/prepare_abdomen.py script:
mkdir data/abdomen
cp path/to/RawData.zip data/abdomen
python tools/prepare_abdomen.py
The dataset will be generated automatically. The resulting file structure is as follows:
abdomen
|--RawData.zip
|--abdomen_raw
│   ├── RawData
│   │   ├── RawData
│   │   │   ├── Training
│   │   │   │   ├── img
│   │   │   │   │   ├── img0001.nii.gz
│   │   │   │   │   └── ...
│   │   │   │   ├── label
│   │   │   │   │   ├── label0001.nii.gz
│   │   │   │   │   └── ...
|--abdomen_phase0
│   ├── images
│   │   ├── img0001-0001.npy
│   │   └── ...
│   ├── labels
│   │   ├── label0001-0001.npy
│   │   └── ...
│   ├── train_list.txt
│   └── val_list.txt
In the prepare_abdomen.py script, the default training/validation split ratio is 6:4. If you want to modify the split ratio, edit line 113 of the script and pass in the train_split parameter. For an 8:2 split, for example, the code is as follows:
self.train_val_split(train_split=0.8)
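For reference, a minimal sketch of what such a train/val split might do is shown below; the function signature, the list-file line format, and the directory arguments are assumptions for illustration, not the actual implementation in tools/prepare_abdomen.py.

# Illustrative sketch of a train/val split that writes train_list.txt and val_list.txt.
# The real logic lives in tools/prepare_abdomen.py and may differ in naming and format.
import os
import random

def train_val_split(image_dir, out_dir, train_split=0.8):
    names = sorted(os.listdir(image_dir))          # e.g. img0001-0001.npy, ...
    random.shuffle(names)
    n_train = int(len(names) * train_split)
    splits = {"train_list.txt": names[:n_train], "val_list.txt": names[n_train:]}
    for list_name, files in splits.items():
        with open(os.path.join(out_dir, list_name), "w") as f:
            for name in files:
                # Assumed line format: relative image path and label path separated by a space.
                f.write("images/{} labels/{}\n".format(name, name.replace("img", "label")))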
Then you can start training, for example with the following command for TransUnet:
python train.py --config configs/synapse/transunet_abdomen_224_224_1_14k_1e-2.yml --do_eval --save_interval 1000 --has_dataset_json False --is_save_data False --num_workers 4 --log_iters 10 --use_vdl
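After training finishes, you can evaluate a saved checkpoint. The command below follows MedicalSeg's usual val.py interface, but the model path is an assumption and should be adapted to your output directory:

python val.py --config configs/synapse/transunet_abdomen_224_224_1_14k_1e-2.yml --model_path output/best_model/model.pdparams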
InferenceHelper
InferenceHelper is an abstract base class that contains two methods, preprocess and postprocess. If you need to add a new inference helper for your own network, customize a class in the medicalseg/inference_helpers package and inherit the base InferenceHelper class.
class InferenceHelper2D(InferenceHelper):
The manager module exposes an INFERENCE_HELPERS variable of type ComponentManager. You can add your custom inference helper through its add_component method.
@manager.INFERENCE_HELPERS.add_component
class InferenceHelper2D(InferenceHelper):
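For intuition, here is a simplified, self-contained sketch of how a ComponentManager-style registry resolves a class registered with add_component; this is an illustration, not the actual medicalseg implementation.

# Simplified illustration of a component registry; NOT the real medicalseg ComponentManager.
class SimpleComponentManager:
    def __init__(self):
        self._components = {}

    def add_component(self, cls):
        # Used as a decorator: registers the class under its own name and returns it unchanged.
        self._components[cls.__name__] = cls
        return cls

    def __getitem__(self, name):
        return self._components[name]

INFERENCE_HELPERS = SimpleComponentManager()

@INFERENCE_HELPERS.add_component
class InferenceHelper2D:
    pass

# A config entry such as `type: InferenceHelper2D` can then be resolved by name:
helper_cls = INFERENCE_HELPERS["InferenceHelper2D"]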
You also need to import your class in the __init__.py file of the medicalseg/inference_helpers package.
# in medicalseg/inference_helpers/__init__.py file
from .inference_helper_2d import InferenceHelper2D
def preprocess(self, cfg, imgs_path, batch_size, batch_id):
    # Collect every transformed slice of the current batch.
    im_list = []
    for img in imgs_path[batch_id:batch_id + batch_size]:
        # Each .npy file holds a (D, H, W) volume; add a channel axis -> (D, 1, H, W).
        imgs = np.load(img)
        imgs = imgs[:, np.newaxis, :, :]
        # Apply the configured transforms slice by slice.
        for i in range(imgs.shape[0]):
            im = imgs[i]
            im = cfg.transforms(im)[0]
            im_list.append(im)
    # Stack all slices into a single array for the inference engine.
    img = np.concatenate(im_list)
    return img

def postprocess(self, results):
    # Convert per-class logits to a label map, then restore the batch axis.
    results = np.argmax(results, axis=1)
    results = results[np.newaxis, :, :, :, :]
    return results
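For orientation, a rough sketch of how a deployment script might call these two hooks around the exported model follows; run_model below is a hypothetical placeholder for the actual Paddle inference call, not a MedicalSeg API.

# Illustrative flow only; `run_model` is a hypothetical stand-in for the Paddle inference call.
helper = InferenceHelper2D()
data = helper.preprocess(cfg, imgs_path, batch_size=1, batch_id=0)  # slices ready for the network
logits = run_model(data)               # forward pass with the exported inference model
pred = helper.postprocess(logits)      # per-voxel class labels with a restored batch axis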
# in the yml config file
export:
inference_helper:
type: InferenceHelper2D
Chen, Jieneng and Lu, Yongyi and Yu, Qihang and Luo, Xiangde and Adeli, Ehsan and Wang, Yan and Lu, Le and Yuille, Alan L. and Zhou, Yuyin. "TransUNet: Transformers Make Strong Encoders for Medical Image Segmentation." arXiv preprint arXiv:2102.04306, 2021.
Backbone | Resolution | lr | Training Iters | Dice | Links |
---|---|---|---|---|---|
R50-ViT-B_16 | 224x224 | 1e-2 | 13950 | 81.05% | model / log / vdl |
Cao, Hu and Wang, Yueyue and Chen, Joy and Jiang, Dongsheng and Zhang, Xiaopeng and Tian, Qi and Wang, Manning. "Swin-Unet: Unet-like Pure Transformer for Medical Image Segmentation." arXiv preprint arXiv:2105.05537, 2021.
Backbone | Resolution | lr | Training Iters | Dice | Links |
---|---|---|---|---|---|
SwinTransformer-tiny | 224x224 | 5e-2 | 14000 | 82.062% | model / log / vdl |
PaddleSeg is a high-performance image segmentation development kit based on PaddlePaddle, covering the complete image segmentation workflow end to end, from training to deployment.
https://github.com/PaddlePaddle/PaddleSeg