We present UNet++, a new, more powerful architecture for medical image segmentation. Our architecture is essentially a deeply-supervised encoder-decoder network where the encoder and decoder sub-networks are connected through a series of nested, dense skip pathways. The re-designed skip pathways aim to reduce the semantic gap between the feature maps of the encoder and decoder sub-networks. We argue that the optimizer faces an easier learning task when the feature maps from the decoder and encoder networks are semantically similar.
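The nested skip pathways can be sketched as follows: each decoder node receives the concatenation of all preceding feature maps at the same level plus the upsampled output of the node one level below. The snippet is a minimal NumPy illustration of that wiring only; the `conv` and `up` functions are toy stand-ins (a channel mean and a nearest-neighbor repeat), not the convolution block and upsampling used in the actual network:

```python
import numpy as np

def conv(x):
    # Toy stand-in for the convolution block: a channel-mean projection,
    # so only the wiring (not the learning) is illustrated.
    return x.mean(axis=0, keepdims=True)

def up(x):
    # Toy stand-in for upsampling: repeat each spatial element twice.
    return np.repeat(x, 2, axis=-1)

def unetpp_node(same_level, below):
    # X^{i,j} = H([X^{i,0}, ..., X^{i,j-1}, U(X^{i+1,j-1})]):
    # concatenate all same-level predecessors with the upsampled
    # feature map from one level deeper, then apply the block.
    return conv(np.concatenate(same_level + [up(below)], axis=0))

# Level i has spatial length 4; level i+1 has length 2.
x_i0 = np.ones((1, 4))               # X^{i,0} from the encoder
x_below = np.ones((1, 2))            # X^{i+1,0}, one level deeper
x_i1 = unetpp_node([x_i0], x_below)  # X^{i,1} stays at level-i resolution
```

Later nodes at the same level simply pass a longer `same_level` list, e.g. `unetpp_node([x_i0, x_i1], ...)`, which is what makes the skip pathways dense.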
### Installation

```shell
# System dependency for OpenCV/mmcv image rendering
yum install mesa-libGL
pip3 install -r requirements.txt

# Build and install zlib 1.2.9 from source
wget http://www.zlib.net/fossils/zlib-1.2.9.tar.gz
tar xvf zlib-1.2.9.tar.gz
cd zlib-1.2.9/
./configure && make install
cd ..

# Build the mmcv extension and copy it in place
python3 setup.py build && cp build/lib.linux*/mmcv/_ext.cpython* mmcv
```

### Dataset preparation

```shell
mkdir -p data/
ln -s ${DRIVE_DATASET_PATH} data/
```
Download the DRIVE dataset from the file server or from the official DRIVE website, then convert it into the expected layout:

```shell
python3 tools/convert_datasets/drive.py /path/to/training.zip /path/to/test.zip
```
The available configs are as follows:

```none
# DRIVE
unet++_r34_40k_drive
```
### Training on multiple GPUs
```shell
bash train_dist.sh <config file> <num_gpus> [training args] # config files can be found in the configs directory
bash train_dist.sh configs/unet++/unet++_r34_40k_drive.py 8
```
The training arguments are as follows:

```none
# the dir to save logs and models
work-dir: str = None
# the checkpoint file to load weights from
load-from: str = None
# the checkpoint file to resume from
resume-from: str = None
# whether not to evaluate the checkpoint during training
no-validate: bool = False
# (deprecated, please use --gpu-id) number of gpus to use
# (only applicable to non-distributed training)
gpus: int = None
# (deprecated, please use --gpu-id) ids of gpus to use
# (only applicable to non-distributed training)
gpu-ids: int = None
# id of gpu to use (only applicable to non-distributed training)
gpu-id: int = 0
# random seed
seed: int = None
# whether or not to set different seeds for different ranks
diff_seed: bool = False
# whether to set deterministic options for the CUDNN backend
deterministic: bool = False
# --options is deprecated in favor of --cfg-options and will not be
# supported in version v0.22.0. Override some settings in the used
# config; key-value pairs in xxx=yyy format will be merged into the
# config file. If the value to be overwritten is a list, it should be
# like key="[a,b]" or key=a,b. Nested list/tuple values are also
# allowed, e.g. key="[(a,b),(c,d)]". Note that the quotation marks
# are necessary and that no white space is allowed.
options: str = None
# override some settings in the used config; key-value pairs in
# xxx=yyy format will be merged into the config file. If the value
# to be overwritten is a list, it should be like key="[a,b]" or
# key=a,b. Nested list/tuple values are also allowed, e.g.
# key="[(a,b),(c,d)]". Note that the quotation marks are necessary
# and that no white space is allowed.
cfg-options: str = None
# job launcher
launcher: str = "none"
# local rank
local_rank: int = 0
# distributed backend
dist_backend: str = None
# resume from the latest checkpoint automatically
auto-resume: bool = False
```
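`--cfg-options` merges `xxx=yyy` pairs into the loaded config, including nested list/tuple values such as `key="[(a,b),(c,d)]"`. The sketch below is a simplified stand-alone illustration of that parse-and-merge behavior, not the library's own code (the sample config keys are made up for the example):

```python
import ast

def parse_value(raw):
    # Try to interpret the value as a Python literal (numbers, lists,
    # tuples, booleans); otherwise fall back to a comma-split or a
    # plain string.
    try:
        return ast.literal_eval(raw)
    except (ValueError, SyntaxError):
        if "," in raw:
            return [parse_value(v) for v in raw.split(",")]
        return raw

def merge_cfg_options(cfg, options):
    # Merge "a.b.c=value" pairs into a nested dict, creating
    # intermediate dicts as needed (last write wins).
    for opt in options:
        key, _, raw = opt.partition("=")
        node = cfg
        parts = key.split(".")
        for p in parts[:-1]:
            node = node.setdefault(p, {})
        node[parts[-1]] = parse_value(raw)
    return cfg

cfg = {"model": {"decode_head": {"num_classes": 19}}}
merge_cfg_options(cfg, ["model.decode_head.num_classes=2",
                        "data.samples_per_gpu=4",
                        "crop_size=[(64,64)]"])
```

Because the values are parsed before merging, quoting matters on the command line: `key="[a,b]"` arrives as one token, whereas an unquoted list with spaces would be split by the shell.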
| Method | Crop Size | Lr schd | FPS (BI x 8) | mDice |
| --- | --- | --- | --- | --- |
| UNet++ | 64x64 | 40000 | 238.9 | 87.52 |
- Ref: https://mmsegmentation.readthedocs.io/en/latest/dataset_prepare.html#cityscapes
- Ref: https://github.com/open-mmlab/mmsegmentation