#33 add file

Merged
zzy merged 1 commit from zzy/u-net_Segmentation:master into master 1 year ago
  1. +6 -0      SkipDenseNet3D/.idea/vcs.xml
  2. +174 -0    SkipDenseNet3D/README.md
  3. +3493 -0   SkipDenseNet3D/deploy_3d_denseseg.prototxt
  4. +29484 -0  SkipDenseNet3D/log/3D_DenseSeg.log
  5. +319 -0    SkipDenseNet3D/make_3D_DenseSeg.py
  6. +119 -0    SkipDenseNet3D/plot_trainingloss.py
  7. +124 -0    SkipDenseNet3D/prepare_hdf5_cutedge.py
  8. +1 -0      SkipDenseNet3D/result_3d_dense_seg.csv
  9. +6 -0      SkipDenseNet3D/run_train.sh
  10. +151 -0   SkipDenseNet3D/seg_deploy.py
  11. BIN       SkipDenseNet3D/snapshot/3d_denseseg_iseg_iter_168000.caffemodel
  12. BIN       SkipDenseNet3D/snapshot/3d_denseseg_iseg_iter_168000.solverstate
  13. BIN       SkipDenseNet3D/snapshot/3d_denseseg_iseg_iter_200000.caffemodel
  14. BIN       SkipDenseNet3D/snapshot/3d_denseseg_iseg_iter_200000.solverstate
  15. +16 -0    SkipDenseNet3D/solver.prototxt
  16. +3499 -0  SkipDenseNet3D/test_3d_denseseg.prototxt
  17. +3499 -0  SkipDenseNet3D/train_3d_denseseg.prototxt
  18. +9 -0     SkipDenseNet3D/train_list.txt

+ 6  - 0   SkipDenseNet3D/.idea/vcs.xml

@@ -0,0 +1,6 @@
<?xml version="1.0" encoding="UTF-8"?>
<project version="4">
  <component name="VcsDirectoryMappings">
    <mapping directory="$PROJECT_DIR$" vcs="Git" />
  </component>
</project>

+ 174  - 0   SkipDenseNet3D/README.md

@@ -0,0 +1,174 @@
# 3D_DenseSeg: 3D Densely Convolutional Networks for Volumetric Segmentation
By Toan Duc Bui, Jitae Shin, Taesup Moon

This is the implementation of our method for the MICCAI Grand Challenge on 6-month Infant Brain MRI Segmentation (iSeg-2017), held in conjunction with MICCAI 2017.

### Update (Aug. 14, 2019): We have also released a PyTorch implementation of the journal version at https://github.com/tbuikr/3D-SkipDenseSeg

### Citation
```
@article{bui2019skip,
title={Skip-connected 3D DenseNet for volumetric infant brain MRI segmentation},
author={Bui, Toan Duc and Shin, Jitae and Moon, Taesup},
journal={Biomedical Signal Processing and Control},
volume={54},
pages={101613},
year={2019},
publisher={Elsevier}
}
```

Journal version: https://www.sciencedirect.com/science/article/pii/S1746809419301946

Conference version: https://arxiv.org/abs/1709.03199

### Introduction
6-month infant brain MRI segmentation aims to segment the brain into white matter (WM), gray matter (GM), and cerebrospinal fluid (CSF). It is a difficult task due to the large overlap between tissue intensities and the low image contrast. We address the problem with a very deep 3D convolutional neural network. Our method achieved the top performance in 6 performance metrics.

### Dice Coefficient (DC) for the 9th subject
|             | CSF    | GM     | WM     | Average |
|-------------|:------:|:------:|:------:|:-------:|
| 3D-DenseSeg | 94.74% | 91.61% | 91.30% | 92.55%  |

### Citation (conference version)
```
@article{bui20173d,
title={3D Densely Convolution Networks for Volumetric Segmentation},
author={Bui, Toan Duc and Shin, Jitae and Moon, Taesup},
journal={arXiv preprint arXiv:1709.03199},
year={2017}
}
```

### Requirements:
- 3D CAFFE (see below), Python 2.7, Ubuntu 14.04, cuDNN 5.1, CUDA 8.0
- Titan X Pascal (12 GB)

### Installation
- Step 1: Download the source code
```
git clone https://github.com/tbuikr/3D_DenseSeg.git
cd 3D_DenseSeg
```
- Step 2: Download the dataset at `http://iseg2017.web.unc.edu/download/` and change the dataset path `data_path` and the output path `target_path` in the file `prepare_hdf5_cutedge.py`
```
data_path = '/path/to/your/dataset/'
target_path = '/path/to/your/save/hdf5 folder/'
```

- Step 3: Generate the HDF5 dataset

```
python prepare_hdf5_cutedge.py
```
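As a quick sanity check after this step, you can open one of the generated files and confirm the Caffe blob layout (num × channel × depth × height × width, with T1 and T2 as the two data channels and labels 0–3). This is only an illustrative sketch; the file name below assumes the default naming in `prepare_hdf5_cutedge.py` and your own `target_path`:

```
import h5py
import numpy as np

# Hypothetical path: replace with your own target_path from prepare_hdf5_cutedge.py
h5_file = '/path/to/your/save/hdf5 folder/train_iseg_norm_cutedge_weight_1.h5'

with h5py.File(h5_file, 'r') as f:
    data = f['data'][...]    # expected shape: (1, 2, D, H, W) -> normalized T1 and T2
    label = f['label'][...]  # expected shape: (1, 1, D, H, W) -> 0=BG, 1=CSF, 2=GM, 3=WM
    print('data:', data.shape, data.dtype)
    print('label:', label.shape, label.dtype, 'classes:', np.unique(label))
```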

- Step 4: Run training

```
./run_train.sh
```

- Step 5: Generate the score map and segmentation image. You have to change the paths in `seg_deploy.py` as follows:
```
data_path = '/path/to/your/dataset/'
caffe_root = '/path/to/your/caffe/build/tools/caffe/'  # (e.g. '/home/toanhoi/caffe/build/tools/caffe/')
```

And run
```
python seg_deploy.py
```
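For orientation, `seg_deploy.py` slides a 64×64×64 patch over the whole volume with overlapping steps and lets every patch cast one vote per voxel and class; the final label is the per-voxel majority. A condensed sketch of that voting loop is shown below (simplified from `seg_deploy.py`; it assumes `net` and the preprocessed `inputs` of shape (1, 2, D, H, W) are already set up as in the script, and it omits the count normalization, which does not change the per-voxel argmax):

```
import numpy as np

patch = [64, 64, 64]           # deploy input size
steps = [16, 8, 16]            # smaller steps -> more overlapping votes per voxel
num_class = 4
votes = np.zeros((num_class,) + inputs.shape[2:])

centers = [np.arange(p // 2, dim - p // 2 + s, s)
           for p, dim, s in zip(patch, inputs.shape[2:], steps)]
for d in centers[0]:
    for h in centers[1]:
        for w in centers[2]:
            box = (slice(d - patch[0] // 2, d + patch[0] // 2),
                   slice(h - patch[1] // 2, h + patch[1] // 2),
                   slice(w - patch[2] // 2, w + patch[2] // 2))
            net.blobs['data'].data[...] = inputs[(slice(None), slice(None)) + box]
            net.forward()
            pred = net.blobs['softmax'].data[0].argmax(axis=0)  # (64, 64, 64) class ids
            for c in range(num_class):
                votes[(c,) + box] += (pred == c)                # one vote per voxel per patch

segmentation = votes.argmax(axis=0)  # majority vote over all overlapping patches
```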

### 3D CAFFE
For CAFFE, we use 3D UNet CAFFE with minor modifications. First, download 3D UNet CAFFE at

`https://lmb.informatik.uni-freiburg.de/resources/opensource/unet.en.html`

and run the installation as described in its README file. Then we modify the HDF5Data layer so that it can randomly crop patches, based on the code at `https://github.com/yulequan/3D-Caffe`.
You can download that code by
```
git clone https://github.com/yulequan/3D-Caffe/
cd 3D-Caffe
git checkout 3D-Caffe
cd ../
```

After downloading both source trees, we have two code folders: `3D-Caffe` and `caffe` (the 3D UNet CAFFE). Copy the HDF5 data layer files from `3D-Caffe` into `caffe` with the following commands:

```
cp ./3D-Caffe/src/caffe/layers/hdf5_data_layer.cpp ./caffe/src/caffe/layers/
cp ./3D-Caffe/src/caffe/layers/hdf5_data_layer.cu ./caffe/src/caffe/layers/
cp ./3D-Caffe/include/caffe/layers/hdf5_data_layer.hpp ./caffe/include/caffe/layers/hdf5_data_layer.hpp
```

Then add the following lines to the `message TransformationParameter` block of `caffe.proto` in `./caffe/src/caffe/proto` (3D UNet CAFFE):
```
optional uint32 crop_size_w = 8 [default = 0];
optional uint32 crop_size_h = 9 [default = 0];
optional uint32 crop_size_l = 10 [default = 0];
```
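For reference, these are the crop parameters that `make_3D_DenseSeg.py` passes through `transform_param` when it defines the HDF5 data layer; a minimal pycaffe sketch (the source path is a placeholder) looks like this:

```
import caffe
from caffe import layers as L

# Mirrors the HDF5Data definition in make_3D_DenseSeg.py: random 64^3 crops
# are requested via the new crop_size_l / crop_size_h / crop_size_w fields.
patch_size = [64, 64, 64]
data, label = L.HDF5Data(name="data", batch_size=4, source='./train_list.txt',
                         ntop=2, shuffle=True,
                         transform_param=dict(crop_size_l=patch_size[0],
                                              crop_size_h=patch_size[1],
                                              crop_size_w=patch_size[2]),
                         include={'phase': caffe.TRAIN})
```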

Add the following code to `./caffe/include/caffe/filler.hpp`:

```
/**
 * 3D bilinear filler
 */
template <typename Dtype>
class BilinearFiller_3D : public Filler<Dtype> {
 public:
  explicit BilinearFiller_3D(const FillerParameter& param)
      : Filler<Dtype>(param) {}
  virtual void Fill(Blob<Dtype>* blob) {
    CHECK_EQ(blob->num_axes(), 5) << "Blob must be 5 dim.";
    CHECK_EQ(blob->shape(-1), blob->shape(-2)) << "Filter must be square";
    CHECK_EQ(blob->shape(-2), blob->shape(-3)) << "Filter must be square";
    Dtype* data = blob->mutable_cpu_data();

    int f = ceil(blob->shape(-1) / 2.);
    float c = (2 * f - 1 - f % 2) / (2. * f);
    for (int i = 0; i < blob->count(); ++i) {
      float x = i % blob->shape(-1);
      float y = (i / blob->shape(-1)) % blob->shape(-2);
      float z = (i / (blob->shape(-1) * blob->shape(-2))) % blob->shape(-3);
      data[i] = (1 - fabs(x / f - c)) * (1 - fabs(y / f - c)) * (1 - fabs(z / f - c));
    }

    CHECK_EQ(this->filler_param_.sparse(), -1)
        << "Sparsity not supported by this Filler.";
  }
};
```

and add the following branch to the `GetFiller(const FillerParameter& param)` function in the same file:

```
else if (type == "bilinear_3D") {
  return new BilinearFiller_3D<Dtype>(param);
}
```
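To see what the `bilinear_3D` filler actually produces, here is a small NumPy sketch (not part of the repository) that reproduces the same weight formula for a cubic k×k×k kernel; it shows that the deconvolution layers initialized with `weight_filler=dict(type='bilinear_3D')` start out as plain trilinear upsampling:

```
import numpy as np

def bilinear_kernel_3d(k):
    """NumPy equivalent of the weights BilinearFiller_3D writes for a k x k x k filter."""
    f = int(np.ceil(k / 2.))
    c = (2 * f - 1 - f % 2) / (2. * f)
    z, y, x = np.meshgrid(np.arange(k), np.arange(k), np.arange(k), indexing='ij')
    return ((1 - np.abs(x / float(f) - c)) *
            (1 - np.abs(y / float(f) - c)) *
            (1 - np.abs(z / float(f) - c)))

# The 4x4x4 kernel used by the x2 deconvolution separates into [0.25, 0.75, 0.75, 0.25]
# per axis (sum = 2 = stride), i.e. standard trilinear interpolation weights.
print(bilinear_kernel_3d(4)[1, 1, 1])  # 0.421875 = 0.75 ** 3
```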

Finally, recompile 3D UNet CAFFE (uncomment `USE_CUDNN := 1`) and you can use my prototxt files. Please cite these papers when you use the CAFFE code.

### Note
- If you want to regenerate the network prototxt files, you have to change the `caffe_root` path:
```
caffe_root = '/path/to/your/caffe/'  # (e.g. '/home/toanhoi/caffe/'; the script appends 'python' to this path)
```
And run
```
python make_3D_DenseSeg.py
```
- If you get the error `AttributeError: 'LayerParameter' object has no attribute 'shuffle'` when running `python make_3D_DenseSeg.py`, you can fix it by replacing line 35 of `net_spec.py` with:
```
#param_names = [s for s in dir(layer) if s.endswith('_param')]
param_names = [f.name for f in layer.DESCRIPTOR.fields if f.name.endswith('_param')]
```
- To plot the training loss during training, run

```
python plot_trainingloss.py ./log/3D_DenseSeg.log
```


+ 3493  - 0   SkipDenseNet3D/deploy_3d_denseseg.prototxt (file diff suppressed because it is too large)


+ 29484  - 0   SkipDenseNet3D/log/3D_DenseSeg.log (file diff suppressed because it is too large)


+ 319  - 0   SkipDenseNet3D/make_3D_DenseSeg.py

@@ -0,0 +1,319 @@
from __future__ import print_function
caffe_root = '/home/toanhoi/caffe/'
import sys
sys.path.insert(0, caffe_root + 'python')
import caffe
import math
from caffe import layers as L
from caffe.proto import caffe_pb2

def bn_relu_conv_bn_relu(bottom, nout, dropout, split):

    if split == 'train':
        use_global_stats = False
    else:
        use_global_stats = True

    batch_norm1 = L.BatchNorm(bottom, batch_norm_param=dict(use_global_stats=use_global_stats), in_place=False,
                              param=[dict(lr_mult=0, decay_mult=0), dict(lr_mult=0, decay_mult=0),
                                     dict(lr_mult=0, decay_mult=0)])
    scale1 = L.Scale(batch_norm1, bias_term=True, in_place=True, filler=dict(value=1), bias_filler=dict(value=0))
    relu1 = L.ReLU(scale1, in_place=True)
    conv1 = L.Convolution(relu1, kernel_size=[1, 1, 1], pad=[0, 0, 0], stride=[1, 1, 1],
                          param=[dict(lr_mult=1, decay_mult=1)], bias_term=False,
                          num_output=nout * 4, axis=1, weight_filler=dict(type='msra'),
                          bias_filler=dict(type='constant'))

    batch_norm2 = L.BatchNorm(conv1, batch_norm_param=dict(use_global_stats=use_global_stats), in_place=False,
                              param=[dict(lr_mult=0, decay_mult=0), dict(lr_mult=0, decay_mult=0),
                                     dict(lr_mult=0, decay_mult=0)])
    scale2 = L.Scale(batch_norm2, bias_term=True, in_place=True, filler=dict(value=1), bias_filler=dict(value=0))
    relu2 = L.ReLU(scale2, in_place=True)
    conv2 = L.Convolution(relu2, param=[dict(lr_mult=1, decay_mult=1)], bias_term=False,
                          axis=1, num_output=nout, pad=[1, 1, 1], kernel_size=[3, 3, 3], stride=[1, 1, 1],
                          weight_filler=dict(type='msra'), bias_filler=dict(type='constant'))

    if dropout > 0:
        conv2 = L.Dropout(conv2, dropout_ratio=dropout)
    return conv2


def add_layer(bottom, num_filter, dropout, split):
    conv = bn_relu_conv_bn_relu(bottom, nout=num_filter, dropout=dropout, split=split)
    concate = L.Concat(bottom, conv, axis=1)
    return concate


def transition(bottom, num_filter, split):

    if split == 'train':
        use_global_stats = False
    else:
        use_global_stats = True

    batch_norm1 = L.BatchNorm(bottom, batch_norm_param=dict(use_global_stats=use_global_stats), in_place=False,
                              param=[dict(lr_mult=0, decay_mult=0), dict(lr_mult=0, decay_mult=0),
                                     dict(lr_mult=0, decay_mult=0)])
    scale1 = L.Scale(batch_norm1, bias_term=True, in_place=True, filler=dict(value=1), bias_filler=dict(value=0))
    relu1 = L.ReLU(scale1, in_place=True)
    conv1 = L.Convolution(relu1, param=[dict(lr_mult=1, decay_mult=1)], bias_term=False,
                          axis=1, num_output=num_filter, pad=[0, 0, 0], kernel_size=[1, 1, 1], stride=[1, 1, 1],
                          weight_filler=dict(type='msra'), bias_filler=dict(type='constant'))

    batch_norm2 = L.BatchNorm(conv1, batch_norm_param=dict(use_global_stats=use_global_stats), in_place=False,
                              param=[dict(lr_mult=0, decay_mult=0), dict(lr_mult=0, decay_mult=0),
                                     dict(lr_mult=0, decay_mult=0)])
    scale2 = L.Scale(batch_norm2, bias_term=True, in_place=True, filler=dict(value=1), bias_filler=dict(value=0))
    relu2 = L.ReLU(scale2, in_place=True)

    conv_down = L.Convolution(relu2, param=[dict(lr_mult=1, decay_mult=1)], bias_term=False,
                              axis=1, num_output=num_filter, pad=[0, 0, 0], kernel_size=[2, 2, 2], stride=2,
                              weight_filler=dict(type='msra'), bias_filler=dict(type='constant'))

    # pooling = L.Pooling(conv1, type="Pooling", pool=P.Pooling.MAX, kernel_size=2, stride=2, engine=1)
    return conv_down

# first_output -- #channels before entering the first dense block, set it to be comparable to growth_rate
# growth_rate -- growth rate
# dropout -- set to 0 to disable dropout, non-zero number to set dropout rate
def densenet(split, batch_size=4, first_output=32, growth_rate=16, dropout=0.2):
    source_train_path = './train_list.txt'
    source_test_path = './test_list.txt'
    patch_size = [64, 64, 64]
    n = caffe.NetSpec()
    num_classes = 4
    reduction = 0.5
    N = [4, 4, 4, 4]
    if split == 'train':
        n.data, n.label = L.HDF5Data(name="data", batch_size=batch_size, source=source_train_path, ntop=2, shuffle=True,
                                     transform_param=dict(crop_size_l=patch_size[0], crop_size_h=patch_size[1],
                                                          crop_size_w=patch_size[2]), include={'phase': caffe.TRAIN})
    elif split == 'val':
        n.data, n.label = L.HDF5Data(name="data", batch_size=batch_size, source=source_test_path, ntop=2, shuffle=True,
                                     transform_param=dict(crop_size_l=patch_size[0], crop_size_h=patch_size[1],
                                                          crop_size_w=patch_size[2]),
                                     include={'phase': caffe.TEST})
    else:
        n.data = L.Input(name="data", ntop=1, input_param={'shape': {'dim': [1, 2, patch_size[0], patch_size[1], patch_size[2]]}})

    nchannels = first_output

    # First layers
    n.conv1a = L.Convolution(n.data, param=[dict(lr_mult=1, decay_mult=1), dict(lr_mult=2, decay_mult=0)],
                             axis=1, num_output=nchannels, pad=[1, 1, 1], kernel_size=[3, 3, 3], stride=[1, 1, 1],
                             weight_filler=dict(type='msra'), bias_filler=dict(type='constant', value=-0.1))

    if split == 'train':
        use_global_stats = False
    else:
        use_global_stats = True

    n.bnorm1a = L.BatchNorm(n.conv1a, batch_norm_param=dict(use_global_stats=use_global_stats),
                            param=[dict(lr_mult=0, decay_mult=0), dict(lr_mult=0, decay_mult=0),
                                   dict(lr_mult=0, decay_mult=0)], in_place=False)

    n.scale1a = L.Scale(n.bnorm1a, in_place=True, bias_term=True, filler=dict(value=1), bias_filler=dict(value=0))
    n.relu1a = L.ReLU(n.bnorm1a, in_place=True)


    # conv 1b, after BN set bias_term=false
    n.conv1b = L.Convolution(n.relu1a, param=[dict(lr_mult=1, decay_mult=1)], bias_term=False,
                             axis=1, num_output=nchannels, pad=[1, 1, 1], kernel_size=[3, 3, 3], stride=[1, 1, 1],
                             weight_filler=dict(type='msra'), bias_filler=dict(type='constant'))

    n.bnorm1b = L.BatchNorm(n.conv1b, batch_norm_param=dict(use_global_stats=use_global_stats),
                            param=[dict(lr_mult=0, decay_mult=0), dict(lr_mult=0, decay_mult=0),
                                   dict(lr_mult=0, decay_mult=0)], in_place=False)
    n.scale1b = L.Scale(n.bnorm1b, in_place=True, bias_term=True, filler=dict(value=1), bias_filler=dict(value=0))
    n.relu1b = L.ReLU(n.bnorm1b, in_place=True)

    n.conv1c = L.Convolution(n.relu1b, param=[dict(lr_mult=1, decay_mult=1)], bias_term=False,
                             axis=1, num_output=nchannels, pad=[1, 1, 1], kernel_size=[3, 3, 3], stride=[1, 1, 1],
                             weight_filler=dict(type='msra'), bias_filler=dict(type='constant'))
    print(nchannels)

    # model = L.Pooling(n.conv1c, type="Pooling", pool=P.Pooling.MAX, kernel_size=2, stride=2, engine=1)

    n.bnorm1c = L.BatchNorm(n.conv1c, batch_norm_param=dict(use_global_stats=use_global_stats),
                            param=[dict(lr_mult=0, decay_mult=0), dict(lr_mult=0, decay_mult=0),
                                   dict(lr_mult=0, decay_mult=0)], in_place=False)
    n.scale1c = L.Scale(n.bnorm1c, in_place=True, bias_term=True, filler=dict(value=1), bias_filler=dict(value=0))
    n.relu1c = L.ReLU(n.bnorm1c, in_place=True)

    model = L.Convolution(n.relu1c, param=[dict(lr_mult=1, decay_mult=1)], bias_term=False,
                          axis=1, num_output=nchannels, pad=[0, 0, 0], kernel_size=[2, 2, 2], stride=2,
                          weight_filler=dict(type='msra'), bias_filler=dict(type='constant'))
    n.__setattr__("Conv_down_1", model)

    # ===============Dense block 2=====================
    for i in range(N[0]):
        if (i == 0):
            concat = add_layer(model, growth_rate, dropout, split)
            n.__setattr__("Concat_%d" % (i + 1), concat)
            nchannels += growth_rate
            continue
        concat = add_layer(concat, growth_rate, dropout, split)
        n.__setattr__("Concat_%d" % (i + 1), concat)
        nchannels += growth_rate
    # ===============End dense block 2=================
    print(nchannels)
    # ===============Deconvolution layer 2==============
    model_deconv_x2 = L.Deconvolution(concat, param=[dict(lr_mult=0.1, decay_mult=1)],
                                      convolution_param=dict(kernel_size=[4, 4, 4], stride=[2, 2, 2], num_output=num_classes,
                                                             pad=[1, 1, 1], group=num_classes,
                                                             weight_filler=dict(type='bilinear_3D'),
                                                             bias_term=False))
    n.__setattr__("Deconvolution_%d" % (N[0] + 1), model_deconv_x2)
    # ===============End Deconvolution layer 2==============

    # ===============Transition layer 2=================
    model = transition(concat, int(math.floor(nchannels * reduction)), split)
    n.__setattr__("Conv_down_%d" % (N[0] + 1), model)
    nchannels = int(math.floor(nchannels * reduction))
    # ===============End Transition layer 2==============

    # ===============Dense block 3=====================
    for i in range(N[1]):
        if (i == 0):
            concat = add_layer(model, growth_rate, dropout, split)
            n.__setattr__("Concat_%d" % (N[1] + i + 2), concat)
            nchannels += growth_rate
            continue
        concat = add_layer(concat, growth_rate, dropout, split)
        n.__setattr__("Concat_%d" % (N[1] + i + 2), concat)
        nchannels += growth_rate
    # ===============End dense block 3=================
    print(nchannels)
    # ===============Deconvolution layer 3==============
    model_deconv_x4 = L.Deconvolution(concat, param=[dict(lr_mult=0.1, decay_mult=1)],
                                      convolution_param=dict(kernel_size=[6, 6, 6], stride=[4, 4, 4], num_output=num_classes,
                                                             pad=[1, 1, 1], group=num_classes,
                                                             weight_filler=dict(type='bilinear_3D'),
                                                             bias_term=False))
    n.__setattr__("Deconvolution_%d" % (N[0] + N[1] + 2), model_deconv_x4)
    # ==============Transition layer 3=================
    model = transition(concat, int(math.floor(nchannels * reduction)), split)
    n.__setattr__("Conv_down_%d" % (N[0] + N[1] + 2), model)
    # ===============End Transition layer 3==============
    nchannels = int(math.floor(nchannels * reduction))

    # ===============Dense block 4=====================
    for i in range(N[2]):
        if (i == 0):
            concat = add_layer(model, growth_rate, dropout, split)
            n.__setattr__("Concat_%d" % (N[0] + N[1] + i + 3), concat)
            nchannels += growth_rate
            continue
        concat = add_layer(concat, growth_rate, dropout, split)
        n.__setattr__("Concat_%d" % (N[0] + N[1] + i + 3), concat)
        nchannels += growth_rate
    # ===============End dense block 4=================

    # ===============Transition layer 4=================
    print(nchannels)

    # ===============Deconvolution layer 4==============
    model_deconv_x8 = L.Deconvolution(concat, param=[dict(lr_mult=0.1, decay_mult=1)],
                                      convolution_param=dict(kernel_size=[10, 10, 10], stride=[8, 8, 8], num_output=num_classes,
                                                             pad=[1, 1, 1], group=num_classes,
                                                             weight_filler=dict(type='bilinear_3D'),
                                                             bias_term=False))
    n.__setattr__("Deconvolution_%d" % (N[0] + N[1] + N[2] + 3), model_deconv_x8)
    # ===============End Deconvolution layer 4==============

    # ===============Transition layer 4=================
    model = transition(concat, int(math.floor(nchannels * reduction)), split)
    n.__setattr__("Conv_down_%d" % (N[0] + N[1] + N[2] + 3), model)
    nchannels = int(math.floor(nchannels * reduction))
    # ===============End Transition layer 4==============

    # ===============Dense block 5=====================
    for i in range(N[3]):
        if (i == 0):
            concat = add_layer(model, growth_rate, dropout, split)
            n.__setattr__("Concat_%d" % (N[0] + N[1] + N[2] + N[3] + i + 3), concat)
            nchannels += growth_rate
            continue
        concat = add_layer(concat, growth_rate, dropout, split)
        n.__setattr__("Concat_%d" % (N[0] + N[1] + N[2] + N[3] + i + 3), concat)
        nchannels += growth_rate
    # ===============End dense block 5=================
    print(nchannels)

    # ===============Deconvolution layer 5==============
    model_deconv_x16 = L.Deconvolution(concat, param=[dict(lr_mult=0.1, decay_mult=1)],
                                       convolution_param=dict(kernel_size=[18, 18, 18], stride=[16, 16, 16],
                                                              num_output=num_classes,
                                                              pad=[1, 1, 1], group=num_classes,
                                                              weight_filler=dict(type='bilinear_3D'),
                                                              bias_term=False))
    n.__setattr__("Deconvolution_%d" % (N[0] + N[1] + N[2] + N[3] + 4), model_deconv_x16)
    model = L.Concat(n.conv1c, model_deconv_x2, model_deconv_x4, model_deconv_x8, model_deconv_x16,
                     axis=1)

    n.bnorm_concat = L.BatchNorm(model, batch_norm_param=dict(use_global_stats=use_global_stats),
                                 param=[dict(lr_mult=0, decay_mult=0), dict(lr_mult=0, decay_mult=0),
                                        dict(lr_mult=0, decay_mult=0)], in_place=False)
    n.scale_concat = L.Scale(n.bnorm_concat, in_place=True, bias_term=True, filler=dict(value=1), bias_filler=dict(value=0))
    n.relu_concat = L.ReLU(n.scale_concat, in_place=True)
    model_conv_concate = L.Convolution(n.relu_concat,
                                       param=[dict(lr_mult=1, decay_mult=1), dict(lr_mult=2, decay_mult=0)],
                                       axis=1, num_output=num_classes, pad=[0, 0, 0], kernel_size=[1, 1, 1],
                                       weight_filler=dict(type='msra'))

    if (split == 'train'):
        n.loss = L.SoftmaxWithLoss(model_conv_concate, n.label)

    elif (split == 'val'):
        n.loss = L.SoftmaxWithLoss(model_conv_concate, n.label)
    else:
        n.softmax = L.Softmax(model_conv_concate, ntop=1, in_place=False)
    return n.to_proto()


def make_net():
    with open('train_3d_denseseg.prototxt', 'w') as f:
        print(str(densenet('train', batch_size=4)), file=f)

    with open('test_3d_denseseg.prototxt', 'w') as f:
        print(str(densenet('val', batch_size=4)), file=f)
    with open('deploy_3d_denseseg.prototxt', 'w') as f:
        print(str(densenet('deploy', batch_size=0)), file=f)

def make_solver():
    s = caffe_pb2.SolverParameter()
    s.random_seed = 0xCAFFE

    s.train_net = 'train_3d_denseseg.prototxt'

    s.max_iter = 200000
    s.type = 'Adam'
    s.display = 20

    s.base_lr = 0.0002
    # s.power = 0.9

    s.momentum = 0.97
    s.weight_decay = 0.0005
    s.average_loss = 20
    s.iter_size = 1
    s.lr_policy = 'step'
    s.stepsize = 50000
    s.gamma = 0.1
    s.snapshot_prefix = './snapshot/3d_denseseg_iseg'
    s.snapshot = 2000
    s.solver_mode = caffe_pb2.SolverParameter.GPU

    solver_path = 'solver.prototxt'
    with open(solver_path, 'w') as f:
        f.write(str(s))

if __name__ == '__main__':
    make_net()
    make_solver()










+ 119  - 0   SkipDenseNet3D/plot_trainingloss.py

@@ -0,0 +1,119 @@
import os
import sys
import numpy as np
import matplotlib.pyplot as plt
import math
import pylab
import sys
import argparse
import re
from pylab import figure, show, legend, ylabel

from mpl_toolkits.axes_grid1 import host_subplot

if __name__ == "__main__":
    plt.ion()
    host = host_subplot(111)
    host.set_xlabel("Iterations")
    host.set_ylabel("Loss")
    plt.subplots_adjust(right=0.75)


    while True:
        parser = argparse.ArgumentParser(description='makes a plot from Caffe output')
        parser.add_argument('output_file', help='file of captured stdout and stderr')
        args = parser.parse_args()

        f = open(args.output_file, 'r')

        training_iterations = []
        training_loss = []

        test_iterations = []
        test_accuracy = []
        test_loss = []

        check_test = False
        check_test2 = False
        for line in f:

            # if check_test:
            #     #test_accuracy.append(float(line.strip().split(' = ')[-1]))
            #     check_test = False
            #     check_test2 = True
            # elif check_test2:
            if 'Test net output' in line and 'loss = ' in line:
                # print line
                # print line.strip().split(' ')
                test_loss.append(float(line.strip().split(' ')[-2]))
                check_test2 = False
            # else:
            #     test_loss.append(0)
            #     check_test2 = False

            if '] Iteration ' in line and 'loss = ' in line:
                arr = re.findall(r'ion \b\d+\b,', line)
                training_iterations.append(int(arr[0].strip(',')[4:]))
                training_loss.append(float(line.strip().split(' = ')[-1]))

            if '] Iteration ' in line and 'Testing net' in line:
                arr = re.findall(r'ion \b\d+\b,', line)
                test_iterations.append(int(arr[0].strip(',')[4:]))
                check_test = True

        print 'train iterations len: ', len(training_iterations)
        print 'train loss len: ', len(training_loss)
        print 'test loss len: ', len(test_loss)
        print 'test iterations len: ', len(test_iterations)
        # print 'test accuracy len: ', len(test_accuracy)

        # if len(test_iterations) != len(test_accuracy):  # awaiting test...
        #     print 'mis-match'
        #     print len(test_iterations[0:-1])
        #     test_iterations = test_iterations[0:-1]

        f.close()
        # plt.plot(training_iterations, training_loss, '-', linewidth=2)
        # plt.plot(test_iterations, test_accuracy, '-', linewidth=2)
        # plt.show()

        # host = host_subplot(111)  # , axes_class=AA.Axes)
        # plt.subplots_adjust(right=0.75)

        # par1 = host.twinx()

        # host.set_xlabel("iterations")
        # host.set_ylabel("log loss")
        # par1.set_ylabel("validation accuracy")

        host.clear()
        host.clear()
        host.set_xlabel("Iterations")
        host.set_ylabel("Loss")
        # p1, = host.plot(training_iterations, training_loss, label="training loss")
        if len(training_iterations) == len(training_loss):
            p1, = host.plot(training_iterations, training_loss, label="training loss")
        if len(test_iterations) == len(test_loss):
            p3, = host.plot(test_iterations, test_loss, label="validation loss")
        # p2, = par1.plot(test_iterations, test_accuracy, label="validation accuracy")

        host.legend(loc=2)

        # host.axis["left"].label.set_color(p1.get_color())
        # par1.axis["right"].label.set_color(p2.get_color())
        # fig = plt.figure()
        # fig.patch.set_facecolor('white')

        # axes = plt.gca()
        # ymin, ymax = min(training_loss), max(training_loss)
        # axes.set_xlim([xmin, xmax])
        # axes.set_ylim([0, ymax])
        # plt.yticks([0, 0.2, 0.4, 0.6, 0.8, 1.0, 1.2, 1.4, 1.6])
        plt.grid()
        plt.draw()
        plt.show()
        plt.pause(5)





+ 124  - 0   SkipDenseNet3D/prepare_hdf5_cutedge.py

@@ -0,0 +1,124 @@
from medpy.io import load
import numpy as np
import os
import h5py

# Path to your dataset (img, hdr files)
data_path = '/media/toanhoi/Study/databaseSeg/ISeg/iSeg-2017-Training'
# Saved path
target_path = '/media/toanhoi/Study/databaseSeg/ISeg/hdf5_iseg_data'
# Reference https://github.com/zhengyang-wang/Unet_3D/tree/master/preprocessing
def cut_edge(data, keep_margin):
    '''
    function that cuts zero edge
    '''
    D, H, W = data.shape
    D_s, D_e = 0, D - 1
    H_s, H_e = 0, H - 1
    W_s, W_e = 0, W - 1

    while D_s < D:
        if data[D_s].sum() != 0:
            break
        D_s += 1
    while D_e > D_s:
        if data[D_e].sum() != 0:
            break
        D_e -= 1
    while H_s < H:
        if data[:, H_s].sum() != 0:
            break
        H_s += 1
    while H_e > H_s:
        if data[:, H_e].sum() != 0:
            break
        H_e -= 1
    while W_s < W:
        if data[:, :, W_s].sum() != 0:
            break
        W_s += 1
    while W_e > W_s:
        if data[:, :, W_e].sum() != 0:
            break
        W_e -= 1

    if keep_margin != 0:
        D_s = max(0, D_s - keep_margin)
        D_e = min(D - 1, D_e + keep_margin)
        H_s = max(0, H_s - keep_margin)
        H_e = min(H - 1, H_e + keep_margin)
        W_s = max(0, W_s - keep_margin)
        W_e = min(W - 1, W_e + keep_margin)

    return int(D_s), int(D_e), int(H_s), int(H_e), int(W_s), int(W_e)

def convert_label(label_img):
    label_processed = np.zeros(label_img.shape[0:]).astype(np.uint8)
    for i in range(label_img.shape[2]):
        label_slice = label_img[:, :, i]
        label_slice[label_slice == 10] = 1
        label_slice[label_slice == 150] = 2
        label_slice[label_slice == 250] = 3
        label_processed[:, :, i] = label_slice
    return label_processed

def build_h5_dataset(data_path, target_path):
    '''
    Build HDF5 Image Dataset.
    '''
    for i in range(10):
        # Skip subject 9 for validation
        if (i == 8):
            continue
        subject_name = 'subject-%d-' % (i + 1)
        f_T1 = os.path.join(data_path, subject_name + 'T1.hdr')
        img_T1, header_T1 = load(f_T1)
        f_T2 = os.path.join(data_path, subject_name + 'T2.hdr')
        img_T2, header_T2 = load(f_T2)
        f_l = os.path.join(data_path, subject_name + 'label.hdr')
        labels, header_label = load(f_l)

        inputs_T1 = img_T1.astype(np.float32)
        inputs_T2 = img_T2.astype(np.float32)
        labels = labels.astype(np.uint8)
        labels = convert_label(labels)
        mask = labels > 0
        # Normalization
        inputs_T1_norm = (inputs_T1 - inputs_T1[mask].mean()) / inputs_T1[mask].std()
        inputs_T2_norm = (inputs_T2 - inputs_T2[mask].mean()) / inputs_T2[mask].std()

        # Cut edge
        margin = 64 / 2  # training_patch_size / 2
        mask = mask.astype(np.uint8)
        min_D_s, max_D_e, min_H_s, max_H_e, min_W_s, max_W_e = cut_edge(mask, margin)
        inputs_tmp_T1 = inputs_T1_norm[min_D_s:max_D_e + 1, min_H_s: max_H_e + 1, min_W_s:max_W_e + 1]
        inputs_tmp_T2 = inputs_T2_norm[min_D_s:max_D_e + 1, min_H_s: max_H_e + 1, min_W_s:max_W_e + 1]

        labels_tmp = labels[min_D_s:max_D_e + 1, min_H_s: max_H_e + 1, min_W_s:max_W_e + 1]

        inputs_tmp_T1 = inputs_tmp_T1[:, :, :, None]
        inputs_tmp_T2 = inputs_tmp_T2[:, :, :, None]
        labels_tmp = labels_tmp[:, :, :, None]

        inputs = np.concatenate((inputs_tmp_T1, inputs_tmp_T2), axis=3)

        print (inputs.shape, labels_tmp.shape)

        inputs_caffe = inputs[None, :, :, :, :]
        labels_caffe = labels_tmp[None, :, :, :, :]
        inputs_caffe = inputs_caffe.transpose(0, 4, 3, 1, 2)
        labels_caffe = labels_caffe.transpose(0, 4, 3, 1, 2)

        with h5py.File(os.path.join(target_path, 'train_iseg_norm_cutedge_weight_%s.h5' % (i + 1)), 'w') as f:
            f['data'] = inputs_caffe  # for caffe: num x channel x d x h x w
            f['label'] = labels_caffe

        with open('./train_list.txt', 'a') as f:
            f.write(os.path.join(target_path, 'train_iseg_norm_cutedge_weight_%s.h5\n' % (i + 1)))

if __name__ == '__main__':
    if not os.path.exists(target_path):
        os.makedirs(target_path)
    if os.path.exists("./train_list.txt"):
        os.remove("./train_list.txt")
    build_h5_dataset(data_path, target_path)

+ 1  - 0   SkipDenseNet3D/result_3d_dense_seg.csv

@@ -0,0 +1 @@
0.9474704871500299,0.9161719246456212,0.9131491128045156,0.92559717486672222

+ 6  - 0   SkipDenseNet3D/run_train.sh

@@ -0,0 +1,6 @@
#!/usr/bin/env bash
mkdir snapshot
mkdir log
rm ./log/3D_DenseSeg.log
/home/toanhoi/caffe/build/tools/caffe train --solver=solver.prototxt -gpu 0 2>&1 | tee -a ./log/3D_DenseSeg.log


+ 151  - 0   SkipDenseNet3D/seg_deploy.py

@@ -0,0 +1,151 @@
import csv
caffe_root = '/home/toanhoi/caffe/build/tools/caffe/'
import sys
import os
sys.path.insert(0, caffe_root + 'python')
os.environ["GLOG_minloglevel"] = "1"
import caffe
import argparse
import numpy as np
from medpy.io import load, save

# 0: Background (everything outside the brain)
# 10: Cerebrospinal fluid (CSF)
# 150: Gray matter (GM)
# 250: White matter (WM)
def convert_label_submit(label_img):
    label_processed = np.zeros(label_img.shape[0:]).astype(np.uint8)
    for i in range(label_img.shape[2]):
        label_slice = label_img[:, :, i]
        label_slice[label_slice == 1] = 10
        label_slice[label_slice == 2] = 150
        label_slice[label_slice == 3] = 250
        label_processed[:, :, i] = label_slice
    return label_processed

def convert_label(label_img):
    label_processed = np.zeros(label_img.shape[0:]).astype(np.uint8)
    for i in range(label_img.shape[2]):
        label_slice = label_img[:, :, i]
        label_slice[label_slice == 10] = 1
        label_slice[label_slice == 150] = 2
        label_slice[label_slice == 250] = 3
        label_processed[:, :, i] = label_slice
    return label_processed

# Reference https://github.com/ginobilinie/infantSeg
def dice(im1, im2, tid):
    im1 = im1 == tid
    im2 = im2 == tid
    im1 = np.asarray(im1).astype(np.bool)
    im2 = np.asarray(im2).astype(np.bool)
    if im1.shape != im2.shape:
        raise ValueError("Shape mismatch: im1 and im2 must have the same shape.")
    # Compute Dice coefficient
    intersection = np.logical_and(im1, im2)
    dsc = 2. * intersection.sum() / (im1.sum() + im2.sum())
    return dsc

if __name__ == "__main__":
    parser = argparse.ArgumentParser(description='makes a plot from Caffe output')
    parser.add_argument("-start")
    parser.add_argument("-end")

    if (os.environ.get('CAFFE_CPU_MODE')):
        caffe.set_mode_cpu()
    else:
        caffe.set_mode_gpu()

    data_path = '/media/toanhoi/Study/databaseSeg/ISeg/iSeg-2017-Training'

    subject_id = 9
    subject_name = 'subject-%d-' % subject_id

    f_T1 = os.path.join(data_path, subject_name + 'T1.hdr')
    inputs_T1, header_T1 = load(f_T1)
    inputs_T1 = inputs_T1.astype(np.float32)

    f_T2 = os.path.join(data_path, subject_name + 'T2.hdr')
    inputs_T2, header_T2 = load(f_T2)
    inputs_T2 = inputs_T2.astype(np.float32)

    f_l = os.path.join(data_path, subject_name + 'label.hdr')
    labels, header_label = load(f_l)
    labels = labels.astype(np.uint8)
    labels = convert_label(labels)

    mask = inputs_T1 > 0
    mask = mask.astype(np.bool)
    # ======================normalize to 0 mean and 1 variance====
    # Normalization
    inputs_T1_norm = (inputs_T1 - inputs_T1[mask].mean()) / inputs_T1[mask].std()
    inputs_T2_norm = (inputs_T2 - inputs_T2[mask].mean()) / inputs_T2[mask].std()

    inputs_T1_norm = inputs_T1_norm[:, :, :, None]
    inputs_T2_norm = inputs_T2_norm[:, :, :, None]

    inputs = np.concatenate((inputs_T1_norm, inputs_T2_norm), axis=3)
    inputs = inputs[None, :, :, :, :]
    inputs = inputs.transpose(0, 4, 3, 1, 2)
    num_class = 4
    num_paches = 0

    model_def = './deploy_3d_denseseg.prototxt'
    model_weights = "./snapshot/3d_denseseg_iseg_iter_200000.caffemodel"
    net = caffe.Net(model_def, model_weights, caffe.TEST)
    patch_input = [64, 64, 64]
    xstep = 16
    ystep = 8  # 16
    zstep = 16  # 16
    deep_slices = np.arange(patch_input[0] // 2, inputs.shape[2] - patch_input[0] // 2 + xstep, xstep)
    height_slices = np.arange(patch_input[1] // 2, inputs.shape[3] - patch_input[1] // 2 + ystep, ystep)
    width_slices = np.arange(patch_input[2] // 2, inputs.shape[4] - patch_input[2] // 2 + zstep, zstep)
    output = np.zeros((num_class,) + inputs.shape[2:])
    count_used = np.zeros((inputs.shape[2], inputs.shape[3], inputs.shape[4])) + 1e-5

    total_patch = len(deep_slices) * len(height_slices) * len(width_slices)
    for i in range(len(deep_slices)):
        for j in range(len(height_slices)):
            for k in range(len(width_slices)):
                num_paches = num_paches + 1
                deep = deep_slices[i]
                height = height_slices[j]
                width = width_slices[k]
                raw_patches = inputs[:, :, deep - patch_input[0] // 2:deep + patch_input[0] // 2,
                                     height - patch_input[1] // 2:height + patch_input[1] // 2,
                                     width - patch_input[2] // 2:width + patch_input[2] // 2]
                print "Processed: ", num_paches, "/", total_patch
                net.blobs['data'].data[...] = raw_patches
                net.forward()

                # Major voting https://github.com/ginobilinie/infantSeg
                temp_predic = net.blobs['softmax'].data[0].argmax(axis=0)
                for labelInd in range(4):  # note, start from 0
                    currLabelMat = np.where(temp_predic == labelInd, 1, 0)  # true, vote for 1, otherwise 0
                    output[labelInd, deep - patch_input[0] // 2:deep + patch_input[0] // 2,
                           height - patch_input[1] // 2:height + patch_input[1] // 2,
                           width - patch_input[2] // 2:width + patch_input[2] // 2] += currLabelMat
                # Average
                # output[slice(None), deep - patch_input[0] // 2:deep + patch_input[0] // 2,
                #        height - patch_input[1] // 2:height + patch_input[1] // 2,
                #        width - patch_input[2] // 2:width + patch_input[2] // 2] += net.blobs['softmax'].data[0]

                count_used[deep - patch_input[0] // 2:deep + patch_input[0] // 2,
                           height - patch_input[1] // 2:height + patch_input[1] // 2,
                           width - patch_input[2] // 2:width + patch_input[2] // 2] += 1

    output = output / count_used
    y = np.argmax(output, axis=0)
    out_label = y.transpose(1, 2, 0)
    dsc_0 = dice(out_label, labels, 0)
    dsc_1 = dice(out_label, labels, 1)
    dsc_2 = dice(out_label, labels, 2)
    dsc_3 = dice(out_label, labels, 3)
    dsc = np.mean([dsc_1, dsc_2, dsc_3])  # ignore Background
    print dsc_1, dsc_2, dsc_3, dsc
    with open('result_3d_dense_seg.csv', 'a+') as csvfile:
        datacsv = csv.writer(csvfile, delimiter=',', quoting=csv.QUOTE_MINIMAL)
        datacsv.writerow([dsc_1, dsc_2, dsc_3, dsc])

    out_label = out_label.astype(np.uint8)
    out_label = convert_label_submit(out_label)
    save(out_label, '{}/{}'.format("./", "3d_dense_seg_result.hdr"), header_T1)

BIN   SkipDenseNet3D/snapshot/3d_denseseg_iseg_iter_168000.caffemodel


BIN   SkipDenseNet3D/snapshot/3d_denseseg_iseg_iter_168000.solverstate


BIN   SkipDenseNet3D/snapshot/3d_denseseg_iseg_iter_200000.caffemodel


BIN   SkipDenseNet3D/snapshot/3d_denseseg_iseg_iter_200000.solverstate


+ 16  - 0   SkipDenseNet3D/solver.prototxt

@@ -0,0 +1,16 @@
train_net: "train_3d_denseseg.prototxt"
base_lr: 0.0002
display: 20
max_iter: 200000
lr_policy: "step"
gamma: 0.1
momentum: 0.97
weight_decay: 0.0005
stepsize: 50000
snapshot: 2000
snapshot_prefix: "./snapshot/3d_denseseg_iseg"
solver_mode: GPU
random_seed: 831486
average_loss: 20
iter_size: 1
type: "Adam"

+ 3499  - 0   SkipDenseNet3D/test_3d_denseseg.prototxt (file diff suppressed because it is too large)


+ 3499  - 0   SkipDenseNet3D/train_3d_denseseg.prototxt (file diff suppressed because it is too large)


+ 9  - 0   SkipDenseNet3D/train_list.txt

@@ -0,0 +1,9 @@
/media/toanhoi/Study/databaseSeg/ISeg/hdf5_iseg_data/train_iseg_norm_cutedge_weight_1.h5
/media/toanhoi/Study/databaseSeg/ISeg/hdf5_iseg_data/train_iseg_norm_cutedge_weight_2.h5
/media/toanhoi/Study/databaseSeg/ISeg/hdf5_iseg_data/train_iseg_norm_cutedge_weight_3.h5
/media/toanhoi/Study/databaseSeg/ISeg/hdf5_iseg_data/train_iseg_norm_cutedge_weight_4.h5
/media/toanhoi/Study/databaseSeg/ISeg/hdf5_iseg_data/train_iseg_norm_cutedge_weight_5.h5
/media/toanhoi/Study/databaseSeg/ISeg/hdf5_iseg_data/train_iseg_norm_cutedge_weight_6.h5
/media/toanhoi/Study/databaseSeg/ISeg/hdf5_iseg_data/train_iseg_norm_cutedge_weight_7.h5
/media/toanhoi/Study/databaseSeg/ISeg/hdf5_iseg_data/train_iseg_norm_cutedge_weight_8.h5
/media/toanhoi/Study/databaseSeg/ISeg/hdf5_iseg_data/train_iseg_norm_cutedge_weight_10.h5
