MCNN is a Multi-column Convolutional Neural Network that can accurately estimate the crowd count in a single image taken from almost any perspective.
Paper: Yingying Zhang, Desen Zhou, Siqin Chen, Shenghua Gao, Yi Ma. Single-Image Crowd Counting via Multi-Column Convolutional Neural Network.
MCNN contains three parallel CNNs whose filters have local receptive fields of different sizes. For simplicity, the same network structure is used for all columns (i.e., conv–pooling–conv–pooling) except for the sizes and numbers of filters. Max pooling is applied over each 2×2 region, and the Rectified Linear Unit (ReLU) is adopted as the activation function because of its good performance in CNNs.
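As a shape-only illustration in plain Python (the kernel sizes 9, 7, and 5 below are placeholders, not the repository's actual configuration), the three columns reduce the input resolution identically because each applies two 2×2 max-pooling steps with 'same'-padded convolutions, which is what allows their output feature maps to be fused:

```python
def conv_out(size, kernel, pad):
    # 'same'-style convolution: stride 1, padding = kernel // 2
    return size + 2 * pad - kernel + 1

def column_out(h, w, kernel):
    # conv -> 2x2 max pool -> conv -> 2x2 max pool
    for _ in range(2):
        h = conv_out(h, kernel, kernel // 2) // 2
        w = conv_out(w, kernel, kernel // 2) // 2
    return h, w

# Three columns with different receptive fields (kernel sizes assumed)
shapes = [column_out(256, 256, k) for k in (9, 7, 5)]
print(shapes)  # every column yields the same spatial size: (64, 64)
```

Because all three columns produce feature maps of the same spatial size, they can be concatenated channel-wise and mapped to a single density map.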
Note that you can run the scripts on the dataset mentioned in the original paper or on a dataset widely used in this domain. The following sections describe how to run the scripts using the dataset below.
Dataset used: ShanghaitechA
```text
├─data
│  ├─formatted_trainval
│  │  └─shanghaitech_part_A_patches_9
│  │     ├─train
│  │     ├─train-den
│  │     ├─val
│  │     └─val-den
│  └─original
│     └─shanghaitech
│        └─part_A_final
│           ├─train_data
│           │  ├─images
│           │  └─ground_truth
│           └─test_data
│              ├─images
│              ├─ground_truth
│              └─ground_truth_csv
```
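Before training, it can help to verify that the dataset was extracted into the expected layout. The following plain-Python check is a convenience sketch (not part of the repository); `root` is a placeholder for your dataset root:

```python
import os

# Sub-directories expected under the dataset root (from the tree above)
EXPECTED = [
    "data/formatted_trainval/shanghaitech_part_A_patches_9/train",
    "data/formatted_trainval/shanghaitech_part_A_patches_9/train-den",
    "data/formatted_trainval/shanghaitech_part_A_patches_9/val",
    "data/formatted_trainval/shanghaitech_part_A_patches_9/val-den",
    "data/original/shanghaitech/part_A_final/train_data/images",
    "data/original/shanghaitech/part_A_final/train_data/ground_truth",
    "data/original/shanghaitech/part_A_final/test_data/images",
    "data/original/shanghaitech/part_A_final/test_data/ground_truth",
    "data/original/shanghaitech/part_A_final/test_data/ground_truth_csv",
]

def check_layout(root):
    """Return the expected sub-directories that are missing under `root`."""
    return [d for d in EXPECTED if not os.path.isdir(os.path.join(root, d))]
```

An empty return value means the layout matches the tree above.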
After installing MindSpore via the official website, you can start training and evaluation as follows:
```shell
# enter script dir, train MCNN
sh run_standalone_train_ascend.sh [DATA_PATH] [CKPT_SAVE_PATH]
# enter script dir, evaluate MCNN
sh run_standalone_eval_ascend.sh [DATA_PATH] [CKPT_NAME]
```
```text
├── cv
    ├── MCNN
        ├── README.md                    // descriptions about MCNN
        ├── scripts
        │   ├── run_distribute_train.sh  // distributed training script
        │   ├── run_eval.sh              // evaluation script for Ascend
        │   ├── run_standalone_train.sh  // standalone training script
        ├── src
        │   ├── dataset.py               // dataset creation
        │   ├── mcnn.py                  // MCNN architecture
        │   ├── config.py                // parameter configuration
        │   ├── data_loader.py           // dataset loader (grayscale)
        │   ├── data_loader_3channel.py  // dataset loader (RGB)
        │   ├── evaluate_model.py        // model evaluation
        │   ├── generator_lr.py          // learning-rate generator
        │   ├── Mcnn_Callback.py         // MCNN callback
        ├── train.py                     // training script
        ├── eval.py                      // evaluation script
        ├── export.py                    // export script
```
Major parameters in train.py and config.py are as follows:
--data_path: The absolute full path to the train and evaluation datasets.
--epoch_size: Total number of training epochs.
--batch_size: Training batch size.
--device_target: Device on which the code runs. Optional values are "Ascend" and "GPU".
--ckpt_path: The absolute full path to the checkpoint file saved after training.
--train_path: Path to the training images.
--train_gt_path: Path to the training labels (density maps).
--val_path: Path to the test images.
--val_gt_path: Path to the test labels (density maps).
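These flags map onto a standard argparse setup. The sketch below is an assumption about how train.py could declare them, not the repository's exact code; the defaults of 800 epochs and batch size 1 are taken from the training log and performance table in this README:

```python
import argparse

def build_parser():
    """Hypothetical CLI declaration mirroring the documented flags."""
    parser = argparse.ArgumentParser(description="MCNN crowd-counting training")
    parser.add_argument("--data_path", type=str, help="absolute path to the datasets")
    parser.add_argument("--epoch_size", type=int, default=800, help="total training epochs")
    parser.add_argument("--batch_size", type=int, default=1, help="training batch size")
    parser.add_argument("--device_target", type=str, default="Ascend",
                        choices=["Ascend", "GPU"], help="device to run on")
    parser.add_argument("--ckpt_path", type=str, help="path to save checkpoints")
    parser.add_argument("--train_path", type=str, help="training images")
    parser.add_argument("--train_gt_path", type=str, help="training density maps")
    parser.add_argument("--val_path", type=str, help="test images")
    parser.add_argument("--val_gt_path", type=str, help="test density maps")
    return parser

args = build_parser().parse_args(["--device_target", "GPU"])
```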
running on Ascend

```shell
# python train.py
# or enter script dir, and run the distribute script
sh run_distribute_train.sh
# or enter script dir, and run the standalone script
sh run_standalone_train.sh
```
After training, the loss values will be reported as follows:

```shell
# grep "loss is " log
epoch: 1 step: 305, loss is 0.00041025918
epoch: 2 step: 305, loss is 3.7117527e-05
...
epoch: 798 step: 305, loss is 0.000332611
epoch: 799 step: 305, loss is 2.6959011e-05
epoch: 800 step: 305, loss is 5.6599742e-06
...
```
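The grep shown above can also be mirrored in Python when post-processing training logs; this convenience sketch assumes only the exact line format shown in the log excerpt:

```python
import re

# Matches lines like: "epoch: 1 step: 305, loss is 0.00041025918"
LOSS_RE = re.compile(r"epoch: (\d+) step: (\d+), loss is ([\d.e-]+)")

def parse_losses(log_text):
    """Return a {epoch: loss} mapping parsed from training-log text."""
    return {int(m.group(1)): float(m.group(3))
            for m in LOSS_RE.finditer(log_text)}

log = ("epoch: 1 step: 305, loss is 0.00041025918\n"
       "epoch: 2 step: 305, loss is 3.7117527e-05\n")
print(parse_losses(log))
```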
The model checkpoint will be saved in the current directory.
Before running the command below, please check the checkpoint path used for evaluation.
running on Ascend

```shell
# python eval.py
# or enter script dir, and run the script
sh run_eval.sh
```
You can view the results in the file "eval_log". The accuracy on the test dataset will be as follows:

```shell
# grep "MAE: " eval_log
MAE: 105.87984801910736 MSE: 161.6687899899305
```
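For reference, MAE and MSE in crowd counting are computed over per-image predicted vs. ground-truth counts, and in this literature (including the MCNN paper) "MSE" denotes the root of the mean squared error. A minimal sketch (the example counts are made up):

```python
import math

def mae_mse(pred_counts, gt_counts):
    """MAE and (root) MSE over per-image crowd counts."""
    diffs = [p - g for p, g in zip(pred_counts, gt_counts)]
    mae = sum(abs(d) for d in diffs) / len(diffs)
    mse = math.sqrt(sum(d * d for d in diffs) / len(diffs))
    return mae, mse

print(mae_mse([100.0, 250.0], [120.0, 240.0]))  # (15.0, ~15.81)
```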
| Parameters | Ascend |
| --- | --- |
| Resource | Ascend 910; CPU 2.60 GHz, 192 cores; Memory 755 GB |
| Uploaded Date | 03/29/2021 (month/day/year) |
| MindSpore Version | 1.1.0 |
| Dataset | ShanghaitechA |
| Training Parameters | steps=2439, batch_size=1 |
| Optimizer | Momentum |
| Outputs | density map |
| Speed | 5.79 ms/step |
| Total Time | 23 mins |
| Checkpoint for Fine-tuning | 500.94 KB (.ckpt file) |
| Scripts | https://gitee.com/mindspore/mindspore/tree/master/model_zoo/official/cv/MCNN |
In dataset.py, we set the seed inside the create_dataset function.
Please check the official homepage.