ReID Strong Baseline proposes a novel neck structure named the batch normalization neck (BNNeck).
BNNeck adds a batch normalization layer after the global pooling layer to separate the metric and classification
losses into two different feature spaces, because the authors observe that the two are inconsistent in a single embedding space.
Extensive experiments show that BNNeck boosts the baseline.
Paper: Luo H., Jiang W., et al. “A Strong Baseline and Batch Normalization Neck for Deep Person Re-identification”. CVPRW2019, Oral.
The model uses ResNet50 as the backbone. BNNeck adds a BN layer between the pooled features and the classifier FC layer.
The BN and FC layers are initialized with Kaiming initialization.
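The data flow described above can be sketched as follows. This is an illustrative NumPy sketch, not the repository's MindSpore implementation; `gamma`, `beta`, `mean`, and `var` stand in for the BN layer's learned parameters and running statistics, and `W` for the classifier FC weight:

```python
import numpy as np

def bnneck_forward(feat, gamma, beta, mean, var, W, eps=1e-5):
    """BNNeck sketch: features *before* BN feed the metric (triplet)
    loss, features *after* BN feed the classification FC layer."""
    ft = feat                                                # triplet-loss feature space
    fi = gamma * (feat - mean) / np.sqrt(var + eps) + beta   # BN-normalized feature space
    logits = fi @ W.T                                        # classifier FC (no bias)
    return ft, fi, logits

rng = np.random.default_rng(0)
feat = rng.normal(size=(8, 2048))        # pooled ResNet50 features, batch of 8
W = rng.normal(size=(751, 2048)) * 0.01  # 751 Market-1501 training identities
ft, fi, logits = bnneck_forward(feat, 1.0, 0.0, feat.mean(0), feat.var(0), W)
```

Because BN recenters the features, the classification loss operates on a normalized space while the metric loss keeps the raw embedding geometry.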
The Market1501 dataset is used to train and test the model.
Market1501 contains 32,668 images of 1,501 labeled persons captured by six camera views.
There are 751 identities in the training set and 750 identities in the testing set.
In the original study of this dataset, the authors also use mAP as an evaluation criterion for the tested algorithms.
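For reference, mAP averages the per-query average precision over all queries; a minimal pure-Python sketch of average precision for one query (illustrative only, not the repository's `metric_utils.py` implementation):

```python
def average_precision(ranked_relevance):
    """AP for one query: ranked_relevance holds 0/1 relevance flags
    for the gallery, ordered by descending similarity to the query.
    mAP is the mean of this value over all queries."""
    hits, precisions = 0, []
    for rank, rel in enumerate(ranked_relevance, start=1):
        if rel:
            hits += 1
            precisions.append(hits / rank)   # precision at each hit
    return sum(precisions) / max(hits, 1)

average_precision([1, 0, 1, 0])  # (1/1 + 2/3) / 2 ≈ 0.833
```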
Data structure:
Market-1501-v15.09.15
├── bounding_box_test [19733 entries]
├── bounding_box_train [12937 entries]
├── gt_bbox [25260 entries]
├── gt_query [6736 entries]
├── query [3369 entries]
└── readme.txt
The model uses a ResNet50 backbone pre-trained on ImageNet2012. Link
After dataset preparation, you can start training and evaluation as follows
(note that you must specify the dataset path in configs/market1501_config.yml):
# run training example
bash scripts/run_standalone_train_gpu.sh ./configs/market1501_config.yml 0 /path/to/dataset/ /path/to/output/ /path/to/pretrained_resnet50.pth
# run evaluation example
bash scripts/run_eval_gpu.sh ./configs/market1501_config.yml /path/checkpoint_file /path/to/dataset/
AGW
├── README.md
├── configs
│ ├── dukemtmc_config.yml
│ └── market1501_config.yml
├── eval.py
├── export.py
├── requirements.txt
├── scripts
│ ├── run_distribute_train_ascend.sh
│ ├── run_eval_ascend.sh
│ ├── run_eval_gpu.sh
│ ├── run_standalone_train_ascend.sh
│ └── run_standalone_train_gpu.sh
├── src
│ ├── __init__.py
│ ├── callbacks.py
│ ├── center_loss.py
│ ├── dataset.py
│ ├── datasets
│ │ ├── __init__.py
│ │ ├── bases.py
│ │ ├── dukemtmcreid.py
│ │ └── market1501.py
│ ├── loss.py
│ ├── lr_schedule.py
│ ├── metric_utils.py
│ ├── model
│ │ ├── __init__.py
│ │ ├── agw.py
│ │ ├── cell_wrapper.py
│ │ ├── resnet_nl.py
│ │ └── vit.py
│ ├── model_utils
│ │ ├── __init__.py
│ │ ├── config.py
│ │ ├── device_adapter.py
│ │ ├── local_adapter.py
│ │ └── moxing_adapter.py
│ ├── sampler.py
│ └── triplet_loss.py
├── train.py
└── train_ascend_qizhi.py
usage: train.py --config_path CONFIG_PATH [--device_target DEVICE]
CONFIG_PATH is configs/market1501_config.yml or configs/dukemtmc_config.yml.
Run run_distribute_train_ascend.sh for distributed training of the ReID Strong Baseline model.
The RANK_TABLE_FILE is placed under scripts/.
bash scripts/run_distribute_train_ascend.sh CONFIG_PATH DATA_DIR OUTPUT_PATH PRETRAINED_RESNET50 RANK_TABLE_FILE RANK_SIZE
Run run_standalone_train_gpu.sh for non-distributed training of the model.
bash scripts/run_standalone_train_gpu.sh CONFIG_PATH DEVICE_ID DATA_DIR OUTPUT_PATH PRETRAINED_RESNET50
The config is market1501_config.yml. Run bash scripts/run_eval_ascend.sh for evaluation of the ReID Strong Baseline model.
bash scripts/run_eval_ascend.sh CONFIG_PATH CKPT_PATH DATA_DIR
Run bash scripts/run_eval_gpu.sh for evaluation of the ReID Strong Baseline model.
bash scripts/run_eval_gpu.sh CONFIG_PATH CKPT_PATH DATA_DIR
python export.py --config_path [CONFIG_PATH] --ckpt_file [CKPT_PATH] --file_name [FILE_NAME] --file_format [FILE_FORMAT]
options:
--config_path path to .yml config file
--ckpt_file checkpoint file
--file_name output file name
--file_format output file format, choices in ['MINDIR']
The config_path and ckpt_file parameters are required; FILE_FORMAT must be "MINDIR".
Inference result will be shown in the terminal
| Parameters | Ascend |
| --- | --- |
| Resource | 1x Ascend 910 |
| Uploaded Date | 2023/04/23 |
| MindSpore Version | 1.8.1 |
| Dataset | Market-1501 |
| Training Parameters | max_epoch=120, ids_per_batch=16, ims_per_id=4, metric='euclidean' |
| Optimizer | Adam, SGD |
| Loss Function | Smooth Identity, WRT, Center |
| Speed | 444 ms/step |
| Loss | 0.23 |
| Checkpoint for inference | 400.58 MB (.ckpt file) |
| Scripts | scripts |
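The `ids_per_batch`/`ims_per_id` parameters above imply P×K identity sampling: each batch draws P identities and K images per identity (a batch of 16×4 = 64 images here). A hypothetical sketch of such a sampler (the repository's actual version lives in `src/sampler.py`; the function and variable names below are illustrative):

```python
import random

def pk_batch(labels, ids_per_batch=16, ims_per_id=4, seed=0):
    """Sketch of P*K sampling: draw P identities, then K images per
    identity, so every batch supports triplet mining within each ID."""
    rng = random.Random(seed)
    by_id = {}
    for idx, pid in enumerate(labels):       # group image indices by identity
        by_id.setdefault(pid, []).append(idx)
    pids = rng.sample(sorted(by_id), ids_per_batch)
    batch = []
    for pid in pids:
        # sample with replacement so IDs with few images still yield K samples
        batch += rng.choices(by_id[pid], k=ims_per_id)
    return batch

labels = [i % 20 for i in range(200)]        # toy dataset: 20 identities
batch = pk_batch(labels, ids_per_batch=4, ims_per_id=4)
```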
| Parameters | Ascend |
| --- | --- |
| Resource | 4x Ascend 910 |
| Uploaded Date | 2023/04/23 |
| MindSpore Version | 1.8.1 |
| Dataset | Market-1501 |
| Training Parameters | max_epoch=120, ids_per_batch=16, ims_per_id=4, metric='euclidean', distributed=1 |
| Optimizer | Adam, SGD |
| Loss Function | Smooth Identity, WRT, Center |
| Speed | 444 ms/step |
| Loss | 0.23 |
| Checkpoint for inference | 400.52 MB (.ckpt file) |
| Scripts | scripts |
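The "Smooth Identity" entry in the loss rows refers to label-smoothed cross entropy over the identity logits. A minimal sketch, assuming a smoothing factor `eps` that mixes the one-hot target with a uniform distribution (not the repository's `src/loss.py` code):

```python
import math

def smooth_ce(logits, target, num_classes, eps=0.1):
    """Label-smoothed cross entropy for one sample: the one-hot target
    is replaced by (1 - eps) on the true class plus eps/num_classes
    spread uniformly over all classes."""
    m = max(logits)                                          # stable log-softmax
    log_z = m + math.log(sum(math.exp(l - m) for l in logits))
    log_p = [l - log_z for l in logits]
    q = [(1 - eps if c == target else 0.0) + eps / num_classes
         for c in range(num_classes)]                        # smoothed target
    return -sum(qi * lpi for qi, lpi in zip(q, log_p))
```

With `eps=0` this reduces to the standard cross entropy; the smoothing term discourages over-confident identity predictions.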
| Parameters | Ascend |
| --- | --- |
| Resource | 1x Ascend 910 |
| Uploaded Date | 2023/04/24 |
| MindSpore Version | 1.8.1 |
| Dataset | Market-1501 |
| batch_size | 32 |
| Outputs | mAP, Rank-1, mINP |
| Accuracy | mAP: 0.8912, mINP: 0.6873, Rank-1: 0.9516 |
| Parameters | Ascend |
| --- | --- |
| Resource | 4x Ascend 910 |
| Uploaded Date | 2023/04/24 |
| MindSpore Version | 1.8.1 |
| Dataset | Market-1501 |
| batch_size | 32 |
| Outputs | mAP, Rank-1, mINP |
| Accuracy | mAP: 0.8673, mINP: 0.6336, Rank-1: 0.9448 |
There are several sources of randomness:
seeds have already been set in train.py and sampler.py to avoid randomness in dataset shuffling and weight initialization.
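Fixing seeds at startup is what makes those runs repeatable; a toy illustration with Python's stdlib RNG (MindSpore and NumPy generators would be seeded analogously in train.py):

```python
import random

def seed_and_shuffle(seed, items):
    """Seed the RNG, then shuffle a copy of the items: with the same
    seed, the shuffled order is identical across runs."""
    random.seed(seed)
    order = list(items)
    random.shuffle(order)
    return order

a = seed_and_shuffle(1, range(10))
b = seed_and_shuffle(1, range(10))  # same seed -> same order as a
```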
Please check the official homepage.