SiamFC proposes a new fully convolutional Siamese network as the basic tracking algorithm, trained end-to-end on the ILSVRC15 object-tracking video dataset. Our tracker runs at frame rates beyond the real-time requirement and, despite its simplicity, achieves state-of-the-art performance on multiple benchmarks.
Paper: Luca Bertinetto, Jack Valmadre, João F. Henriques, Andrea Vedaldi, Philip H. S. Torr
Department of Engineering Science, University of Oxford
SiamFC first uses a fully convolutional AlexNet for feature extraction, both online and offline, and uses the Siamese network to train on the template and background respectively. Online, after obtaining the box for the first frame, it performs a center crop, then loads the checkpoint to track subsequent frames. To find the box, it applies a series of penalties to the score map; the final prediction point is then obtained by two passes of trilinear interpolation.
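The score-map post-processing described above can be sketched as follows. This is a minimal illustration, not the repository's code: the cosine-window blend stands in for the penalty step, nearest-neighbor upsampling stands in for the interpolation, and `window_influence` and `up_size` are assumed values rather than the actual configuration in src/config.py.

```python
import numpy as np

def penalize_and_upsample(score, window_influence=0.176, up_size=272):
    """Blend the raw score map with a cosine (Hanning) window to
    penalize large displacements, then upsample it so the peak can be
    located with finer precision. Parameter values are illustrative."""
    h, w = score.shape
    hann = np.outer(np.hanning(h), np.hanning(w))
    hann /= hann.sum()
    penalized = (1 - window_influence) * score + window_influence * hann
    # Nearest-neighbor upsampling stands in for the interpolation
    # the tracker applies to the score map.
    ys = np.arange(up_size) * h // up_size
    xs = np.arange(up_size) * w // up_size
    up = penalized[np.ix_(ys, xs)]
    peak = np.unravel_index(np.argmax(up), up.shape)
    return up, peak
```

The peak of the upsampled, penalized map gives the predicted displacement of the target relative to the search-region center.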
Dataset used: ILSVRC2015-VID
After installing MindSpore from the official website, you can follow the steps below to train and evaluate:
Run the Python script to preprocess the dataset:
python src/create_dataset_ILSVRC.py -d data_dir -o output_dir
Run the Python script to create the LMDB:
python src/create_lmdb.py -d data_dir -o output_dir
For example:
data_dir = '/data/VID/ILSVRC_VID_CURATION_train'
output_dir = '/data/VID/ILSVRC_VID_CURATION_train.lmdb'
Remarks: The hashed pathname is used as the index. Therefore, you cannot move the dataset after creating the LMDB, because the corresponding image is looked up by that index.
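The indexing constraint above can be illustrated with a small sketch. The MD5 hashing shown here is an assumption about how create_lmdb.py derives keys, used only to show why moving the dataset breaks lookups:

```python
import hashlib

def lmdb_key(image_path: str) -> bytes:
    """Derive an LMDB key by hashing the image path (assumed scheme).
    Because the key encodes the path, records written under one
    location cannot be found again if the dataset is moved."""
    return hashlib.md5(image_path.encode("utf-8")).digest()

# Writing and reading must use the same path:
key_before = lmdb_key("/data/VID/ILSVRC_VID_CURATION_train/a/000000.jpg")
key_after = lmdb_key("/new_home/ILSVRC_VID_CURATION_train/a/000000.jpg")
assert key_before != key_after  # moved dataset -> lookups miss
```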
Run the script for training
bash run_standalone_train_ascend.sh [Device_ID] [Dataset_path]
Remarks: [Dataset_path] is the location of the preprocessed training set.
This example uses single-card training.
Run the script for evaluation
Run python eval.py. This requires the GOT-10k toolkit; the dataset is OTB2013 (50 sequences) or OTB2015 (100 sequences).
├── SiamFC
├── README.md // Notes on SiamFC
├── ascend310_infer // Inference implementation on Ascend 310
│ ├──inc // Header files
│ ├──src // main.cc and utils.cc files
│ ├──build.sh // Build script
│ ├──CMakeLists.txt // CMake build configuration
├── scripts
│ ├──ma-pre-start.sh // Create environment before modelarts training
│ ├──run_standalone_train_ascend.sh // Single card training in ascend
│ ├──run_distribution_ascend.sh // Multi card distributed training in ascend
│ ├──run_infer_310.sh //310infer scripts
├── src
│ ├──alexnet.py // AlexNet architecture
│ ├──config.py // Parameter configuration
│ ├──custom_transforms.py //Data set processing
│ ├──dataset.py //GeneratorDataset
│ ├──Groupconv.py // MindSpore does not yet support group convolution; this file provides an alternative
│ ├──lr_generator.py //Dynamic learning rate
│ ├──tracker.py // Tracking script
│ ├──utils.py // utils
│ ├──create_dataset_ILSVRC.py // Create dataset
│ ├──create_lmdb.py //Create LMDB
├── train.py // Training script
├── eval.py // Evaluation script
The main parameters are defined in train.py and config.py. Run training with:
python train.py --device_id=${DEVICE_ID} --data_path=${DATASET_PATH}
grep "loss is " log
epoch: 1 step: 1, loss is 1.14123213
...
epoch: 1 step: 1536, loss is 0.5234123
epoch: 1 step: 1537, loss is 0.4523326
epoch: 1 step: 1538, loss is 0.6235748
...
Model checkpoints are saved in the current directory.
After training, the loss value is as follows:
grep "loss is " log:
epoch: 30 step: 1, loss is 0.12534634
...
epoch: 30 step: 1560, loss is 0.2364573
epoch: 30 step: 1561, loss is 0.156347
epoch: 30 step: 1561, loss is 0.173423
Check the checkpoint path used for evaluation before running the following command.
python eval.py --device_id=${DEVICE_ID} --model_path=${MODEL_PATH}
The results are as follows:
SiamFC_159_50_6650.ckpt: prec_score: 0.777, succ_score: 0.589, succ_rate: 0.754
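The reported prec_score and succ_score follow the standard OTB protocol: precision at a 20-pixel center-error threshold, and the area under the success curve over IoU thresholds. A minimal sketch of how these metrics are computed, independent of the GOT-10k toolkit that produces them here:

```python
def precision_score(center_errors, threshold=20.0):
    """Fraction of frames whose predicted center lies within
    `threshold` pixels of the ground-truth center (OTB precision)."""
    return sum(e <= threshold for e in center_errors) / len(center_errors)

def success_score(ious, steps=21):
    """Area under the success curve: the mean, over evenly spaced IoU
    thresholds in [0, 1], of the fraction of frames whose overlap
    exceeds each threshold."""
    thresholds = [i / (steps - 1) for i in range(steps)]
    curve = [sum(iou > t for iou in ious) / len(ious) for t in thresholds]
    return sum(curve) / len(curve)
```

The per-frame center errors and IoUs come from comparing the tracker's predicted boxes against the OTB annotations.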
Check the checkpoint path used for export before running the following command.
Running the inference script requires two different MINDIR models.
python export.py --device_id=${DEVICE_ID} --model_path=${MODEL_PATH} --file_name_export1=${SAVE_MODEL_PATH1} --file_name_export2=${SAVE_MODEL_PATH2} --file_name=${FILE_FORMAT} --device_target=${DEVICE_TARGET}
bash run_infer_310.sh [MODEL_PATH1] [MODEL_PATH2] [DATASET_PATH] [CODE_PATH] [DEVICE_TARGET] [DEVICE_ID]
| Parameter | Ascend |
| --- | --- |
| Resources | Ascend 910; CPU 2.60 GHz, 192 cores; memory: 755 GB |
| Upload date | 2021.5.20 |
| MindSpore version | 1.2.0 |
| Training parameters | epoch=50, step=6650, batch_size=8, lr_init=1e-2, lr_end=1e-5 |
| Optimizer | SGD, momentum=0.0, weight_decay=0.0 |
| Loss function | BCEWithLogits |
| Training speed | epoch time: 285693.557 ms; per-step time: 42.961 ms |
| Total time | about 5 hours |
| Script URL | https://gitee.com/mindspore/models/tree/master/research/cv/SiamFC |
| Random seed | set_seed = 1234 |
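The BCEWithLogits loss listed above can be sketched as a numerically stable binary cross-entropy applied to the raw score-map logits. The optional `weights` argument models the positive/negative balancing typically applied over the score map and is an assumption about this implementation, not its exact code:

```python
import math

def bce_with_logits(logits, labels, weights=None):
    """Binary cross-entropy on raw logits (sketch). Uses the
    max(x, 0) - x*y + log(1 + exp(-|x|)) form, which avoids overflow
    for large |x|. With `weights` omitted, every position in the
    score map contributes equally."""
    n = len(logits)
    if weights is None:
        weights = [1.0 / n] * n
    total = 0.0
    for x, y, w in zip(logits, labels, weights):
        loss = max(x, 0.0) - x * y + math.log1p(math.exp(-abs(x)))
        total += w * loss
    return total
```

For a zero logit and positive label this reduces to log 2, and a strongly positive logit with a positive label drives the loss toward zero, matching the decreasing loss values shown in the training log above.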
| Parameter | Ascend |
| --- | --- |
| Model version | SiamFC |
| Upload date | 2021.11.1 |
| MindSpore version | 1.3.0 |
| Dataset | OTB2013 |
| Total time | about 5 minutes |
| Outputs | probability |
| Accuracy | prec_score: 0.779; succ_score: 0.588; succ_rate: 0.756 |