As a state-of-the-art detector, YOLOv4 is faster (FPS) and more accurate (MS COCO AP50...95 and AP50) than all available alternative detectors.
The paper verified a large number of features and selected those that improve classification and detection accuracy.
These features can serve as best practices for future research and development.
Paper:
Bochkovskiy A, Wang C Y, Liao H Y M. YOLOv4: Optimal Speed and Accuracy of Object Detection[J]. arXiv preprint arXiv:2004.10934, 2020.
The CSPDarknet53 backbone, SPP additional module, PANet path-aggregation neck, and the (anchor-based) YOLOv4 head were selected as the YOLOv4 architecture.
YOLOv4 needs the CSPDarknet53 backbone to extract image features for detection. You can obtain a model pretrained on ImageNet2012 from here.
Dataset used: COCO2017
Supported datasets: [COCO2017] or any dataset in the same format as MS COCO
Supported annotations: [COCO2017] or annotations in the same format as MS COCO
The directory structure is as follows; directory and file names are defined by the user:
├── dataset
    └── YOLOv4
        ├── annotations
        │   ├─ train.json
        │   └─ val.json
        ├─ train
        │   ├─ picture1.jpg
        │   ├─ ...
        │   └─ picturen.jpg
        └─ val
            ├─ picture1.jpg
            ├─ ...
            └─ picturen.jpg
Running locally
# The training_shape parameter defines the network input image shape; the default is
[416, 416],
[448, 448],
[480, 480],
[512, 512],
[544, 544],
[576, 576],
[608, 608],
[640, 640],
[672, 672],
[704, 704],
[736, 736].
# This means 11 shapes are used as input shapes; alternatively, a single fixed shape can be set.
# Example of single-scale training with the python command (1 card)
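The 11 shapes above step from 416 to 736 in increments of 32 (the network stride). A quick sketch of how multi-scale training picks one of them at random (the helper name is illustrative):

```python
import random

# The 11 square input shapes listed above: 416, 448, ..., 736.
TRAINING_SHAPES = [[s, s] for s in range(416, 737, 32)]

def sample_training_shape(rng=None):
    """Pick one input resolution at random; multi-scale training resamples
    this periodically so the network sees varying image sizes."""
    return (rng or random).choice(TRAINING_SHAPES)
```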
python train.py \
--data_dir=./dataset/xxx \
--pretrained_backbone=cspdarknet53_backbone.ckpt \
--is_distributed=0 \
--lr=0.1 \
--t_max=320 \
--max_epoch=320 \
--warmup_epochs=4 \
--training_shape=416 \
--lr_scheduler=cosine_annealing > log.txt 2>&1 &
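The flags above suggest a warmup-plus-cosine-annealing learning-rate schedule. A rough sketch of what `--lr=0.1`, `--t_max=320`, `--warmup_epochs=4`, and `--lr_scheduler=cosine_annealing` imply (the exact formula in train.py may differ):

```python
import math

def lr_at_epoch(epoch, base_lr=0.1, t_max=320, warmup_epochs=4):
    """Linear warmup for the first epochs, then cosine annealing to zero."""
    if epoch < warmup_epochs:
        # Ramp linearly from base_lr / warmup_epochs up to base_lr.
        return base_lr * (epoch + 1) / warmup_epochs
    progress = (epoch - warmup_epochs) / (t_max - warmup_epochs)
    return 0.5 * base_lr * (1 + math.cos(math.pi * progress))
```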
# Example of single-scale standalone training with the shell script (1 card)
bash run_standalone_train.sh dataset/xxx cspdarknet53_backbone.ckpt
# Example of multi-scale distributed training on Ascend devices with the shell script (8 cards)
bash run_distribute_train.sh dataset/xxx cspdarknet53_backbone.ckpt rank_table_8p.json
# Evaluate with the python command
python eval.py \
--data_dir=./dataset/xxx \
--pretrained=yolov4.ckpt \
--testing_shape=608 > log.txt 2>&1 &
# Evaluate with the shell script
bash run_eval.sh dataset/xxx checkpoint/xxx.ckpt
Training on ModelArts
python eval_xml.py --xml_dir ../data_1220/test \
--jpg_src_path ../data_1220/test \
--predict_result ./predict_result \
--pretrained ./outputs/2022-12-20_time_14_35_45_mosaic_nopre_0.527/ckpt_0/best_map.ckpt
The inference results are saved in the current path where the script is executed; the accuracy figures can be found in acc.log.
=============coco eval result=========
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.646
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.919
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.788
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.549
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.679
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.636
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.304
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.640
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.698
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.624
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.724
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.676
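The table above shows the standard COCO metrics: the `0.50:0.95` rows average over ten IoU thresholds in steps of 0.05. A minimal sketch of the box IoU underlying these thresholds (helper names are illustrative, not from the repo):

```python
def box_iou(a, b):
    """IoU of two boxes in [x1, y1, x2, y2] format."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

# The AP/AR columns marked 0.50:0.95 average over these ten thresholds.
IOU_THRESHOLDS = [0.50 + 0.05 * i for i in range(10)]
```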