PolarFormer: Multi-camera 3D Object Detection with Polar Transformers
Yanqin Jiang, Li Zhang, Zhenwei Miao, Xiatian Zhu, Jin Gao, Weiming Hu, Yu-Gang Jiang
AAAI 2023
This repository is an official implementation of PolarFormer.
3D object detection in autonomous driving aims to reason about “what” and “where” the objects of interest are in a 3D world. Following the conventional wisdom of 2D object detection, existing 3D object detection methods often adopt the canonical Cartesian coordinate system with perpendicular axes. However, we conjecture that this does not fit the nature of the ego car’s perspective, as each onboard camera perceives the world in the shape of a wedge intrinsic to the imaging geometry, with radial (non-perpendicular) axes. Hence, in this paper we advocate the exploitation of the Polar coordinate system and propose a new Polar Transformer (PolarFormer) for more accurate 3D object detection in the bird’s-eye view (BEV), taking only multi-camera 2D images as input. Specifically, we design a cross-attention based Polar detection head without restriction on the shape of the input structure to deal with irregular Polar grids. To tackle the unconstrained object scale variation along the Polar distance dimension, we further introduce a multi-scale Polar representation learning strategy. As a result, our model can make the best use of the Polar representation, rasterized by attending to the corresponding image observations in a sequence-to-sequence fashion subject to the geometric constraints. Thorough experiments on the nuScenes dataset demonstrate that our PolarFormer significantly outperforms state-of-the-art 3D object detection alternatives, while also yielding competitive performance on the BEV semantic segmentation task.
This implementation is built upon detr3d. Please follow the steps in install.md to prepare the environment.
Please follow the official instructions of mmdetection3d to process the nuScenes dataset (https://mmdetection3d.readthedocs.io/en/v0.17.3/datasets/nuscenes_det.html).
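For reference, nuScenes conversion with mmdetection3d typically goes through its create_data.py converter; the command below is only a sketch with placeholder paths, and the linked documentation is authoritative.

```shell
# Sketch only: generate nuScenes info files with mmdetection3d's converter.
# Run from the mmdetection3d directory; adjust paths to your local setup.
python tools/create_data.py nuscenes \
    --root-path ./data/nuscenes \
    --out-dir ./data/nuscenes \
    --extra-tag nuscenes
```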
After preparation, you will be able to see the following directory structure:
```
PolarFormer
├── mmdetection3d
├── projects
│   ├── configs
│   ├── mmdet3d_plugin
├── tools
├── data
│   ├── nuscenes
├── ckpts
├── README.md
```
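One possible way to arrive at this layout (paths below are placeholders; adapt them to your setup) is to symlink the processed dataset into the repository and create a folder for downloaded checkpoints:

```shell
# Sketch: from the repository root, link the processed nuScenes data
# and create a folder for pretrained checkpoints. Paths are placeholders.
mkdir -p data ckpts
ln -s /path/to/processed/nuscenes data/nuscenes
```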
```shell
cd PolarFormer
```
You can train the model as follows:
```shell
tools/dist_train.sh projects/configs/polarformer/polarformer_r101.py 8 --work-dir work_dirs/polarformer_r101/
```
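If 8 GPUs are not available, a single-GPU run is a reasonable sketch, assuming the repository ships the standard mmdetection3d `tools/train.py` entry point (hyperparameters such as the learning rate may need rescaling):

```shell
# Sketch: single-GPU training via the standard mmdetection3d entry point
# (its presence in this repository is an assumption).
python tools/train.py projects/configs/polarformer/polarformer_r101.py \
    --work-dir work_dirs/polarformer_r101/
```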
You can evaluate the model as follows:
```shell
tools/dist_test.sh projects/configs/polarformer/polarformer_r101.py work_dirs/polarformer_r101/latest.pth 8 --eval bbox
```
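To write detection results to disk instead of evaluating locally (for example, for the nuScenes test server), the standard mmdetection3d test flags can be forwarded through `dist_test.sh`; the command below is a sketch under that assumption:

```shell
# Sketch: dump results to JSON rather than computing metrics locally.
# Assumes the standard mmdetection3d --format-only / --eval-options flags.
tools/dist_test.sh projects/configs/polarformer/polarformer_r101.py \
    work_dirs/polarformer_r101/latest.pth 8 \
    --format-only --eval-options 'jsonfile_prefix=work_dirs/polarformer_r101/results'
```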
model | mAP | NDS |
---|---|---|
PolarFormer, R101_DCN | 41.5 | 47.0 |
PolarFormer-T, R101_DCN | 45.7 | 54.3 |
PolarFormer, V2-99 | 45.5 | 50.3 |
PolarFormer-T, V2-99 | 49.3 | 57.2 |
model | mAP | NDS | config | download |
---|---|---|---|---|
PolarFormer, R101_DCN | 39.6 | 45.8 | config | ckpt |
PolarFormer-w/o_bev_aug, R101_DCN | 39.2 | 46.0 | config | ckpt / log |
PolarFormer-T, R101_DCN | 43.2 | 52.8 | - | - |
PolarFormer, V2-99 | 50.0 | 56.2 | config | ckpt |
Note: We adopt BEV data augmentation (random flipping, scaling, and rotation) as the default setting when developing PolarFormer on the nuScenes dataset. However, as the ablation in the 2nd row indicates, BEV augmentation contributes little to the overall performance of PolarFormer, so feel free to set `use_bev_aug = False` during training if you want to reduce the computational burden.
model | Drivable | Crossing | Walking | Carpark | Divider |
---|---|---|---|---|---|
PolarFormer, efficientnet-b0 | 81.0 | 48.9 | 55.8 | 52.6 | 42.2 |
PolarFormer-T, efficientnet-b0 | 82.6 | 54.3 | 59.4 | 56.7 | 46.2 |
PolarFormer-joint_det_seg, R101_DCN | 82.6 | 50.1 | 57.4 | 54.1 | 44.5 |
```bibtex
@inproceedings{jiang2022polar,
  title={PolarFormer: Multi-camera 3D Object Detection with Polar Transformers},
  author={Jiang, Yanqin and Zhang, Li and Miao, Zhenwei and Zhu, Xiatian and Gao, Jin and Hu, Weiming and Jiang, Yu-Gang},
  booktitle={AAAI},
  year={2023}
}
```
Many thanks to the following open-source projects: