This is the project webpage of our CVPR 2020 work. RPM-Net is a deep-learning approach for rigid partial-to-partial point cloud registration of objects in an iterative fashion. Our paper can be found on arXiv (with supplementary material).
@inproceedings{yew2020-RPMNet,
title={RPM-Net: Robust Point Matching using Learned Features},
author={Yew, Zi Jian and Lee, Gim Hee},
booktitle={Conference on Computer Vision and Pattern Recognition (CVPR)},
year={2020}
}
See the following link for a video demonstration of the results:
See requirements.txt for the required packages. Our source code was developed with Python 3.6 and PyTorch 1.2, but we have not observed problems running on the newer versions available as of the time of writing.
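A quick sanity check of the environment can be run before installing the packages. This sketch only verifies the interpreter and (if installed) PyTorch versions against those the authors developed with; it is a convenience, not part of the repository.

```python
# Sanity-check the environment against the versions used by the authors
# (Python 3.6, PyTorch 1.2). PyTorch may not be installed yet, so its
# import is guarded.
import sys

assert sys.version_info >= (3, 6), "RPM-Net was developed with Python 3.6"

try:
    import torch
    print("PyTorch", torch.__version__)
except ImportError:
    print("PyTorch not installed; see requirements.txt")
```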
Run the relevant commands below. We use the processed ModelNet40 dataset from PointNet for this work, which will be downloaded automatically if necessary. Performance typically saturates by around 500-1000 epochs, depending on the setting.
mkdir rpmnet && cd rpmnet
git clone git@github.com:yewzijian/RPMNet.git
cd RPMNet/src
python train.py --noise_type crop
For clean and noisy data, we use a smaller batch size of 4 so that training fits on an 11GB GPU (e.g. an Nvidia GTX 1080 Ti); for the rest of the experiments we use a batch size of 8. So, for clean data, replace the last line with:
python train.py --noise_type clean --train_batch_size 4
and for noisy data:
python train.py --noise_type jitter --train_batch_size 4
The tensorboard summaries and, more importantly, the checkpoints will be saved in [root]/logs/[datetime]/*. Note that you need a recent version of tensorboard if you wish to visualize the point clouds (optional).
This script performs inference on the trained model, and computes evaluation metrics.
Note: replace --noise_type accordingly if not running on partial data.
python eval.py --noise_type crop --resume [path-to-logs/ckpt/model-best.pth]
Alternatively, given transforms saved in a .npy file of shape (B, [n_iter], 3, 4), you can evaluate them using:
python eval.py --noise_type crop --transform_file [path-to-transform-file.npy]
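As an illustration of the expected file layout, the sketch below saves identity transforms in the (B, [n_iter], 3, 4) format that --transform_file accepts. The batch size, iteration count, and file name here are placeholders, not values used by the repository.

```python
# Illustrative sketch: write rigid transforms in the (B, n_iter, 3, 4)
# layout expected by eval.py --transform_file. Each 3x4 slice is a
# [R | t] matrix; identity transforms are used as placeholders.
import numpy as np

B, n_iter = 2, 5                                        # hypothetical sizes
identity = np.hstack([np.eye(3), np.zeros((3, 1))])     # 3x4 [R | t]
transforms = np.tile(identity, (B, n_iter, 1, 1)).astype(np.float32)
assert transforms.shape == (B, n_iter, 3, 4)

np.save("pred_transforms.npy", transforms)              # hypothetical file name
```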
Our pretrained models can be downloaded from here. You should be able to obtain the results shown in the paper by using these checkpoints.