MAPLE: Masked Pseudo-Labeling autoEncoder for Semi-supervised Point Cloud Action Recognition. (ACM MM 2022 Oral)
Our MAPLE project is based on FastReID, P4Transformer (2021 version), 3DV, and PointNet++.
Install the compiled point-cloud modules (built for PyTorch 1.8.1):
mv modules-pytorch-1.8.1 modules
cd modules
python setup.py install
For data-loading acceleration, @functools.lru_cache(5000) needs about 160 GB of RAM, while @functools.lru_cache(500) needs about 16 GB.
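A minimal sketch of how the cache size trades RAM for loading speed (the function name and the per-clip loading logic are illustrative, not the repo's actual loader):

import functools
import numpy as np

@functools.lru_cache(maxsize=500)  # raise to 5000 if ~160 GB of RAM is available
def load_clip(npz_path):
    # Cache decoded point-cloud clips so repeated epochs skip disk I/O.
    with np.load(npz_path) as f:
        return {name: f[name] for name in f.files}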
The preparation of the MSR-Action3D dataset
Download MSR-Action3D from the URL and move the file:
mv Depth.rar ./data/MSRAction3D/
Unrar the Depth.rar file and preprocess the MSR-Action3D dataset:
cd ./data/MSRAction3D/
# unrar the archive
unrar e Depth.rar
# mkdir
mkdir ./point
# preprocess
python preprocess_file.py --input_dir ./Depth --output_dir ./point --num_cpu 8
The directory structure should look like this:
MAPLE
├── datasets
├── modules
└── data
    └── MSRAction3D
        ├── preprocess_file.py
        ├── Depth
        └── point
            ├── a01_s01_e01_sdepth.npz
            ├── a01_s01_e02_sdepth.npz
            ├── a01_s01_e03_sdepth.npz
            └── ...
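To sanity-check the preprocessing output, a minimal sketch that only lists the arrays stored in one clip (the array names depend on preprocess_file.py, so none are assumed here):

import numpy as np

# Inspect one preprocessed clip to confirm it was written correctly.
with np.load("./data/MSRAction3D/point/a01_s01_e01_sdepth.npz") as f:
    for name in f.files:
        print(name, f[name].shape, f[name].dtype)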
The preparation of the NTU RGB+D 60/120 dataset
# mkdir
mkdir ./data/ntu/npy_faster
mkdir ./data/ntu/npy_faster/point_reduce_without_sample
# mv
mv nturgbd_depth_masked_s0??.zip ./data/ntu/
# cd
cd ./data/ntu/
# unzip each archive
for f in nturgbd_depth_masked_s0??.zip; do unzip "$f"; done
# runs in the background for 2~4 hours
bash depth2point4ntu.sh
# After 2~4 hours, check whether the number of files is 114480
ls ./npy_faster/point_reduce_without_sample/ -l | grep "^-" | wc -l
The directory structure should look like this:
MAPLE
├── datasets
├── modules
└── data
    └── ntu
        ├── depth2point4ntu.py
        ├── depth2point4ntu.sh
        ├── nturgb+d_depth_masked
        └── npy_faster
            └── point_reduce_without_sample
                ├── S001C001P001R001A001.npy
                ├── S001C001P001R001A002.npy
                └── ...
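To verify the conversion, a minimal sketch that checks the file count and spot-checks one clip (the array layout depends on depth2point4ntu.py, so it is only printed):

import glob
import numpy as np

files = sorted(glob.glob("./data/ntu/npy_faster/point_reduce_without_sample/*.npy"))
print(f"{len(files)} files (expected 114480)")

if files:
    # Spot-check the first clip; allow_pickle covers object arrays of variable-length frames.
    sample = np.load(files[0], allow_pickle=True)
    print(files[0], sample.dtype, getattr(sample, "shape", None))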
# Train the baseline on the MSR-Action dataset
bash ./train-msr-baseline.sh
# Train the baseline on the NTU-60 dataset
bash ./train-ntu-baseline.sh
# Train the baseline on the NTU-120 dataset
bash ./train-ntu-120-baseline.sh
# Train with pseudo labels on the MSR-Action dataset
bash ./pseudo_labels/train_msr_pseudo.sh
# Train with pseudo labels on the NTU-60 dataset
bash ./pseudo_labels/train_ntu_pseudo.sh
# Train with pseudo labels on the NTU-120 dataset
bash ./pseudo_labels/train_ntu120_pseudo.sh
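For background, pseudo-label training in its generic form keeps only confident predictions on unlabeled clips; a minimal sketch (not the repo's pseudo_labels scripts; the threshold and names are illustrative):

import torch
import torch.nn.functional as F

def pseudo_label_loss(model, x_unlabeled, threshold=0.95):
    # First pass without gradients to pick confident pseudo labels.
    with torch.no_grad():
        probs = F.softmax(model(x_unlabeled), dim=1)
        conf, pseudo = probs.max(dim=1)
        mask = (conf >= threshold).float()
    # Second pass with gradients, supervised by the pseudo labels.
    logits = model(x_unlabeled)
    loss = F.cross_entropy(logits, pseudo, reduction="none")
    return (loss * mask).sum() / mask.sum().clamp(min=1.0)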
# Using the MSR-Action dataset as an example; bash files for the other datasets are under ./vat
# Train with VAT + EntMin
bash vat/train_vat_msr_gpu0.sh
# Or train with VAT + EntMin, resuming from a pretrained model
bash vat/train_vat_msr_gpu0_resume.sh
# To train with VAT only, remove --vat-EntMin from the bash file, e.g., ./vat/train_vat_ntu.sh
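For background, a generic sketch of the VAT + entropy-minimization objective in PyTorch (this is the textbook formulation, not the code behind --vat-EntMin; model, xi, eps, and alpha are illustrative):

import torch
import torch.nn.functional as F

def vat_entmin_loss(model, x_unlabeled, xi=1e-6, eps=1.0, alpha=1.0):
    # Predictions on clean unlabeled inputs; reused for the entropy term.
    logits = model(x_unlabeled)
    p = F.softmax(logits, dim=1).detach()  # fixed target for the VAT term

    # One step of power iteration to find the adversarial direction.
    d = torch.randn_like(x_unlabeled)
    d = d / (d.flatten(1).norm(dim=1).view(-1, *([1] * (d.dim() - 1))) + 1e-8)
    d.requires_grad_(True)
    kl = F.kl_div(F.log_softmax(model(x_unlabeled + xi * d), dim=1), p, reduction="batchmean")
    grad = torch.autograd.grad(kl, d)[0]
    r_adv = eps * grad / (grad.flatten(1).norm(dim=1).view(-1, *([1] * (grad.dim() - 1))) + 1e-8)

    # VAT: predictions should stay close under the adversarial perturbation.
    vat = F.kl_div(F.log_softmax(model(x_unlabeled + r_adv.detach()), dim=1), p, reduction="batchmean")

    # EntMin: encourage confident (low-entropy) predictions on unlabeled data.
    ent = -(F.softmax(logits, dim=1) * F.log_softmax(logits, dim=1)).sum(dim=1).mean()
    return vat + alpha * ent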
# Using the MSR-Action dataset as an example; bash files for the other datasets are under ./z_mask
# Train MAPLE
bash ./z_mask/train_mse_msr_gpu0.sh
# Using the MSR-Action dataset as an example; bash files for the other datasets are under ./z_mask
# Train VAT + EntMin + MAPLE: put the best pretrained VAT + EntMin model for MSR-Action under ./output_msr/entmin/, e.g., ./output_msr/entmin/model_best_1.pth
bash ./z_mask/train_mse_msr_gpu0_resume_from_entmin_mask.sh
If you use MAPLE in your research or wish to refer to the baseline results published in the Model Zoo, please use the following BibTeX entry.
@inproceedings{chen2022MAPLE,
title={MAPLE: Masked Pseudo-Labeling autoEncoder for Semi-supervised Point Cloud Action Recognition},
author={Xiaodong Chen and Wu Liu and Xinchen Liu and Yongdong Zhang and Jungong Han and Tao Mei},
booktitle={ACM Multimedia (ACM MM)},
year={2022}
}