This paper addresses the multi-person pose tracking task, which aims to estimate and track human pose keypoints in video. We propose a pose-guided tracking-by-detection framework that fuses pose information into both the video human detection and the data association procedures. Specifically, we adopt a pose-guided single-object tracker to exploit temporal information and make up for missed detections in the video human detection stage. Furthermore, we propose a hierarchical pose-guided graph convolutional network (PoseGCN) based appearance discriminative model for the data association stage. The GCN-based model exploits human structural relations to boost the person representation.
Compared with other methods on the PoseTrack 2017 dataset, our approach scores 68.4 on the validation set and 60.2 on the test set, achieving state-of-the-art performance.
1. Create an Anaconda environment named PGPT with Python 3.7 and activate it.
2. Install pytorch==0.4.0 following the official instructions.
3. Clone this repo; we will call the directory that you cloned ${PGPT_ROOT}.
4. Install the dependencies (see the sketch after this list):
pip install -r requirements.txt
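A minimal sketch of these setup steps as shell commands, assuming conda and git are available; the repository URL is a placeholder, and you should pick the PyTorch build that matches your CUDA/Python setup from the official instructions:

```bash
# Create and activate the PGPT environment (Python 3.7)
conda create -n PGPT python=3.7
conda activate PGPT

# PyTorch 0.4.0 -- follow the official instructions for the build matching your CUDA version
conda install pytorch=0.4.0 -c pytorch

# Clone the repo (placeholder URL) and install the remaining dependencies
git clone <PGPT-repo-url> PGPT
cd PGPT   # this directory is ${PGPT_ROOT}
pip install -r requirements.txt
```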
Download the demo dataset and demo_val, and put them into the data folder in the following manner:
${PGPT_ROOT}
|--data
|--demodata
|--images
|--annotations
|--demo_val.json
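A placement sketch for the demo files, assuming images/, annotations/, and demo_val.json all live under data/demodata as in the tree above; the source paths are placeholders for wherever you downloaded the files:

```bash
cd ${PGPT_ROOT}
mkdir -p data/demodata
# Placeholder source paths: replace with your download locations
mv /path/to/demo/images       data/demodata/images
mv /path/to/demo/annotations  data/demodata/annotations
mv /path/to/demo_val.json     data/demodata/demo_val.json
```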
Download the PoseGCN model and the Tracker model, and put them into the models folder in the following manner:
${PGPT_ROOT}
|--models
|--pose_gcn.pth.tar
|--tracker.pth
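Similarly, a placement sketch for the downloaded checkpoints (source paths are placeholders):

```bash
cd ${PGPT_ROOT}
mkdir -p models
# Placeholder source paths: replace with your download locations
mv /path/to/pose_gcn.pth.tar  models/pose_gcn.pth.tar
mv /path/to/tracker.pth       models/tracker.pth
```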
Download the detection results for the demo, and put them into the results folder in the following manner.

Right now we do not provide the detection and pose estimation models that we implemented. Our detection module is based on Faster R-CNN and our pose estimation module on Simple Baseline; you can clone their repos and train your own detection and pose estimation modules.

In order to run the demo smoothly, we provide demo_detection.json, which contains the demo results of our detection model. Alternatively, you can run the demo with your own detection results in the same format as demo_detection.json.
${PGPT_ROOT}
|--results
|--demo_detection.json
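After placing demo_detection.json, a quick way to confirm that everything described above is in place (paths taken from the layouts shown in the previous steps):

```bash
cd ${PGPT_ROOT}
# Each of these should exist before running the demo
ls data/demodata \
   models/pose_gcn.pth.tar models/tracker.pth \
   results/demo_detection.json
```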
You can run the demo with the following commands:
cd ${PGPT_ROOT}
sh demo.sh
The demo results are saved in ${PGPT_ROOT}/results/demo, and the visualization results are saved in ${PGPT_ROOT}/results/render. You can modify inference/config.py to suit your own paths.

If you use this code for your research, please consider citing:
@article{TMM2020-PGPT,
  title   = {Pose-Guided Tracking-by-Detection: Robust Multi-Person Pose Tracking},
  author  = {Q. Bao and W. Liu and Y. Cheng and B. Zhou and T. Mei},
  journal = {IEEE Transactions on Multimedia},
  year    = {2020}
}