Created by Leonid Pishchulin
This README provides instructions on how to evaluate your method's predictions on the PoseTrack dataset, either locally or using the evaluation server.
$ git clone https://github.com/leonid-pishchulin/poseval.git --recursive
$ cd poseval/py && export PYTHONPATH=$PWD/../py-motmetrics:$PYTHONPATH
## Data preparation

Evaluation requires the ground-truth (GT) annotations available at PoseTrack, as well as your method's predictions. Both GT annotations and predictions must be saved in JSON format. Like the GT annotations, predictions must be stored per sequence, use the same structure as the GT annotations, and have the same filenames as the corresponding GT files. Example of the JSON prediction structure:
{
   "annolist": [
      {
         "image": [
            {
               "name": "images\/bonn_5sec\/000342_mpii\/00000001.jpg"
            }
         ],
         "annorect": [
            {
               "x1": [625],
               "y1": [94],
               "x2": [681],
               "y2": [178],
               "score": [0.9],
               "track_id": [0],
               "annopoints": [
                  {
                     "point": [
                        {
                           "id": [0],
                           "x": [394],
                           "y": [173],
                           "score": [0.7]
                        },
                        { ... }
                     ]
                  }
               ]
            },
            { ... }
         ]
      },
      { ... }
   ]
}
Note: values of `track_id` must be integers from the interval [0, 999].
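For illustration, below is a minimal Python sketch that writes a single-sequence prediction file in this structure. All numeric values and the output filename (000342_mpii.json) are placeholders; the actual filename must match the corresponding GT annotation file:

```python
import json

# A minimal single-sequence prediction in the structure shown above.
# All numbers are placeholder values. Per-joint "score" entries are the
# joint detection scores required for pose estimation evaluation, and
# "track_id" (an integer in [0, 999]) is required for tracking evaluation.
prediction = {
    "annolist": [
        {
            "image": [{"name": "images/bonn_5sec/000342_mpii/00000001.jpg"}],
            "annorect": [
                {
                    "x1": [625], "y1": [94], "x2": [681], "y2": [178],
                    "score": [0.9],
                    "track_id": [0],
                    "annopoints": [
                        {
                            "point": [
                                {"id": [0], "x": [394], "y": [173], "score": [0.7]},
                            ]
                        }
                    ],
                }
            ],
        }
    ]
}

# The prediction file must have the same filename as the GT annotation
# file for this sequence ("000342_mpii.json" is a hypothetical example).
with open("000342_mpii.json", "w") as f:
    json.dump(prediction, f)
```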
We also provide a way to convert a Matlab structure into JSON format:
$ cd poseval/matlab
$ matlab -nodisplay -nodesktop -r "mat2json('/path/to/dir/with/mat/files/'); quit"
## Evaluation metrics

This code performs evaluation of per-frame multi-person pose estimation and of video-based multi-person pose tracking.
The Average Precision (AP) metric is used to evaluate per-frame multi-person pose estimation. Our implementation follows the measure proposed in [1] and requires predicted body poses with per-joint detection scores as input. First, multiple body pose predictions are greedily assigned to the ground-truth (GT) poses based on the highest PCKh [3]; only a single predicted pose can be assigned to each GT pose, and unassigned predictions are counted as false positives. Finally, the part detection score is used to compute AP for each body part, and the mean AP over all body parts is reported as well.
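The assignment step can be illustrated with the following simplified sketch. This is not the repository's actual implementation; pose_pckh is a hypothetical helper returning the PCKh score between a predicted and a GT pose:

```python
# Simplified sketch of the greedy pose-to-GT assignment described above.
# Predictions are matched in order of decreasing PCKh; each GT pose can
# absorb at most one prediction, and leftovers become false positives.
def greedy_assign(predictions, gt_poses, pose_pckh, pckh_thresh=0.5):
    # Score every (prediction, GT) pair by PCKh.
    pairs = [
        (pose_pckh(p, g), pi, gi)
        for pi, p in enumerate(predictions)
        for gi, g in enumerate(gt_poses)
    ]
    pairs.sort(reverse=True)  # highest PCKh first

    assigned_pred, assigned_gt, matches = set(), set(), []
    for pckh, pi, gi in pairs:
        if pckh < pckh_thresh:
            break  # remaining pairs are all below the threshold
        if pi in assigned_pred or gi in assigned_gt:
            continue  # only a single pose can be assigned to each GT
        assigned_pred.add(pi)
        assigned_gt.add(gi)
        matches.append((pi, gi))

    false_positives = [pi for pi in range(len(predictions))
                       if pi not in assigned_pred]
    return matches, false_positives
```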
Multiple Object Tracking (MOT) metrics [2] are used to evaluate video-based pose tracking. Our implementation builds on the MOT evaluation code [4] and requires predicted body poses with tracklet IDs as input. First, for each frame and each body joint class, distances between predicted and GT joint locations are computed. Then, predicted and GT tracklet IDs are taken into account: only (prediction, GT) pairs whose distance does not exceed the PCKh [3] threshold are considered during the global matching of predicted tracklets to GT tracklets for each body joint, and the global matching minimizes the total assignment distance. Finally, Multiple Object Tracking Accuracy (MOTA), Multiple Object Tracking Precision (MOTP), Precision, and Recall are computed. We report MOTA for each body joint class and averaged over all body joints; for MOTP, Precision, and Recall we report only the averages.
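Since the implementation builds on py-motmetrics [4], the per-joint tracking evaluation can be sketched roughly as follows. This is an illustration rather than the repository's actual code; the distance matrix would hold the PCKh-based distances described above, with pairs above the threshold marked as unmatchable (NaN):

```python
import numpy as np
import motmetrics as mm

# One accumulator per body joint class; shown here for a single joint.
acc = mm.MOTAccumulator(auto_id=True)

# Per frame: GT track IDs, predicted track IDs, and the distance matrix
# between GT and predicted joint locations. np.nan marks pairs whose
# distance exceeds the PCKh threshold, so they can never be matched.
gt_ids = [0, 1]
pred_ids = [0, 1]
dists = [[0.1, np.nan],
         [np.nan, 0.3]]
acc.update(gt_ids, pred_ids, dists)

# Compute MOTA, MOTP, precision and recall over all accumulated frames.
mh = mm.metrics.create()
summary = mh.compute(acc, metrics=["mota", "motp", "precision", "recall"])
print(summary)
```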
## Evaluation

The evaluation code has been tested on Linux (Ubuntu). It takes as input the path to a directory with GT annotations and the path to a directory with predictions. See "Data preparation" for details on the prediction format.
$ git clone https://github.com/leonid-pishchulin/poseval.git --recursive
$ cd poseval/py && export PYTHONPATH=$PWD/../py-motmetrics:$PYTHONPATH
$ python evaluate.py \
--groundTruth=/path/to/annotations/val/ \
--predictions=/path/to/predictions \
--evalPoseTracking \
--evalPoseEstimation
Evaluation of multi-person pose estimation requires joint detection scores, while evaluation of pose tracking requires predicted tracklet IDs per pose.
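Since both the local script and the server expect one prediction file per GT file with identical filenames, a quick consistency check before running the evaluation can save a round-trip. A minimal sketch, where the directory paths are placeholders for your local layout:

```python
import os

# Placeholder paths; adjust to your local layout.
gt_dir = "/path/to/annotations/val/"
pred_dir = "/path/to/predictions/"

gt_files = {f for f in os.listdir(gt_dir) if f.endswith(".json")}
pred_files = {f for f in os.listdir(pred_dir) if f.endswith(".json")}

missing = sorted(gt_files - pred_files)
if missing:
    # evaluate.py expects one prediction per GT file, with the same name.
    print("Missing prediction files:", missing)
```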
To evaluate using the evaluation server, zip your directory containing the JSON prediction files and submit it at https://posetrack.net. You will shortly receive an email containing the evaluation results. Before submitting your results to the evaluation server, make sure you are able to evaluate locally on the val set, to avoid issues caused by incorrectly formatted predictions.
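One way to create the archive from Python, as a minimal sketch (shutil.make_archive is the standard-library helper; the paths are placeholders):

```python
import shutil

# Zip the directory containing the per-sequence prediction JSON files.
# Produces predictions.zip in the current working directory.
shutil.make_archive("predictions", "zip", "/path/to/predictions")
```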
## References

[1] DeepCut: Joint Subset Partition and Labeling for Multi Person Pose Estimation. L. Pishchulin, E. Insafutdinov, S. Tang, B. Andres, M. Andriluka, P. Gehler, and B. Schiele. In CVPR'16
[2] Evaluating multiple object tracking performance: the CLEAR MOT metrics. K. Bernardin and R. Stiefelhagen. EURASIP J. Image Vide.'08
[3] 2D Human Pose Estimation: New Benchmark and State of the Art Analysis. M. Andriluka, L. Pishchulin, P. Gehler, and B. Schiele. In CVPR'14
[4] https://github.com/cheind/py-motmetrics
For further questions and details, contact the PoseTrack team at admin@posetrack.net.
This project addresses multi-person pose tracking, the task of estimating and tracking human keypoints in video. It proposes a pose-guided detect-and-track framework that fuses pose information into both the video human detection and the data association stages. Specifically, the model employs a pose-guided single-object tracker that exploits temporal information to compensate for missed detections in the video human detection stage. In addition, for the data association stage, it proposes a hierarchical pose-guided graph convolutional network (PoseGCN) as an appearance-discrimination model; the GCN-based model exploits the structural relations between human body parts to strengthen the person representation.
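As a rough, generic illustration of the core operation behind such a GCN-based appearance model (not this project's actual implementation): a single graph-convolution layer propagates per-keypoint features along the skeleton's adjacency structure. A minimal numpy sketch with an assumed toy skeleton:

```python
import numpy as np

def gcn_layer(X, A, W):
    """One graph-convolution layer over skeleton keypoints:
    X' = ReLU(D^-1/2 (A + I) D^-1/2 X W)."""
    A_hat = A + np.eye(A.shape[0])          # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))  # symmetric normalization
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ W, 0.0)

# Toy example: 3 keypoints in a chain (e.g. shoulder-elbow-wrist),
# 8-dim input features, 16-dim output features.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
X = np.random.randn(3, 8)
W = np.random.randn(8, 16)
out = gcn_layer(X, A, W)  # (3, 16) structure-aware keypoint features
```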