MMEval is a machine learning evaluation library that supports efficient and accurate distributed evaluation on a variety of machine learning frameworks.
Major features:

- Support for multiple machine learning frameworks.
- Efficient and accurate distributed evaluation, backed by several distributed communication backends.

The supported distributed communication backends:

| MPI4Py | torch.distributed | Horovod | paddle.distributed | oneflow.comm |
| :---: | :---: | :---: | :---: | :---: |
| MPI4PyDist | TorchCPUDist<br>TorchCUDADist | TFHorovodDist | PaddleDist | OneFlowDist |
NOTE: MMEval is tested with PyTorch 1.6+, TensorFlow 2.4+, Paddle 2.2+, and OneFlow 0.8+.
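The backends above all follow the same gather-then-compute idea: each worker evaluates only its shard of the data, the communication backend collects the per-worker results, and the metric is computed over the union. The sketch below simulates this with plain NumPy; `simulated_all_gather` is a hypothetical stand-in for a real collective op such as `torch.distributed.all_gather`, not part of MMEval's API.

```python
import numpy as np

def simulated_all_gather(shards):
    """Hypothetical stand-in for a collective op (e.g. an all-gather):
    concatenates the per-worker shards into one array."""
    return np.concatenate(shards)

# Two simulated "workers", each evaluating half of the dataset.
worker_preds = [np.array([0, 2]), np.array([1, 3])]
worker_labels = [np.array([0, 1]), np.array([2, 3])]

# Gather the shards, then compute the metric once over the full set.
preds = simulated_all_gather(worker_preds)
labels = simulated_all_gather(worker_labels)
top1 = float((preds == labels).mean())
print(top1)  # 0.5
```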
The metrics currently provided by MMEval, and the tensor types each accepts:

| Metric | numpy.ndarray | torch.Tensor | tensorflow.Tensor | paddle.Tensor | oneflow.Tensor |
| --- | :---: | :---: | :---: | :---: | :---: |
| Accuracy | ✔ | ✔ | ✔ | ✔ | ✔ |
| SingleLabelMetric | ✔ | ✔ | ✔ | | |
| MultiLabelMetric | ✔ | ✔ | ✔ | | |
| AveragePrecision | ✔ | ✔ | ✔ | | |
| MeanIoU | ✔ | ✔ | ✔ | ✔ | ✔ |
| VOCMeanAP | ✔ | | | | |
| OIDMeanAP | ✔ | | | | |
| COCODetection | ✔ | | | | |
| ProposalRecall | ✔ | | | | |
| F1Score | ✔ | ✔ | ✔ | | |
| HmeanIoU | ✔ | | | | |
| PCKAccuracy | ✔ | | | | |
| MpiiPCKAccuracy | ✔ | | | | |
| JhmdbPCKAccuracy | ✔ | | | | |
| EndPointError | ✔ | ✔ | ✔ | | |
| AVAMeanAP | ✔ | | | | |
| StructuralSimilarity | ✔ | | | | |
| SignalNoiseRatio | ✔ | | | | |
| PeakSignalNoiseRatio | ✔ | | | | |
| MeanAbsoluteError | ✔ | | | | |
| MeanSquaredError | ✔ | | | | |
MMEval requires Python 3.6+ and can be installed via pip:

```bash
pip install mmeval
```
To install the dependencies required for all the metrics provided in MMEval, use the following command:

```bash
pip install 'mmeval[all]'
```
There are two ways to use MMEval's metrics, using Accuracy as an example:

```python
from mmeval import Accuracy
import numpy as np

accuracy = Accuracy()
```
The first way is to directly call the instantiated Accuracy object to calculate the metric:

```python
labels = np.asarray([0, 1, 2, 3])
preds = np.asarray([0, 2, 1, 3])
accuracy(preds, labels)
# {'top1': 0.5}
```
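To make explicit what the call above reports, here is a minimal pure-NumPy equivalent of top-1 accuracy: the fraction of predictions that equal their labels. This is an illustrative re-implementation for clarity, not MMEval's internal code.

```python
import numpy as np

def top1_accuracy(preds, labels):
    """Illustrative top-1 accuracy: share of exact prediction/label matches."""
    preds = np.asarray(preds)
    labels = np.asarray(labels)
    return {'top1': float((preds == labels).mean())}

# Matches the example above: 2 of 4 predictions are correct.
print(top1_accuracy([0, 2, 1, 3], [0, 1, 2, 3]))  # {'top1': 0.5}
```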
The second way is to accumulate data from multiple batches with `add`, then calculate the metric with `compute`:

```python
for i in range(10):
    labels = np.random.randint(0, 4, size=(100, ))
    predicts = np.random.randint(0, 4, size=(100, ))
    accuracy.add(predicts, labels)

accuracy.compute()
# {'top1': ...}
```
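The add/compute pattern works because accuracy can be maintained as running counts: each batch updates the state incrementally, and the final value is derived only when requested. The class below is a hypothetical sketch of that design, not MMEval's BaseMetric implementation.

```python
import numpy as np

class StreamingAccuracy:
    """Illustrative sketch of the add/compute accumulation pattern."""

    def __init__(self):
        # Running counts updated batch by batch.
        self.correct = 0
        self.total = 0

    def add(self, predicts, labels):
        predicts = np.asarray(predicts)
        labels = np.asarray(labels)
        self.correct += int((predicts == labels).sum())
        self.total += predicts.size

    def compute(self):
        # Derive the metric from the accumulated counts.
        return {'top1': self.correct / self.total}

metric = StreamingAccuracy()
metric.add([0, 1], [0, 0])  # 1 of 2 correct
metric.add([2, 3], [2, 3])  # 2 of 2 correct
print(metric.compute())     # {'top1': 0.75}
```

Keeping only counts (rather than every prediction) keeps memory constant regardless of dataset size, which is what makes this pattern suit large-scale and distributed evaluation.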
We appreciate all contributions to improve MMEval. Please refer to CONTRIBUTING.md for the contributing guidelines.
This project is released under the Apache 2.0 license.