CVNets is a computer vision toolkit that allows researchers and engineers to train standard and novel mobile-
and non-mobile computer vision models for a variety of tasks, including object classification, object detection,
semantic segmentation, and foundation models (e.g., CLIP).
We recommend using Python 3.8+ and PyTorch v1.12.0 or newer.
The instructions below use Conda; if you don't have Conda installed, see How to Install Conda.
```bash
# Clone the repo
git clone git@github.com:apple/ml-cvnets.git
cd ml-cvnets

# Create a virtual environment. We use Conda
conda create -n cvnets python=3.8
conda activate cvnets

# Install the requirements and the CVNets package
pip install -r requirements.txt
pip install --editable .
```
To see a list of available models and benchmarks, please refer to the Model Zoo and the examples folder.
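CVNets entry points such as `main_train.py` and `main_eval.py` are driven by YAML configuration files selected with `--common.config-file`, and individual options can be overridden with dotted flags on the command line. As a rough, hypothetical sketch of how such dotted overrides map onto a nested configuration (this is illustrative, not CVNets' actual implementation, and the option names used below are examples):

```python
# Sketch: apply a dotted CLI override like
# "--model.classification.name mobilevit" to a nested config dict.
def apply_override(cfg, dotted_key, value):
    # Strip leading dashes and split the dotted path into keys.
    keys = dotted_key.lstrip("-").split(".")
    node = cfg
    # Walk (and create, if needed) the intermediate dictionaries.
    for k in keys[:-1]:
        node = node.setdefault(k, {})
    # Set the leaf value.
    node[keys[-1]] = value
    return cfg

cfg = {}
apply_override(cfg, "--model.classification.name", "mobilevit")
print(cfg)  # {'model': {'classification': {'name': 'mobilevit'}}}
```

The real option parser is more involved (types, defaults, validation), but the dotted-flag-to-nested-config mapping above captures the general idea.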
This code is developed by Sachin Mehta, and is now maintained by Sachin, Farzad Abdolhosseini, and Maxwell Horton.
Below is a list of publications from Apple that use CVNets:
We welcome PRs from the community! You can find information about contributing to CVNets in our contributing document.
Please remember to follow our Code of Conduct.
For license details, see LICENSE.
If you find our work useful, please cite the following papers:
```bibtex
@inproceedings{mehta2022mobilevit,
  title     = {MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer},
  author    = {Sachin Mehta and Mohammad Rastegari},
  booktitle = {International Conference on Learning Representations},
  year      = {2022}
}

@inproceedings{mehta2022cvnets,
  author    = {Mehta, Sachin and Abdolhosseini, Farzad and Rastegari, Mohammad},
  title     = {CVNets: High Performance Library for Computer Vision},
  year      = {2022},
  booktitle = {Proceedings of the 30th ACM International Conference on Multimedia},
  series    = {MM '22}
}
```