This folder provides NanoDet inference code using Alibaba's MNN framework. Most of the implementation in this folder is the same as in demo_ncnn.
Just run:

```shell
pip install MNN
```
Please follow the official documentation to build the MNN engine.
Export the ONNX model:

```shell
python tools/export_onnx.py --cfg_path ${CONFIG_PATH} --model_path ${PYTORCH_MODEL_PATH}
```
Convert to MNN:

```shell
python -m MNN.tools.mnnconvert -f ONNX --modelFile sim.onnx --MNNModel nanodet.mnn
```
Note that the input size does not have to be fixed; since NanoDet is anchor-free, it can be any integer multiple of the strides. You can adapt the shape of dummy_input in ./tools/export_onnx.py to export ONNX and MNN models with different input sizes.
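Since the only constraint is divisibility by the strides, a candidate input size can be checked with a small helper like the following (an illustrative sketch, not part of the demo code; `valid_input_size` is a hypothetical name):

```cpp
#include <vector>

// Hypothetical helper: an input dimension works for an anchor-free
// NanoDet export as long as every feature-map stride divides it evenly.
bool valid_input_size(int size, const std::vector<int>& strides) {
    for (int s : strides) {
        if (size % s != 0) return false;
    }
    return true;
}
```

For example, 320 is valid for strides {8, 16, 32}, while 300 is not (300 is not divisible by 8).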
Converted models are available here: Download Link.
For the C++ code, replace libMNN.so under ./mnn/lib with the one you just compiled, modify the OpenCV path in the CMake file, and run:

```shell
mkdir build && cd build
cmake ..
make
```
Note that a flag in main.cpp controls whether to show the detection results or save them into a folder:

```cpp
#define __SAVE_RESULT__ // if defined, save drawn results to ../results; otherwise show them in a window
```
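A minimal sketch of how such a compile-time switch behaves (illustrative only; the actual main.cpp branches between OpenCV's image-writing and window-display calls):

```cpp
#include <string>

#define __SAVE_RESULT__  // comment this line out to show results instead

// Illustrative stand-in for the demo's result handling: with the flag
// defined the result would be written to disk, otherwise shown on screen.
std::string handle_result() {
#ifdef __SAVE_RESULT__
    return "saved";  // the demo would write the image to ../results here
#else
    return "shown";  // the demo would display the image in a window here
#endif
}
```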
The multi-backend Python demo is still a work in progress.
The C++ inference interface is the same as in the NCNN code. To detect images in a folder, run:

```shell
./nanodet-mnn "1" "../imgs/*.jpg"
```
For a speed benchmark, run:

```shell
./nanodet-mnn "3" "0"
```
If you want to use a custom model, please make sure the hyperparameters in nanodet_mnn.hpp are the same as in your training config file:

```cpp
int input_size[2] = {416, 416};                // input height and width
int num_class = 80;                            // number of classes, 80 for COCO
int reg_max = 7;                               // `reg_max` set in the training config, default: 7
std::vector<int> strides = { 8, 16, 32, 64 };  // strides of the multi-level feature maps
```
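One quick consistency check (this relies on an assumption about the NanoDet head layout that is not stated in this README): each output location carries `num_class` class scores plus `4 * (reg_max + 1)` box-distribution bins, so these two settings together determine the per-location channel count of the converted model:

```cpp
// Sketch: expected per-location output channels for a NanoDet-style head,
// given the class count and the `reg_max` used during training.
int expected_channels(int num_class, int reg_max) {
    return num_class + 4 * (reg_max + 1);
}
```

With the defaults above (80 classes, `reg_max` = 7), this gives 80 + 4 * 8 = 112 channels; if your model's output shape disagrees, the hyperparameters likely do not match the training config.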