FastDeploy supports AI deployment on Rockchip SoCs through the Paddle Lite backend. For more detailed information, please refer to: Paddle Lite Deployment Example.
This document describes how to cross-compile the Paddle Lite based FastDeploy C++ library.
The relevant compilation options are described as follows:
| Compile Option | Default Value | Description | Remarks |
|---|---|---|---|
| ENABLE_LITE_BACKEND | OFF | Must be set to ON when compiling the RK library | - |
| WITH_TIMVX | OFF | Must be set to ON when compiling the RK library | - |
| TARGET_ABI | NONE | Must be set to armhf when compiling the RK library | - |
For more compilation options, please refer to the Description of FastDeploy compilation options.
You can enter the FastDeploy/tools/timvx directory and run the install script:

```shell
cd FastDeploy/tools/timvx
bash install.sh
```
Alternatively, you can install the dependencies manually with the following commands:

```shell
# 1. Install basic software
apt-get update
apt-get install -y --no-install-recommends \
  gcc g++ git make wget python unzip
# 2. Install the Arm GCC toolchains
apt-get install -y --no-install-recommends \
  g++-arm-linux-gnueabi gcc-arm-linux-gnueabi \
  g++-arm-linux-gnueabihf gcc-arm-linux-gnueabihf \
  gcc-aarch64-linux-gnu g++-aarch64-linux-gnu
# 3. Install CMake 3.10 or above
wget -c https://mms-res.cdn.bcebos.com/cmake-3.10.3-Linux-x86_64.tar.gz && \
  tar xzf cmake-3.10.3-Linux-x86_64.tar.gz && \
  mv cmake-3.10.3-Linux-x86_64 /opt/cmake-3.10 && \
  ln -s /opt/cmake-3.10/bin/cmake /usr/bin/cmake && \
  ln -s /opt/cmake-3.10/bin/ccmake /usr/bin/ccmake
```
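Before configuring the build, it can save time to confirm the toolchain installed correctly. A minimal sketch that checks whether the required tools are on `PATH` (the tool names follow the Debian/Ubuntu packages installed above):

```shell
# Sanity check: verify the armhf cross compilers and cmake are on PATH.
missing=0
for tool in arm-linux-gnueabihf-gcc arm-linux-gnueabihf-g++ cmake; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "found: $tool"
  else
    echo "MISSING: $tool"
    missing=1
  fi
done
# $missing is 1 if any tool is absent; fix the installation before building.
```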
After setting up the cross-compilation environment, the compilation commands are as follows:

```shell
# Download the latest source code
git clone https://github.com/PaddlePaddle/FastDeploy.git
cd FastDeploy
mkdir build && cd build
# CMake configuration with the RK toolchain
# -DENABLE_FLYCV=ON  enables FlyCV optimization
# -DENABLE_VISION=ON compiles the vision module
cmake -DCMAKE_TOOLCHAIN_FILE=./../cmake/toolchain.cmake \
      -DWITH_TIMVX=ON \
      -DTARGET_ABI=armhf \
      -DENABLE_FLYCV=ON \
      -DCMAKE_INSTALL_PREFIX=fastdeploy-timvx \
      -DENABLE_VISION=ON \
      -Wno-dev ..
# Build the FastDeploy RV1126 C++ SDK
make -j8
make install
```
After the compilation completes, a fastdeploy-timvx directory is generated, containing the compiled FastDeploy library based on Paddle Lite TIM-VX.
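Downstream example projects can then be pointed at this SDK. A minimal sketch, assuming an example project that follows the FastDeploy convention of taking the SDK path via `FASTDEPLOY_INSTALL_DIR` (as the RV1126 deployment examples below do); the paths here are placeholders for your actual layout:

```shell
# Hypothetical example build; adjust paths to where you built the SDK.
cmake -DCMAKE_TOOLCHAIN_FILE=./../cmake/toolchain.cmake \
      -DTARGET_ABI=armhf \
      -DFASTDEPLOY_INSTALL_DIR=/path/to/FastDeploy/build/fastdeploy-timvx ..
make -j8
```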
Before deployment, ensure that the version of the Verisilicon Linux kernel NPU driver galcore.ko meets the requirements. Log in to the development board and run the following command to query the NPU driver version (the recommended Rockchip driver version is 6.4.6.5):

```shell
dmesg | grep Galcore
```

If the current version does not match, please read the following content carefully to ensure that the underlying NPU driver environment is correct.
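The check above can also be scripted. This sketch extracts the version number from a driver log line and compares it against the recommended 6.4.6.5; the sample `line` stands in for the real `dmesg | grep Galcore` output, whose exact format is an assumption here:

```shell
# On the board, capture the real line with: line=$(dmesg | grep Galcore)
# The sample below stands in for that output (format assumed).
line="[   12.345678] Galcore version 6.4.6.5.601054"
# Pull out the first four-part version number from the line.
ver=$(echo "$line" | grep -oE '[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+' | head -n1)
if [ "$ver" = "6.4.6.5" ]; then
  echo "NPU driver OK: $ver"
else
  echo "NPU driver mismatch: got '$ver', want 6.4.6.5"
fi
```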
There are two ways to update the NPU driver to the required version:

Method 1: Replace the driver module manually

1. Download and extract the Paddle Lite demo package:

```shell
wget https://paddlelite-demo.bj.bcebos.com/devices/generic/PaddleLite-generic-demo.tar.gz
tar -xf PaddleLite-generic-demo.tar.gz
```

2. Run `uname -a` on the development board to check the Linux kernel version; suppose it is determined to be 4.19.111.
3. Upload the matching `galcore.ko` under `PaddleLite-generic-demo/libs/PaddleLite/linux/armhf/lib/verisilicon_timvx/viv_sdk_6_4_6_5/lib/1126/4.19.111/` to the development board.
4. On the board, run `sudo rmmod galcore` to unload the original driver, then `sudo insmod galcore.ko` to load the uploaded driver. (Whether `sudo` is needed depends on the actual situation of the development board; for some adb-connected devices, run `adb root` in advance.) If this step fails, go to Method 2.
5. Run `dmesg | grep Galcore` on the board to query the NPU driver version and confirm that it is now 6.4.6.5.

Method 2: Flash firmware with the required driver

According to the specific development board model, ask the development board seller or the official website customer service for the firmware corresponding to the 6.4.6.5 version of the NPU driver and the flashing method.
For more details, please refer to: Paddle Lite: Prepare the device environment.
For deploying the PaddleClas classification model on RV1126, please refer to: C++ deployment example of PaddleClas classification model on RV1126
For deploying PPYOLOE detection model on RV1126, please refer to: C++ deployment example of PPYOLOE detection model on RV1126
For deploying YOLOv5 detection model on RV1126, please refer to: C++ Deployment Example of YOLOv5 Detection Model on RV1126
For deploying PP-LiteSeg segmentation model on RV1126, please refer to: C++ Deployment Example of PP-LiteSeg Segmentation Model on RV1126