Documentation: https://open-dolphin.readthedocs.io
Dolphin is an open-source computer vision algorithm framework covering Object Detection, Semantic Segmentation, Video Action Analysis, Monocular Depth Estimation, Generative Adversarial Networks, and Active Learning.
The code was tested with Python 3.6, Ubuntu 16.04, and CUDA 10.0+. It is recommended to create a virtual environment using Conda:
conda create --name dolphin python=3.6
Then clone this repo and install the prerequisites with
pip install -r $(DOLPHIN_ROOT)/requirements.txt
(optional) pip install git+https://github.com/philferriere/cocoapi.git#subdirectory=PythonAPI
The last step is to install the whole project with
python setup.py install (or develop)
First, create a corresponding data directory in the project root, for example
mkdir $(DOLPHIN_ROOT)/data/depth
then run the data creation script at
$(DOLPHIN_ROOT)/scripts/depth/create.sh
to download the dataset and pretrained models if needed. These paths should then be assigned in the configuration file used for training or testing.
NOTE: For the active learning task, we use the MNIST dataset, which torchvision downloads automatically, so it is not necessary to run the create script for the dataset; however, the data_prefix path in the configuration file must still be set to indicate the data location.
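As an illustration, the dataset and pretrained-model paths might be wired into a task's yaml configuration roughly like this (a sketch only; apart from data_prefix, the key names below are assumptions, not the framework's exact schema):

```yaml
# Illustrative only: actual keys depend on the task's config schema.
data:
  data_prefix: data/depth            # directory created above
  pretrained: data/depth/model.pth   # weights downloaded by create.sh
```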
In the file
$(DOLPHIN_ROOT)/dolphin/utils/registry.py
the modules (or submodules) that need to be imported should be added to the corresponding module file lists. For instance, for the Depth Estimation task, the model modules include a backbone, a head, and a decoder, so their filenames have been added to the MODEL_MODULES list. Its algorithm module (the module in charge of combining the model modules) and engine module (the task-specific engine that includes the testing method) have likewise been added to the ALGORITHM_MODULE and ENGINE_MODULE lists, respectively. This step ensures all the components are callable by the algorithm.
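A minimal sketch of what such registry file lists might look like (the list names follow the text above, but their exact contents and the import helper are assumptions, not the real registry.py):

```python
import importlib

# Hypothetical contents modeled on the description above; the real
# dolphin/utils/registry.py may organize its lists differently.
MODEL_MODULES = ['backbone', 'head', 'decoder']   # model parts for depth estimation
ALGORITHM_MODULE = ['fcrn']                       # combines the model modules
ENGINE_MODULE = ['depth_engine']                  # task-specific engine with the test method

def import_all_modules(package, module_names):
    """Import every listed submodule so the classes it registers become callable."""
    return [importlib.import_module(f'{package}.{name}') for name in module_names]

# Example: importing a standard-library package's submodules works the same way.
imported = import_all_modules('xml', ['dom', 'sax'])
```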
NOTE: For how the imported modules are configured, refer to the yaml configuration files located in the directory $(DOLPHIN_ROOT)/dolphin/configs.
It is also necessary to add the parameters of every module to the configuration file, located in
$(DOLPHIN_ROOT)/dolphin/configs/
The FCRN algorithm for monocular depth estimation is an example of this.
The hierarchy of the configuration file consists of four parts:
engine, algorithm, data, runtime.
As the names suggest, the engine, algorithm, and data parts specify the corresponding modules. The remaining part, runtime, controls the workflow: the learning rate, total epochs, and work directory path can be set here. Additionally, the file logger.yaml in the configuration directory sets up the logger name, log file path, and so on.
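The four-part layout could look roughly like this (a hypothetical skeleton; the top-level parts follow the text above, but every key underneath them is illustrative, not the framework's actual schema):

```yaml
# Hypothetical skeleton of a Dolphin configuration file.
engine:
  type: DepthEngine         # which engine module to use
algorithm:
  type: FCRN                # combines backbone, head, and decoder
data:
  data_prefix: data/depth   # dataset location
runtime:
  lr: 0.001                 # learning rate
  total_epochs: 50
  work_dir: work_dirs/fcrn  # outputs and checkpoints
```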
After finishing the steps above, run
python $(DOLPHIN_ROOT)/dolphin/main.py --config $(CONFIGURATION FILE PATH)
in the terminal to start the task.
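The command-line interface above suggests an entry-point flow along these lines (a sketch under assumptions: the function names and the config-loading step are how such a script is typically structured, not the actual contents of main.py):

```python
import argparse

def build_parser():
    """Command-line interface matching the invocation shown above."""
    parser = argparse.ArgumentParser(description='Run a Dolphin task')
    parser.add_argument('--config', required=True,
                        help='path to the yaml configuration file')
    return parser

def main(argv=None):
    args = build_parser().parse_args(argv)
    # A real entry point would load the yaml config here and hand the
    # engine/algorithm/data/runtime sections to the framework.
    return args.config

# Example invocation, mirroring `python main.py --config path/to/task.yaml`
# (the config path is hypothetical):
cfg_path = main(['--config', 'configs/fcrn_depth.yaml'])
```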
This project is released under Open-Intelligence Open Source License V1.1.
This repo is mainly inspired by MMDetection,
PaddlePaddle CV and
Delta. We thank the authors for their incredible work.