I am just a mortal with many tasks and limited time. Ain't nobody got time for unit tests.

HOWEVER, at least some integration tests should be performed, testing nnU-Net from start to finish.

This test covers all possible labeling scenarios (standard labels, regions, ignore labels and regions with ignore labels). It runs the entire nnU-Net pipeline from start to finish. To speed things up, a couple of shortcuts are taken when building the dummy datasets; see `add_lowres_and_cascade.py` to learn more!

Set your pwd to be the nnU-Net repo folder (the one where the `nnunetv2` folder and `setup.py` are located!).
Now generate the 4 dummy datasets (ids 996, 997, 998, 999) from dataset 4. This will crash if you don't have Dataset004!
```bash
bash nnunetv2/tests/integration_tests/prepare_integration_tests.sh
```
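Since the prepare script crashes without Dataset004, a quick pre-flight check can save a wasted run. This is only a sketch: it assumes the standard `nnUNet_raw` environment variable and the `DatasetXXX_Name` folder naming convention, and `has_dataset` is a hypothetical helper, not part of nnU-Net.

```python
import os
from pathlib import Path

def has_dataset(raw_dir: str, dataset_id: int) -> bool:
    """Check whether a DatasetXXX_* folder exists under the given raw-data dir."""
    pattern = f"Dataset{dataset_id:03d}_*"  # e.g. Dataset004_*
    return any(Path(raw_dir).glob(pattern))

if has_dataset(os.environ.get("nnUNet_raw", "."), 4):
    print("Dataset004 found - good to go")
else:
    print("Dataset004 missing - prepare_integration_tests.sh will crash")
```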
Now you can run the integration test for each of the datasets:
```bash
bash nnunetv2/tests/integration_tests/run_integration_test.sh DATASET_ID
```

Use DATASET_ID 996, 997, 998 and 999. You can run these independently on different GPUs/systems to speed things up.
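If you'd rather run all four on one machine sequentially, a small Python wrapper can drive them. This is just a sketch: `run_all` is a hypothetical helper, and it runs the datasets one after another rather than in parallel.

```python
import subprocess

SCRIPT = "nnunetv2/tests/integration_tests/run_integration_test.sh"
DATASET_IDS = (996, 997, 998, 999)

def run_all(dataset_ids=DATASET_IDS, dry_run=False):
    """Run the integration test for each dummy dataset, aborting on first failure."""
    commands = [["bash", SCRIPT, str(d)] for d in dataset_ids]
    if not dry_run:
        for cmd in commands:
            # check=True raises CalledProcessError as soon as one run fails
            subprocess.run(cmd, check=True)
    return commands
```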
This will take, I dunno, like 10-30 minutes!?
Also run

```bash
bash nnunetv2/tests/integration_tests/run_integration_test_trainingOnly_DDP.sh DATASET_ID
```

to verify that DDP is working (needs 2 GPUs!).
If I were not as lazy as I am, I would have programmed some automatism that checks whether Dice scores etc. are in an acceptable range. So you need to do the following manually: go to the corresponding `nnUNet_results/DATASET_NAME` folder and take a look at the `inference_information.json` file.

Once the integration test is completed, you can delete all the temporary files associated with it by running:
```bash
python nnunetv2/tests/integration_tests/cleanup_integration_test.py
```
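The manual inspection of `inference_information.json` described above could be semi-automated with a few lines of Python. Note that the key names (`foreground_mean`, `Dice`) are an assumption about the file's layout, and 0.9 is an arbitrary example threshold, not an official acceptance criterion; inspect your own file and adjust.

```python
import json
from pathlib import Path

def dice_in_range(json_path, min_dice=0.9):
    """Return True if the mean foreground Dice in the given file reaches min_dice.

    The 'foreground_mean' / 'Dice' keys are an assumption about the layout of
    inference_information.json; adjust them to match your actual file.
    """
    info = json.loads(Path(json_path).read_text())
    return info["foreground_mean"]["Dice"] >= min_dice
```

Usage would then be e.g. `dice_in_range("nnUNet_results/DATASET_NAME/inference_information.json")`.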