The PARL team won first place in the NeurIPS reinforcement learning competition, again! This folder contains our final submitted model and the code related to the training process.
How to Run
final_submit
Download the model file from online storage service: Baidu Pan (password: b5ck) or Google Drive, then extract it and run the evaluation:
tar zxvf saved_models.tar.gz
python test.py
The curriculum learning pipeline for obtaining a slow-walking model is the same as the pipeline in our winning solution for the NeurIPS 2018: AI for Prosthetics Challenge. You can train a slow-walking model by following that guide.
We also provide a pre-trained model that walks naturally at ~1.3 m/s. You can download the model file (named low_speed_model) from online storage service: Baidu Pan (password: q9vj) or Google Drive.
We built our distributed training agent on top of the PARL cluster. To start a PARL cluster, execute the following two xparl commands:
# Start a master node to manage computation resources, adding the local CPUs to the cluster.
xparl start --port 8010

# If necessary, add more CPUs (computation resources) from other machines to the cluster.
xparl connect --address [CLUSTER_IP]:8010
For more information about xparl, please visit the documentation.
In this example, we can start a local cluster with 300 CPUs by running:
xparl start --port 8010 --cpu_num 300
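Conceptually, the cluster lets the training script fan rollouts out to many parallel actors and aggregate their results in a central learner. The stand-in sketch below uses Python's multiprocessing pool instead of PARL's remote actors (which would run on cluster CPUs added via `xparl connect`); the `rollout` function is a placeholder, not the repo's actual actor code:

```python
from multiprocessing import Pool

# Stand-in for PARL's remote actors: workers run rollouts in parallel
# and the learner aggregates the returned rewards.
def rollout(seed):
    # Placeholder for one environment episode; returns a fake reward.
    return float(seed % 3)

if __name__ == "__main__":
    with Pool(4) as pool:                     # 4 local workers instead of 300 cluster CPUs
        rewards = pool.map(rollout, range(8)) # one rollout per seed
    print(sum(rewards) / len(rewards))        # learner-side aggregation
```

In the real code, the same fan-out pattern is expressed with PARL's remote actor abstraction, so adding machines to the cluster scales the number of parallel rollouts without changing the training loop.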
Then, we can start the distributed training by running:
# NOTE: You need to provide a self-trained model, or download the `low_speed_model` mentioned above.
sh scripts/train_difficulty1.sh ./low_speed_model
Optionally, you can start the distributed evaluation by running:
sh scripts/eval_difficulty1.sh
After each stage converges, continue the curriculum with the next difficulty level, passing in the model trained at the previous stage:
sh scripts/train_difficulty2.sh [TRAINED DIFFICULTY=1 MODEL]
sh scripts/train_difficulty3_first_target.sh [TRAINED DIFFICULTY=2 MODEL]
sh scripts/train_difficulty3.sh [TRAINED DIFFICULTY=3 FIRST TARGET MODEL]
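The stages above can be chained with a small driver script. The sketch below uses the stage scripts from this repo, but the output model paths are placeholders (the real paths depend on where each training script saves its checkpoints); `dry_run=True` only collects the commands instead of launching training:

```python
import subprocess

# Curriculum stages: (training script, input model path).
# Model paths other than low_speed_model are hypothetical placeholders.
STAGES = [
    ("scripts/train_difficulty1.sh", "./low_speed_model"),
    ("scripts/train_difficulty2.sh", "./difficulty1_model"),
    ("scripts/train_difficulty3_first_target.sh", "./difficulty2_model"),
    ("scripts/train_difficulty3.sh", "./difficulty3_first_target_model"),
]

def run_curriculum(stages, dry_run=False):
    """Run each stage in order, feeding it the previous stage's model."""
    commands = []
    for script, model in stages:
        cmd = ["sh", script, model]
        commands.append(cmd)
        if not dry_run:
            subprocess.run(cmd, check=True)  # abort the curriculum if a stage fails
    return commands

cmds = run_curriculum(STAGES, dry_run=True)
```

With `dry_run=False` the stages would run sequentially, which matches the curriculum: each difficulty level is trained only after the previous one has produced a usable model.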