This is a PyTorch implementation of MADDPG on the Multi-Agent Particle Environment (MPE). The corresponding paper is "Multi-Agent Actor-Critic for Mixed Cooperative-Competitive Environments".
$ python main.py --scenario-name=simple_tag --evaluate-episodes=10

Running main.py directly evaluates the algorithm on the 'simple_tag' scenario for 10 episodes using the pretrained model.
We have trained the agents on the 'simple_tag' scenario, but the model we provide is not the best, since we did not want to spend more time on training; you can keep training it for better performance.
There are 4 agents in simple_tag: 3 predators and 1 prey. We use MADDPG to train the predators to catch the prey. The prey's actions can be controlled by you; in our case we make them random.
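A random prey policy can be as simple as sampling a uniform action each step. The sketch below is an assumption, not code from this repository: it assumes MPE's usual 5-dimensional continuous action per agent (no-op plus movement along ±x and ±y), and the function name `random_prey_action` is hypothetical.

```python
import random

def random_prey_action(action_dim=5, low=-1.0, high=1.0):
    """Sample a uniformly random action for the prey.

    Assumptions (not taken from this repo): a 5-d continuous action
    vector in [low, high], matching MPE's default action layout.
    """
    return [random.uniform(low, high) for _ in range(action_dim)]
```

You would feed this action into the environment step alongside the predators' MADDPG actions.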
The default setting of the Multi-Agent Particle Environment (MPE) is sparse reward. You can change it to dense reward by replacing 'shape=False' with 'shape=True' in multiagent-particle-envs/multiagent/scenarios/simple_tag.py.
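To illustrate what the `shape` flag typically changes, here is a minimal sketch of a predator reward in the MPE style. The constants (a +10 collision bonus and a 0.1 distance penalty) mirror common MPE defaults but are assumptions here, not values verified against this repository's copy of simple_tag.py.

```python
import math

def adversary_reward(pred_pos, prey_pos, collided, shape=False):
    """Sketch of a simple_tag-style predator reward.

    Sparse (shape=False): a fixed bonus only on collision with the prey.
    Dense (shape=True): additionally, a small penalty proportional to
    the predator-prey distance, giving a learning gradient even when
    no collision occurs.
    """
    rew = 10.0 if collided else 0.0
    if shape:
        # Penalize distance to the prey so the predator is guided toward it.
        rew -= 0.1 * math.dist(pred_pos, prey_pos)
    return rew
```

With sparse reward the predator only learns from rare collisions, which is why switching to the shaped variant can speed up early training.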