Based on PARL, we reproduce the A2C deep reinforcement learning algorithm, matching the metrics reported in the original paper on the Atari benchmark.
A2C is a synchronous, deterministic variant of Asynchronous Advantage Actor-Critic (A3C). Instead of updating asynchronously as A3C or GA3C do, A2C waits for every actor to finish sampling before performing a single centralized update. Since the loss definitions of these A3C variants are identical, we use a common A3C algorithm, parl.algorithms.A3C, for both the A2C and GA3C examples.
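To make the synchronous update concrete, here is a minimal, self-contained sketch of the A2C pattern on a toy two-action bandit (illustrative only; the names and toy environment are ours, not PARL's API). All actors finish their rollouts, then one advantage-weighted policy-gradient update is applied:

```python
import numpy as np

class ToyActor:
    """Collects a fixed-length rollout from a toy two-action environment."""
    def __init__(self, seed):
        self.rng = np.random.default_rng(seed)

    def sample(self, policy_logits, steps=5):
        probs = np.exp(policy_logits) / np.exp(policy_logits).sum()
        actions = self.rng.choice(len(probs), size=steps, p=probs)
        rewards = (actions == 1).astype(float)  # action 1 is the rewarded one
        return actions, rewards

def a2c_update(policy_logits, value, rollouts, lr=0.1):
    """One synchronous update over the merged rollouts of all actors."""
    grad = np.zeros_like(policy_logits)
    return_sum, n = 0.0, 0
    for actions, rewards in rollouts:
        for a, r in zip(actions, rewards):
            adv = r - value                        # advantage = return - baseline
            probs = np.exp(policy_logits) / np.exp(policy_logits).sum()
            grad += adv * (np.eye(len(probs))[a] - probs)  # policy-gradient term
            return_sum += r
            n += 1
    value += lr * (return_sum / n - value)         # move critic toward mean return
    return policy_logits + lr * grad / n, value

logits, value = np.zeros(2), 0.0
actors = [ToyActor(seed=i) for i in range(5)]
for _ in range(200):
    # Synchronous step: wait for all actors' rollouts before updating.
    rollouts = [actor.sample(logits) for actor in actors]
    logits, value = a2c_update(logits, value, rollouts)
print(np.argmax(logits))  # the rewarded action should dominate
```

In A3C each actor would instead push its gradient as soon as its own rollout finished; the loss term being optimized is the same, which is why one algorithm class can serve both examples.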
Please see here for more information about the Atari games.
Mean episode reward during training, after 10 million sample steps.
Performance of A2C on various environments
First, we can start a local cluster with 5 CPUs:
xparl start --port 8010 --cpu_num 5
Note that if you have already started a master, you do not need to run the above
command. For more information about the cluster, please refer to our
documentation.
Then we can start the distributed training by running:
python train.py
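The synchronization that train.py distributes across the cluster's CPUs can be mimicked locally with a thread pool. This is a local stand-in sketch (hypothetical names; PARL's real actors run on remote cluster CPUs): submit all five actors, block until every rollout returns, then hand the merged batch to the learner:

```python
from concurrent.futures import ThreadPoolExecutor
import random

def sample_rollout(actor_id):
    """Fake rollout: each actor returns a batch of 4 toy transitions."""
    rng = random.Random(actor_id)
    return [rng.random() for _ in range(4)]

with ThreadPoolExecutor(max_workers=5) as pool:
    # A2C's synchronous step: map blocks until all 5 actors have finished,
    # mirroring the "wait for each actor" behavior described above.
    batches = list(pool.map(sample_rollout, range(5)))

merged = [x for batch in batches for x in batch]  # learner consumes this
print(len(merged))  # 5 actors x 4 transitions = 20 per synchronous step
```

In the distributed setting the same barrier exists, only the sampling happens on the CPUs you registered with xparl.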