English | 简体中文
PARL is a flexible and highly efficient reinforcement learning framework.
- **Reproducible.** We provide algorithms that stably reproduce the results of many influential reinforcement learning algorithms.
- **Large Scale.** Supports high-performance parallel training with thousands of CPUs and multiple GPUs.
- **Reusable.** Algorithms provided in the repository can be adapted directly to a new task by defining a forward network; the training mechanism is built automatically.
- **Extensible.** Build new algorithms quickly by inheriting the abstract classes in the framework.
PARL is built around three abstractions:

- **Model** constructs the forward network, defining a policy network or critic network that takes the state as input.
- **Algorithm** describes the mechanism for updating the parameters in a Model and often contains at least one model.
- **Agent** is the data bridge between the environment and the algorithm: it is responsible for data I/O with the outside environment and for preprocessing data before feeding it into the training process.
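The way these three abstractions fit together can be illustrated with a plain-Python sketch. This is conceptual only; the class and method names below are illustrative, not PARL's actual base-class API:

```python
# Conceptual sketch of the Model / Algorithm / Agent layering.
# Illustrative names only; PARL's real base classes live in the
# `parl` package and have richer interfaces.

class Model:
    """Defines the forward network: state in, value estimates out."""
    def value(self, state):
        raise NotImplementedError

class Algorithm:
    """Owns one or more models and knows how to update their parameters."""
    def __init__(self, model):
        self.model = model

    def predict(self, state):
        return self.model.value(state)

    def learn(self, batch):
        raise NotImplementedError

class Agent:
    """Bridges the environment and the algorithm: data I/O and preprocessing."""
    def __init__(self, algorithm):
        self.alg = algorithm

    def predict(self, state):
        # Preprocessing of raw environment data would happen here
        # before the state reaches the algorithm.
        return self.alg.predict(state)
```

The key design point is that each layer only talks to the one below it: the agent never touches the network directly, so swapping DQN for DDQN changes only the Algorithm layer.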
Here is an example of building an agent with the DQN algorithm for Atari games.
```python
import parl
from parl import layers  # layer wrappers used by this version of PARL
from parl.algorithms import DQN, DDQN

class AtariModel(parl.Model):
    """AtariModel
    This class defines the forward part for an algorithm;
    its input is the state observed from the environment.
    """
    def __init__(self, img_shape, action_dim):
        # define your layers
        self.cnn1 = layers.conv2d(num_filters=32, filter_size=5,
                                  stride=1, padding=2, act='relu')
        ...
        self.fc1 = layers.fc(action_dim)

    def value(self, img):
        # define how to estimate the Q value based on the image of Atari games
        img = img / 255.0
        l = self.cnn1(img)
        ...
        Q = self.fc1(l)
        return Q

"""
Three steps to build an agent:
1. Define a forward model (the critic model, AtariModel, in this example).
2. a. To build a DQN algorithm, just pass the model to `DQN`.
   b. To build a DDQN algorithm, just replace DQN in the following line with DDQN.
3. Define the I/O part in AtariAgent so that it can update the algorithm
   based on the interactive data.
"""
model = AtariModel(img_shape=(32, 32), action_dim=4)
algorithm = DQN(model)
agent = AtariAgent(algorithm)  # AtariAgent (not shown) implements the I/O described in step 3
```
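The update that the DQN algorithm layer encapsulates boils down to regressing Q(s, a) toward a bootstrapped target, r + γ · max_a′ Q(s′, a′). A minimal NumPy sketch of that target computation, independent of PARL (function name and signature are illustrative):

```python
import numpy as np

def dqn_targets(rewards, next_q_values, terminals, gamma=0.99):
    """Compute DQN regression targets: r + gamma * max_a' Q(s', a'),
    dropping the bootstrap term on terminal transitions."""
    rewards = np.asarray(rewards, dtype=np.float64)
    next_q_values = np.asarray(next_q_values, dtype=np.float64)
    terminals = np.asarray(terminals, dtype=np.float64)
    max_next_q = next_q_values.max(axis=1)  # max over actions for each transition
    return rewards + gamma * (1.0 - terminals) * max_next_q

# Example: two transitions, four actions each.
targets = dqn_targets(
    rewards=[1.0, 0.0],
    next_q_values=[[0.1, 0.5, 0.2, 0.0], [1.0, 2.0, 0.0, 0.0]],
    terminals=[0, 1],
)
# First target: 1.0 + 0.99 * 0.5 = 1.495; second is terminal, so just 0.0.
```

DDQN differs only in how the bootstrap value is chosen (the online network picks the action, the target network evaluates it), which is why swapping `DQN` for `DDQN` in the example above is a one-line change.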
PARL provides a compact API for distributed training, allowing users to convert their code into a parallelized version by simply adding a decorator. Here is a `Hello World` example demonstrating how easy it is to leverage outer computation resources.
```python
#============Agent.py=================
import parl

@parl.remote_class
class Agent(object):

    def say_hello(self):
        print("Hello World!")

    def sum(self, a, b):
        return a + b

# launch `Agent.py` on any computation platform, such as a CPU cluster
if __name__ == '__main__':
    agent = Agent()
    agent.as_remote(server_address)
```

```python
#============Server.py=================
import parl

remote_manager = parl.RemoteManager()
agent = remote_manager.get_remote()
agent.say_hello()
ans = agent.sum(1, 5)  # runs remotely and does not consume any local computation resources
```
Two steps to use outer computation resources:

1. Use `parl.remote_class` to decorate a class; it is then transformed into a new class that can run on other CPUs or machines.
2. Get remote objects from the `RemoteManager`; these objects have the same functions as the real ones. However, calling any function of these objects does not consume local computation resources, since they are executed elsewhere.

Users can write code in a simple way, just like writing multi-threaded code, but with actors consuming remote resources. We have also provided examples of parallelized algorithms such as IMPALA, A2C, and GA3C. For more details on usage, please refer to these examples.
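The proxy idea behind `parl.remote_class` can be illustrated with a toy in-process version: the decorator returns a proxy class whose method calls are forwarded to a separately held worker object. This is a conceptual sketch only, not PARL's implementation; in PARL the forwarding crosses process and machine boundaries:

```python
def remote_class(cls):
    """Toy stand-in for parl.remote_class: wraps a class in a proxy
    that forwards every method call to a worker instance. Here the
    worker lives in-process purely to illustrate the calling pattern;
    a real implementation would serialize each call, send it over the
    network, and wait for the result."""
    class Proxy:
        def __init__(self, *args, **kwargs):
            # In PARL the worker would be instantiated on a remote host.
            self._worker = cls(*args, **kwargs)

        def __getattr__(self, name):
            # Called only for attributes not found on the proxy itself,
            # so every user-defined method is intercepted here.
            method = getattr(self._worker, name)
            def call(*args, **kwargs):
                return method(*args, **kwargs)
            return call
    return Proxy

@remote_class
class Agent:
    def sum(self, a, b):
        return a + b

agent = Agent()
result = agent.sum(1, 5)  # forwarded through the proxy
```

Because the proxy exposes the same method names as the decorated class, caller code is unchanged whether the work happens locally or remotely, which is exactly what makes the one-decorator migration possible.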
Install PARL via pip:

```shell
pip install parl
```