Use a PaddleNLP version that includes the SimpleServing feature.
Sequence classification:
paddlenlp server server_seq_cls:app --host 0.0.0.0 --port 8189
python client_seq_cls.py --dataset afqmc

Token classification:
paddlenlp server server_token_cls:app --host 0.0.0.0 --port 8189
python client_token_cls.py

Question answering:
paddlenlp server server_qa:app --host 0.0.0.0 --port 8189
python client_qa.py
On the client side, you can set the max_seq_len and batch_size parameters:
data = {
    'data': {
        'text': texts,
        'text_pair': text_pairs if len(text_pairs) > 0 else None
    },
    'parameters': {
        'max_seq_len': args.max_seq_len,
        'batch_size': args.batch_size
    }
}
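For illustration, here is a minimal, self-contained sketch of building this payload and sending it to the server started above. The example inputs, the fixed parameter values, and the endpoint path are assumptions for demonstration only; the bundled client_seq_cls.py is the authoritative client.

```python
import json

# Example sentence pairs for a text-pair classification task (illustrative only).
texts = ["How do I repay Huabei?", "How do I repay Jiebei?"]
text_pairs = ["How to pay back Huabei?", "How to pay back Jiebei?"]

# Build the request payload in the format shown above,
# with max_seq_len and batch_size set on the client side.
data = {
    "data": {
        "text": texts,
        "text_pair": text_pairs if len(text_pairs) > 0 else None,
    },
    "parameters": {
        "max_seq_len": 128,
        "batch_size": 2,
    },
}

payload = json.dumps(data)

# To actually send the request (URL path is a placeholder; check the
# route registered in your server module):
# import requests
# resp = requests.post("http://0.0.0.0:8189/models/seq_cls", data=payload)
# print(resp.json())
```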