**`.gitignore`**
```text
# See https://help.github.com/articles/ignoring-files/ for more about ignoring files.
*.cfg
*venv*
*pycache*

# dependencies
/node_modules
/.pnp
.pnp.js

# testing
/coverage

# production
/build

# misc
.DS_Store
.env.local
.env.development.local
.env.test.local
.env.production.local
.env

npm-debug.log*
yarn-debug.log*
yarn-error.log*
/env
```
---

**`LICENSE`**
MIT License

Copyright (c) 2020 Shreya Shankar

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
---

**`README.md`**
# Yuan 1.0 Sandbox: Turn your ideas into demos in a matter of minutes

The Yuan 1.0 sandbox is built on top of the GPT-3 sandbox tool.

Initial release date: 12 March 2022

Note that this repository is not under active development, only basic maintenance.
## Description

The goal of this project is to enable users to create cool web demos using the newly released Inspur Yuan 1.0 API **with just a few lines of Python.**

This project addresses the following issues:

1. Automatically formatting a user's inputs and outputs so that the model can effectively pattern-match
2. Creating a web app for a user to deploy locally and showcase their idea
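The formatting in issue 1 reduces to deterministic string templating: each primed example is wrapped in configurable prefixes and suffixes, and the user's query is appended in the same shape. A minimal self-contained sketch (the `build_prompt` helper and the `Q:`/`A:` prefixes here are illustrative, not the repository's actual API):

```python
# Illustrative sketch only: each (input, output) example is wrapped in
# prefixes/suffixes, then the user's query is appended in the same format.
def build_prompt(examples, query,
                 input_prefix="Q: ", input_suffix="\n",
                 output_prefix="A: ", output_suffix="\n\n"):
    primed = "".join(
        input_prefix + inp + input_suffix + output_prefix + out + output_suffix
        for inp, out in examples)
    return primed + input_prefix + query + input_suffix + output_prefix

prompt = build_prompt([("2+2?", "4")], "3+3?")
# prompt == "Q: 2+2?\nA: 4\n\nQ: 3+3?\nA: "
```

The real `Yuan` class does the same assembly in its `craft_query` method, with Chinese prefixes by default.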
Here's a quick example of priming Yuan to build a chatbot:
```python
# Construct a Yuan object and show some examples
yuan = Yuan(input_prefix="对话:“",
            input_suffix="”",
            output_prefix="答:“",
            output_suffix="”",
            append_output_prefix_to_query=False)
yuan.add_example(Example(inp="对百雅轩798艺术中心有了解吗?",
                         out="有些了解,它位于北京798艺术区,创办于2003年。"))
yuan.add_example(Example(inp="不过去这里我不知道需不需要门票?",
                         out="我知道,不需要,是免费开放。"))
yuan.add_example(Example(inp="你还可以到它边上的观复博物馆看看,我觉得那里很不错。",
                         out="观复博物馆我知道,是马未都先生创办的新中国第一家私立博物馆。"))

# Define UI configuration
config = UIConfig(description="旅游问答机器人",
                  button_text="回答",
                  placeholder="故宫里有什么好玩的?",
                  show_example_form=True)

demo_web_app(yuan, config)
```
Running this code as a Python script automatically launches a web app in which you can test new inputs and outputs. There are already 3 example scripts in the `examples` directory.

You can also prime Yuan from the UI; to do so, pass `show_example_form=True` to `UIConfig` along with the other parameters.

Technical details: the backend is in Flask, and the frontend is in React. Note that this repository is currently not intended for production use.
## Background

Yuan 1.0 ([Shawn et al.](https://arxiv.org/abs/2110.04725)) is Inspur's latest Chinese language model. In this work, we propose a method that incorporates large-scale distributed training performance into model architecture design. With this method, we trained Yuan 1.0, currently the largest singleton language model with 246B parameters, which achieved excellent performance on thousands of GPUs and state-of-the-art results on a range of natural language processing tasks.

Please visit the [official website](http://air.inspur.com) (http://air.inspur.com) for details on getting access to the corpus and APIs of the Yuan model.
## Requirements

Coding-wise, you only need Python. But for the app to run, you will need:

* An API key from the Inspur Yuan 1.0 API invite
* Python 3
* `yarn`

Instructions to install Python 3 are [here](https://realpython.com/installing-python/), and instructions to install `yarn` are [here](https://classic.yarnpkg.com/en/docs/install/#mac-stable).
## Setup

First, clone or fork this repository. Then, to set up your virtual environment, do the following:

1. Create a virtual environment in the root directory: `python -m venv $ENV_NAME`
2. Activate the virtual environment: `source $ENV_NAME/bin/activate` (for macOS, Unix, or Linux users) or `.\ENV_NAME\Scripts\activate` (for Windows users)
3. Install requirements: `pip install -r api/requirements.txt`
4. Add your secret key: create a Python file for your demo, and set your account and phone number at the beginning of the file with `set_yuan_account("account", "phone")`, where `account` and `phone` are the credentials registered on the official website once you have obtained authorization. If you are unsure whether you have access permission, navigate to the [official website](http://air.inspur.com) to check your status.
5. Run `yarn install` in the root directory
If you are a Windows user, to run the demos you will need to modify the following line inside `api/demo_web_app.py`:
`subprocess.Popen(["yarn", "start"])` to `subprocess.Popen(["yarn", "start"], shell=True)`

To verify that your environment is set up properly, run one of the 3 scripts in the `examples` directory:

`python examples/run_dialog.py`

A new tab should pop up in your browser, and you should be able to interact with the UI! To stop the app, press Ctrl-C (or Command-C) in your terminal.
## Contributions

We actively encourage people to contribute, whether by adding their own examples or by adding functionality to the modules. Please make a pull request if you would like to add something, or create an issue if you have a question. We will update the contributors list on a regular basis.

Please *do not* leave your secret key in plaintext in your pull request!
## Authors

We thank the original authors of the GPT-3 sandbox tool:

* Shreya Shankar
* Bora Uyumazturk
* Devin Stein
* Gulan
* Michael Lavelle
---

**`api/demo_web_app.py`**
```python
"""Runs the web app given a GPT object and UI configuration."""
from http import HTTPStatus
import json
import subprocess

from flask import Flask, request, Response

from .inspurai import Example
from .ui_config import UIConfig


def demo_web_app(gpt, config=UIConfig()):
    """Creates a Flask app to serve the React app."""
    app = Flask(__name__)

    @app.route("/params", methods=["GET"])
    def get_params():
        # pylint: disable=unused-variable
        response = config.json()
        return response

    def error(err_msg, status_code):
        return Response(json.dumps({"error": err_msg}), status=status_code)

    def get_example(example_id):
        """Gets a single example or all the examples."""
        # Return all examples when no id is given.
        if not example_id:
            return json.dumps(gpt.get_all_examples())
        example = gpt.get_example(example_id)
        if not example:
            return error("id not found", HTTPStatus.NOT_FOUND)
        return json.dumps(example.as_dict())

    def post_example():
        """Adds an empty example."""
        new_example = Example("", "")
        gpt.add_example(new_example)
        return json.dumps(gpt.get_all_examples())

    def put_example(args, example_id):
        """Modifies an existing example."""
        if not example_id:
            return error("id required", HTTPStatus.BAD_REQUEST)
        example = gpt.get_example(example_id)
        if not example:
            return error("id not found", HTTPStatus.NOT_FOUND)
        if "input" in args:
            example.input = args["input"]
        if "output" in args:
            example.output = args["output"]
        # Re-store the updated example.
        gpt.add_example(example)
        return json.dumps(example.as_dict())

    def delete_example(example_id):
        """Deletes an example."""
        if not example_id:
            return error("id required", HTTPStatus.BAD_REQUEST)
        gpt.delete_example(example_id)
        return json.dumps(gpt.get_all_examples())

    @app.route(
        "/examples",
        methods=["GET", "POST"],
        defaults={"example_id": ""},
    )
    @app.route(
        "/examples/<example_id>",
        methods=["GET", "PUT", "DELETE"],
    )
    def examples(example_id):
        method = request.method
        args = request.json
        if method == "GET":
            return get_example(example_id)
        if method == "POST":
            return post_example()
        if method == "PUT":
            return put_example(args, example_id)
        if method == "DELETE":
            return delete_example(example_id)
        return error("Not implemented", HTTPStatus.NOT_IMPLEMENTED)

    @app.route("/translate", methods=["GET", "POST"])
    def translate():
        # pylint: disable=unused-variable
        prompt = request.json["prompt"]
        response = gpt.submit_API(prompt=prompt, trun=gpt.output_suffix)
        return {'text': response}

    subprocess.Popen(["yarn", "start"], shell=True)
    app.run()
    # app.run(host='0.0.0.0', port=5005)
```
---

**`api/inspurai.py`**
```python
import os
import uuid

from api.url_config import submit_request, reply_request


def set_yuan_account(user, phone):
    os.environ['YUAN_ACCOUNT'] = user + '||' + phone


class Example:
    """Stores an (input, output) example pair used to prime the model few-shot."""

    def __init__(self, inp, out):
        self.input = inp
        self.output = out
        self.id = uuid.uuid4().hex

    def get_input(self):
        """Returns the input of the example."""
        return self.input

    def get_output(self):
        """Returns the output of the example."""
        return self.output

    def get_id(self):
        """Returns the unique ID of the example."""
        return self.id

    def as_dict(self):
        return {
            "input": self.get_input(),
            "output": self.get_output(),
            "id": self.get_id(),
        }


class Yuan:
    """The main class for a user to interface with the Inspur Yuan API.

    A user can set account info and add examples to the API request.
    """

    def __init__(self,
                 engine='base_10B',
                 temperature=0.9,
                 max_tokens=40,
                 input_prefix='',
                 input_suffix='\n',
                 output_prefix='答:',
                 output_suffix='\n\n',
                 append_output_prefix_to_query=False,
                 topK=1,
                 topP=0.9,
                 frequencyPenalty=1.0,
                 responsePenalty=1.0,
                 noRepeatNgramSize=0):
        self.examples = {}
        self.engine = engine
        self.temperature = temperature
        self.max_tokens = max_tokens
        self.topK = topK
        self.topP = topP
        self.frequencyPenalty = frequencyPenalty
        self.responsePenalty = responsePenalty
        self.noRepeatNgramSize = noRepeatNgramSize
        self.input_prefix = input_prefix
        self.input_suffix = input_suffix
        self.output_prefix = output_prefix
        self.output_suffix = output_suffix
        self.append_output_prefix_to_query = append_output_prefix_to_query
        self.stop = (output_suffix + input_prefix).strip()

    def add_example(self, ex):
        """Adds an example to the object.

        The example must be an instance of the Example class."""
        assert isinstance(ex, Example), "Please create an Example object."
        self.examples[ex.get_id()] = ex

    def delete_example(self, id):
        """Deletes the example with the given id."""
        if id in self.examples:
            del self.examples[id]

    def get_example(self, id):
        """Gets a single example."""
        return self.examples.get(id, None)

    def get_all_examples(self):
        """Returns all examples as a dict of dicts keyed by id."""
        return {k: v.as_dict() for k, v in self.examples.items()}

    def get_prime_text(self):
        """Formats all examples to prime the model."""
        return "".join(
            [self.format_example(ex) for ex in self.examples.values()])

    def get_engine(self):
        """Returns the engine specified for the API."""
        return self.engine

    def get_temperature(self):
        """Returns the temperature specified for the API."""
        return self.temperature

    def get_max_tokens(self):
        """Returns the max tokens specified for the API."""
        return self.max_tokens

    def craft_query(self, prompt):
        """Creates the query for the API request."""
        q = self.get_prime_text() + self.input_prefix + prompt + self.input_suffix
        if self.append_output_prefix_to_query:
            q = q + self.output_prefix
        return q

    def format_example(self, ex):
        """Formats an (input, output) pair with the configured prefixes and suffixes."""
        return (self.input_prefix + ex.get_input() + self.input_suffix +
                self.output_prefix + ex.get_output() + self.output_suffix)

    def response(self,
                 query,
                 engine='',
                 max_tokens=20,
                 temperature=0.9,
                 topP=0.1,
                 topK=1,
                 frequencyPenalty=1.0,
                 responsePenalty=1.0,
                 noRepeatNgramSize=0):
        """Obtains the raw result returned by the API."""
        requestId = submit_request(query, temperature, topP, topK, max_tokens,
                                   engine, frequencyPenalty, responsePenalty,
                                   noRepeatNgramSize)
        response_text = reply_request(requestId)
        return response_text

    def del_special_chars(self, msg):
        special_chars = ['<unk>', '<eod>', '#', '▃', '▁', '▂', ' ']
        for char in special_chars:
            msg = msg.replace(char, '')
        return msg

    def submit_API(self, prompt, trun='▃'):
        """Submits the prompt to the Yuan API and returns a plain-text reply.

        :prompt: a question or any other content a user may input.
        :trun: truncate the reply at the first occurrence of this string.
        :return: plain-text response."""
        query = self.craft_query(prompt)
        res = self.response(query, engine=self.engine,
                            max_tokens=self.max_tokens,
                            temperature=self.temperature,
                            topP=self.topP,
                            topK=self.topK,
                            frequencyPenalty=self.frequencyPenalty,
                            responsePenalty=self.responsePenalty,
                            noRepeatNgramSize=self.noRepeatNgramSize)
        txt = res['resData']
        # Post-processing specific to the translation engine.
        if self.engine == 'translate':
            txt = txt.replace(' ##', '').replace(' "', '"').replace(": ", ":").replace(" ,", ",") \
                .replace('英文:', '').replace('文:', '').replace("( ", "(").replace(" )", ")")
        else:
            txt = txt.replace(' ', '')
        txt = self.del_special_chars(txt)
        if trun:
            try:
                # Truncate at the stop token; ValueError means it never appeared.
                txt = txt[:txt.index(trun)]
            except ValueError:
                pass
        return txt
```
---

**`api/requirements.txt`**
```text
astroid==2.4.2
certifi==2020.6.20
chardet==3.0.4
click==7.1.2
Flask==1.1.2
idna==2.10
itsdangerous==1.1.0
Jinja2==2.11.3
MarkupSafe==1.1.1
openai==0.2.4
pylint==2.5.3
python-dotenv==0.14.0
requests==2.27.1
six==1.15.0
urllib3==1.26.5
Werkzeug==1.0.1
```
---

**`api/ui_config.py`**
```python
"""Class to store customized UI parameters."""


class UIConfig():
    """Stores customized UI parameters."""

    def __init__(self, description='Description',
                 button_text='Submit',
                 placeholder='Default placeholder',
                 show_example_form=False):
        self.description = description
        self.button_text = button_text
        self.placeholder = placeholder
        self.show_example_form = show_example_form

    def get_description(self):
        """Returns the app description."""
        return self.description

    def get_button_text(self):
        """Returns the text shown on the submit button."""
        return self.button_text

    def get_placeholder(self):
        """Returns the placeholder text for the input box."""
        return self.placeholder

    def get_show_example_form(self):
        """Returns whether the editable example form is shown."""
        return self.show_example_form

    def json(self):
        """Used to send the parameter values to the API."""
        return {"description": self.description,
                "button_text": self.button_text,
                "placeholder": self.placeholder,
                "show_example_form": self.show_example_form}
```
---

**`api/url_config.py`**
```python
import hashlib
import json
import os
import time

import requests

ACCOUNT = ''
PHONE = ''

SUBMIT_URL = "http://api-air.inspur.com:32102/v1/interface/api/infer/getRequestId?"
REPLY_URL = "http://api-air.inspur.com:32102/v1/interface/api/result?"


def code_md5(s):
    """Returns the MD5 hex digest of a string."""
    return hashlib.md5(s.encode("utf-8")).hexdigest()


def rest_get(url, header, timeout, show_error=False):
    """Calls the REST GET method."""
    try:
        response = requests.get(url, headers=header, timeout=timeout, verify=False)
        return response
    except Exception as exception:
        if show_error:
            print(exception)
        return None


def header_generation():
    """Generates the header for an API request."""
    t = time.strftime("%Y-%m-%d", time.localtime())
    global ACCOUNT, PHONE
    ACCOUNT, PHONE = os.environ.get('YUAN_ACCOUNT').split('||')
    token = code_md5(ACCOUNT + PHONE + t)
    headers = {'token': token}
    return headers


def submit_request(query, temperature, topP, topK, max_tokens, engine,
                   frequencyPenalty, responsePenalty, noRepeatNgramSize):
    """Submits the query to the backend server and gets a requestId."""
    headers = header_generation()
    url = SUBMIT_URL + "engine={0}&account={1}&data={2}&temperature={3}&topP={4}&topK={5}" \
        "&tokensToGenerate={6}&type={7}&frequencyPenalty={8}&responsePenalty={9}" \
        "&noRepeatNgramSize={10}".format(
            engine, ACCOUNT, query, temperature, topP, topK, max_tokens, "api",
            frequencyPenalty, responsePenalty, noRepeatNgramSize)
    response = rest_get(url, headers, 30, show_error=True)
    response_text = json.loads(response.text)
    if response_text["flag"]:
        requestId = response_text["resData"]
        return requestId
    raise RuntimeWarning(response_text)


def reply_request(requestId, cycle_count=5):
    """Polls the reply API to get the inference response."""
    url = REPLY_URL + "account={0}&requestId={1}".format(ACCOUNT, requestId)
    headers = header_generation()
    for i in range(cycle_count):
        response = rest_get(url, headers, 30, show_error=True)
        response_text = json.loads(response.text)
        if response_text["resData"] is not None:
            return response_text
        if not response_text["flag"] and i == cycle_count - 1:
            raise RuntimeWarning(response_text)
        time.sleep(3)
```
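As a reference point for debugging authentication, the `token` header produced by `header_generation` above is just the MD5 hex digest of the concatenated account, phone number, and current date. A standalone sketch (the `make_token` name and the account/phone values are placeholders, not part of the repository's API):

```python
import hashlib
import time


def make_token(account: str, phone: str) -> str:
    # Mirrors header_generation(): md5(account + phone + "YYYY-MM-DD").
    day = time.strftime("%Y-%m-%d", time.localtime())
    return hashlib.md5((account + phone + day).encode("utf-8")).hexdigest()


# An MD5 hex digest is always 32 hex characters, e.g. the RFC 1321 test vector:
print(hashlib.md5("abc".encode("utf-8")).hexdigest())
# -> 900150983cd24fb0d6963f7d28e17f72
```

Because the token depends on the local date, a request can fail with a stale token right after midnight or across timezones; regenerating the header per request, as the code above does, avoids that.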
---

**docs: Getting Started**
# Getting Started

## Creating a GPT-3 Powered Web App

Note: This is a work in progress, but the essential functions are described here.

First, you will want to create a `GPT` object, which optionally accepts the parameters `engine`, `temperature`, and `max_tokens` (otherwise they default to the values in the following snippet):

```python
from api import GPT

gpt = GPT(engine="davinci",
          temperature=0.5,
          max_tokens=100)
```

Since we're mainly interested in constructing a demo, we do not provide an interface for changing other parameters. Feel free to fork the repository and make as many changes to the code as you would like.

Once the `GPT` object is created, you need to "prime" it with several examples. The goal of these examples is to show the model some patterns that you are hoping it will recognize. The `Example` constructor accepts an input string and a corresponding output string. To construct an `Example`, you can run the following code:

```python
from api import Example

ex = Example("Hello", "Hola")
```

After constructing some examples, you can add them to your `GPT` object by calling the `add_example` method, which only accepts an `Example`:

```python
gpt.add_example(ex)
```

Finally, once you've added all of your examples, it's time to run the demo! But first, in order to customize the web app to your idea, you can optionally create a `UIConfig` with `description`, `button_text`, and `placeholder` (text initially shown in the input box) parameters:

```python
from api import UIConfig

config = UIConfig(description="Analogies generator",
                  button_text="Generate",
                  placeholder="Memes are like")
```

Now you can run the web app! Call `demo_web_app` with your `GPT` and (optional) `UIConfig` objects:

```python
from api import demo_web_app

demo_web_app(gpt, config)
```

Save this Python script to a file and run it as you would normally run a Python file:

`python path_to_file.py`

in your shell. A web app should pop up in your browser in a few seconds, and you should be able to interact with your primed model. Please open an issue if you have questions!
---

**docs: Priming**
## Priming

As smart as GPT-3 is, it still doesn't excel at most tasks out of the box. It benefits greatly from seeing a few examples, a process we like to refer to as "priming". Finding a set of examples which focuses GPT-3 on your specific use case will inevitably require a bit of trial and error. To make this step easier, we designed our GPT interface to allow for easy testing and exploration using the Python interactive interpreter. Below we walk you through an example of how to do so, again using the English-to-LaTeX use case.

First, open up your Python interpreter by running `python` or `python3`. Next, you'll need to import the necessary items from the `api` package: the `GPT` class, the `Example` class, and `set_openai_key`:

```python
>>> from api import GPT, Example, set_openai_key
```

Next, you'll want to set your OpenAI key to gain access to the beta.

```python
>>> set_openai_key("YOUR_OPENAI_KEY")  # omit the Bearer; it should look like "sk-..."
```

Next, initialize your GPT class. You have the option of setting a few of the query parameters such as `engine` and `temperature`, but we'll just go with the default setup for simplicity.

```python
>>> gpt = GPT()
```

Now we're ready to give it a prompt and see how it does! You can conveniently get GPT-3's response using the `get_top_reply` method.

```python
>>> print(gpt.get_top_reply("sum from one to infinity of one over n squared"))
output: n squared over n
```

Clearly this needs some priming. To prime, call `add_example` on your `gpt` object, feeding it an instance of the `Example` class. Let's add a few examples and try again.

```python
>>> gpt.add_example(Example("four y plus three x cubed", "4y + 3x^3"))
>>> gpt.add_example(Example("integral from a to b", "\\int_a^b"))
>>> print(gpt.get_top_reply("sum from one to infinity of one over n squared"))
output: 1/n^2
```

Better, but not quite there. Let's give it an example with a sum and then see what happens:

```python
>>> gpt.add_example(Example("sum from zero to twelve of i", "\\sum_{i=0}^{12} i"))
>>> print(gpt.get_top_reply("sum from one to infinity of one over n squared"))
output: \sum_{n=1}^\infty \frac{1}{n^2}
>>> print(gpt.get_top_reply("sum from one to infinity of one over two to the n"))
output: \sum_{n=1}^\infty \frac{1}{2^n}
```

Finally, it works! Now go and see what other crazy stuff you can do with GPT-3!
---

**examples: moral-of-the-story demo**
```python
import os
import sys
sys.path.append(os.path.dirname(os.path.dirname(os.path.realpath(__file__))))

from api import GPT, Example, UIConfig
from api import demo_web_app

gpt = GPT(temperature=0.5, max_tokens=500)

gpt.add_example(Example(
    "A boy named John was upset. His father found him crying. When his father asked John why he was crying, he said that he had a lot of problems in his life. His father simply smiled and asked him to get a potato, an egg, and some coffee beans. He placed them in three bowls. He then asked John to feel their texture and then fill each bowl with water. John did as he had been told. His father then boiled all three bowls. Once the bowls had cooled down, John’s father asked him to feel the texture of the different food items again. John noticed that the potato had become soft and its skin was peeling off easily; the egg had become harder and tougher; the coffee beans had completely changed and filled the bowl of water with aroma and flavour.",
    "Life will always have problems and pressures, like the boiling water in the story. It’s how you respond and react to these problems that counts the most!"
))

gpt.add_example(Example(
    "Once upon a time in a circus, there were five elephants that performed circus tricks. They were kept tied up with a weak rope from which they could have easily escaped, but they did not. One day, a man visiting the circus asked the ringmaster: “Why haven’t these elephants broken the rope and run away?” The ringmaster replied: “From when they were young, the elephants were made to believe that they were not strong enough to break the ropes and escape.” It was because of this belief that they did not even try to break the ropes now.",
    "Don’t give in to the limitations of society. Believe that you can achieve everything you want to!"
))

gpt.add_example(Example(
    "A long time ago, there lived a king in Greece named Midas. He was extremely wealthy and had all the gold he could ever need. He also had a daughter whom he loved very much. One day, Midas saw a Satyr (a mythological creature) who was stuck and was in trouble. Midas helped the Satyr and asked for his wish to be granted in return. The Satyr agreed, and Midas wished for everything he touched to be turned to gold. His wish was granted. Extremely excited, Midas went home to his wife and daughter, touching pebbles, rocks, and plants on the way, which turned into gold. As his daughter hugged him, she turned into a golden statue. Having learnt his lesson, Midas begged the Satyr to reverse the spell, and the Satyr granted that everything would go back to its original state.",
    "Stay content and grateful with what you have. Greed will not get you anywhere."
))

config = UIConfig(description="Describe the moral of the short story",
                  button_text="Get Moral",
                  placeholder="This popular story is about a hare (an animal belonging to the rabbit family), which is known to move quickly, and a tortoise, which is known to move slowly. The story began when the hare, who had won many races, proposed a race with the tortoise. The hare simply wanted to prove that he was the best and have the satisfaction of beating him. The tortoise agreed and the race began. The hare got a head start but became overconfident towards the end of the race. His ego made him believe that he could win the race even if he rested for a while. And so, he took a nap right near the finish line. Meanwhile, the tortoise walked slowly but was extremely determined and dedicated. He did not give up for a second and kept persevering despite the odds not being in his favour. While the hare was asleep, the tortoise crossed the finish line and won the race! The best part was that the tortoise did not gloat or put the hare down!")

demo_web_app(gpt, config)
```
---

**examples: analogies generator**
```python
"""Idea taken from https://www.notion.so/Analogies-Generator-9b046963f52f446b9bef84aa4e416a4c"""
import os
import sys
sys.path.append(os.path.dirname(os.path.dirname(os.path.realpath(__file__))))

from api import GPT, Example, UIConfig
from api import demo_web_app

# Construct GPT object and show some examples
gpt = GPT(engine="davinci",
          temperature=0.5,
          max_tokens=100)
gpt.add_example(Example('Neural networks are like',
                        'genetic algorithms in that both are systems that learn from experience.'))
gpt.add_example(Example('Social media is like',
                        'a market in that both are systems that coordinate the actions of many individuals.'))
gpt.add_example(Example(
    'A2E is like', 'lipofuscin in that both are byproducts of the normal operation of a system.'))
gpt.add_example(Example('Haskell is like',
                        'LISP in that both are functional languages.'))
gpt.add_example(Example('Quaternions are like',
                        'matrices in that both are used to represent rotations in three dimensions.'))
gpt.add_example(Example('Quaternions are like',
                        'octonions in that both are examples of non-commutative algebra.'))

# Define UI configuration
config = UIConfig(description="Analogies generator",
                  button_text="Generate",
                  placeholder="Memes are like")

demo_web_app(gpt, config)
```
---

**examples: generic prompt demo**
```python
import os
import sys
sys.path.append(os.path.dirname(os.path.dirname(os.path.realpath(__file__))))

from api import GPT, Example, UIConfig
from api import demo_web_app

# Construct GPT object and show some examples
gpt = GPT(engine="davinci", temperature=0.5, max_tokens=100)
gpt.add_example(Example("Who are you?", "I'm an example."))
gpt.add_example(Example("What are you?", "I'm an example."))

# Define UI configuration
config = UIConfig(
    description="Prompt",
    button_text="Result",
    placeholder="Where are you?",
    show_example_form=True,
)

demo_web_app(gpt, config)
```
---

**examples: command-to-email generator**
```python
"""Idea taken from https://www.notion.so/Sentence-Email-Generator-a36d269ce8e94cc58daf723f8ba8fe3e"""
import os
import sys
sys.path.append(os.path.dirname(os.path.dirname(os.path.realpath(__file__))))

from api import GPT, Example, UIConfig
from api import demo_web_app

# Construct GPT object and show some examples
gpt = GPT(engine="davinci",
          temperature=0.4,
          max_tokens=60)
gpt.add_example(Example('Thank John for the book.',
                        'Dear John, Thank you so much for the book. I really appreciate it. I hope to hang out soon. Your friend, Sarah.'))
gpt.add_example(Example('Tell TechCorp I appreciate the great service.',
                        'To Whom it May Concern, I want you to know that I appreciate the great service at TechCorp. The staff is outstanding and I enjoy every visit. Sincerely, Bill Johnson'))
gpt.add_example(Example('Invoice Kelly Watkins $500 for design consultation.',
                        'Dear Ms. Watkins, This is my invoice for $500 for design consultation. It was a pleasure to work with you. Sincerely, Emily Fields'))
gpt.add_example(Example('Invite Amanda and Paul to the company event Friday night.',
                        'Dear Amanda and Paul, I hope this finds you doing well. I want to invite you to our company event on Friday night. It will be a great opportunity for networking and there will be food and drinks. Should be fun. Best, Ryan'))

# Define UI configuration
config = UIConfig(description="Command to email generator",
                  button_text="Generate",
                  placeholder="Ask RAM Co. if they have new storage units in stock.")

demo_web_app(gpt, config)
```
@@ -1,42 +0,0 @@ | |||
import os | |||
import sys | |||
sys.path.append(os.path.dirname(os.path.dirname(os.path.realpath(__file__)))) | |||
from api import demo_web_app | |||
from api import GPT, Example, UIConfig | |||
question_prefix = 'Q: ' | |||
question_suffix = "\n" | |||
answer_prefix = "A: " | |||
answer_suffix = "\n\n" | |||
# Construct GPT object and show some examples | |||
gpt = GPT(engine="davinci", | |||
temperature=0.5, | |||
max_tokens=100, | |||
input_prefix=question_prefix, | |||
input_suffix=question_suffix, | |||
output_prefix=answer_prefix, | |||
output_suffix=answer_suffix, | |||
append_output_prefix_to_query=True) | |||
gpt.add_example(Example('What is human life expectancy in the United States?', | |||
'Human life expectancy in the United States is 78 years.')) | |||
gpt.add_example( | |||
Example('Who was president of the United States in 1955?', 'Dwight D. Eisenhower was president of the United States in 1955.')) | |||
gpt.add_example(Example( | |||
'What party did he belong to?', 'He belonged to the Republican Party.')) | |||
gpt.add_example(Example('Who was president of the United States before George W. Bush?', | |||
'Bill Clinton was president of the United States before George W. Bush.')) | |||
gpt.add_example(Example('In what year was the Coronation of Queen Elizabeth?', | |||
'The Coronation of Queen Elizabeth was in 1953.')) | |||
# Define UI configuration | |||
config = UIConfig(description="Question to Answer", | |||
button_text="Answer", | |||
placeholder="Who wrote the song 'Hey Jude'?") | |||
demo_web_app(gpt, config) |
@@ -1,35 +0,0 @@ | |||
import os | |||
import sys | |||
sys.path.append(os.path.dirname(os.path.dirname(os.path.realpath(__file__)))) | |||
from api import GPT, Example, UIConfig | |||
from api import demo_web_app | |||
# Construct GPT object and show some examples | |||
gpt = GPT(engine="davinci", | |||
temperature=0.5, | |||
max_tokens=100) | |||
gpt.add_example(Example('Two plus two equals four', '2 + 2 = 4')) | |||
gpt.add_example( | |||
Example('The integral from zero to infinity', '\\int_0^{\\infty}')) | |||
gpt.add_example(Example( | |||
'The gradient of x squared plus two times x with respect to x', '\\nabla_x x^2 + 2x')) | |||
gpt.add_example(Example('The log of two times x', '\\log{2x}')) | |||
gpt.add_example( | |||
Example('x squared plus y squared plus equals z squared', 'x^2 + y^2 = z^2')) | |||
gpt.add_example( | |||
Example('The sum from zero to twelve of i squared', '\\sum_{i=0}^{12} i^2')) | |||
gpt.add_example(Example('E equals m times c squared', 'E = mc^2')) | |||
gpt.add_example(Example('H naught of t', 'H_0(t)')) | |||
gpt.add_example(Example('f of n equals 1 over (b-a) if n is 0 otherwise 5', | |||
                        'f(n) = \\begin{cases} 1/(b-a) &\\mbox{if } n \\equiv 0 \\\\ 5 &\\mbox{otherwise} \\end{cases}'))
# Define UI configuration | |||
config = UIConfig(description="Text to equation", | |||
button_text="Translate", | |||
placeholder="x squared plus 2 times x", | |||
show_example_form=True) | |||
demo_web_app(gpt, config) |
@@ -1,35 +0,0 @@ | |||
import os | |||
import sys | |||
sys.path.append(os.path.dirname(os.path.dirname(os.path.realpath(__file__)))) | |||
from api import GPT, Example, UIConfig | |||
from api import demo_web_app | |||
gpt = GPT(temperature=0.5, max_tokens=500) | |||
gpt.add_example(Example( | |||
"how to roast eggplant", | |||
"How do you cook eggplant in the oven? Well, there are a couple ways. To roast whole eggplants in the oven, leave the skin on and roast at 400 degrees F (200 degrees C) until the skin gets wrinkly and begins to collapse in on the softened fruit. This method will also produce velvety smooth eggplant dips or spreads." | |||
)) | |||
gpt.add_example(Example( | |||
"how to bake eggplant", | |||
"To bake eggplant, you'll cut the eggplant into rounds or strips and prepare them as the recipe indicates -- for example, you can dredge them in egg and breadcrumbs or simply brush them with olive oil and bake them in a 350 degree F oven." | |||
)) | |||
gpt.add_example(Example( | |||
"how to make puerto rican steamed rice", | |||
"Bring vegetable oil, water, and salt to a boil in a saucepan over high heat. Add rice, and cook until the water has just about cooked out; stir. Reduce heat to medium-low. Cover, and cook for 20 to 25 minutes. Stir again, and serve. Rice may be a little sticky and may stick to bottom of pot." | |||
)) | |||
gpt.add_example(Example( | |||
"how to make oatmeal peanut butter cookies", | |||
"Preheat oven to 350 degrees F (175 degrees C). In a large bowl, cream together shortening, margarine, brown sugar, white sugar, and peanut butter until smooth. Beat in the eggs one at a time until well blended. Combine the flour, baking soda, and salt; stir into the creamed mixture. Mix in the oats until just combined. Drop by teaspoonfuls onto ungreased cookie sheets. Bake for 10 to 15 minutes in the preheated oven, or until just light brown. Don't over-bake. Cool and store in an airtight container." | |||
)) | |||
config = UIConfig(description="How to cook stuff",
                  button_text="show me",
                  placeholder="how to make a breakfast burrito")
demo_web_app(gpt, config) |
@@ -1,31 +0,0 @@ | |||
import os | |||
import sys | |||
sys.path.append(os.path.abspath(os.curdir)) | |||
from api import UIConfig | |||
from api import demo_web_app | |||
from api.inspurai import Yuan, set_yuan_account, Example | |||
# 1. set account | |||
set_yuan_account("account", "phone Num.")  # enter the account and phone number you registered with
# 2. Initialize the Yuan API
# Note: engine must be one of ['base_10B', 'translate', 'dialog']: 'base_10B' is the base model, 'translate' the translation model, and 'dialog' the dialogue model
yuan = Yuan(engine='dialog', | |||
input_prefix="问:“", | |||
input_suffix="”", | |||
output_prefix="答:“", | |||
output_suffix="”", | |||
append_output_prefix_to_query=True, | |||
frequencyPenalty=1.2) | |||
# 3. Add examples if needed.
yuan.add_example(Example(inp="对百雅轩798艺术中心有了解吗?", | |||
out="有些了解,它位于北京798艺术区,创办于2003年。")) | |||
# Define UI configuration | |||
config = UIConfig(description="旅游问答机器人", | |||
button_text="回答", | |||
placeholder="故宫里有什么好玩的?", | |||
show_example_form=True) | |||
demo_web_app(yuan, config) |
@@ -1,47 +0,0 @@ | |||
import os | |||
import sys | |||
sys.path.append(os.path.abspath(os.curdir)) | |||
from api import UIConfig | |||
from api import demo_web_app | |||
from api.inspurai import Yuan, set_yuan_account, Example
# 1. set account | |||
set_yuan_account("account", "phone Num.")  # enter the account and phone number you registered with
# 2. Initialize the Yuan API
yuan = Yuan(input_prefix="以", | |||
input_suffix="为题作一首诗:", | |||
output_prefix="答:", | |||
output_suffix="”", | |||
append_output_prefix_to_query=False, | |||
max_tokens=40) | |||
# 3. Add examples if needed.
yuan.add_example(Example(inp="清风", | |||
out="春风用意匀颜色,销得携觞与赋诗。秾丽最宜新著雨,娇饶全在欲开时。")) | |||
yuan.add_example(Example(inp="送别", | |||
out="渭城朝雨浥轻尘,客舍青青柳色新。劝君更尽一杯酒,西出阳关无故人。")) | |||
yuan.add_example(Example(inp="新年", | |||
out="欢乐过新年,烟花灿九天。金龙腾玉宇,六出好耘田。")) | |||
yuan.add_example(Example(inp="喜悦", | |||
out="昔日龌龊不足夸,今朝放荡思无涯。春风得意马蹄疾,一日看尽长安花。")) | |||
# print("====作诗机器人====") | |||
# Define UI configuration | |||
config = UIConfig(description="作诗机器人", | |||
button_text="作诗", | |||
# placeholder="以何为题作诗:", | |||
placeholder="田园", | |||
show_example_form=True) | |||
demo_web_app(yuan, config) | |||
# while(1): | |||
# print("输入Q退出") | |||
# prompt = input("以何为题作诗:") | |||
# if prompt.lower() == "q": | |||
# break | |||
# response = yuan.submit_API(prompt=prompt,trun="”") | |||
# print(response+"”") |
@@ -1,118 +0,0 @@ | |||
# GPT-3 Sandbox: Turn your ideas into demos in a matter of minutes | |||
Initial release date: 19 July 2020 | |||
Note that this repository is not under any active development; just basic maintenance. | |||
## Description | |||
The goal of this project is to enable users to create cool web demos using the newly released OpenAI GPT-3 API **with just a few lines of Python.** | |||
This project addresses the following issues: | |||
1. Automatically formatting a user's inputs and outputs so that the model can effectively pattern-match | |||
2. Creating a web app for a user to deploy locally and showcase their idea | |||
Here's a quick example of priming GPT to convert English to LaTeX: | |||
``` | |||
# Construct GPT object and show some examples | |||
gpt = GPT(engine="davinci", | |||
temperature=0.5, | |||
max_tokens=100) | |||
gpt.add_example(Example('Two plus two equals four', '2 + 2 = 4')) | |||
gpt.add_example(Example('The integral from zero to infinity', '\\int_0^{\\infty}')) | |||
gpt.add_example(Example('The gradient of x squared plus two times x with respect to x', '\\nabla_x x^2 + 2x')) | |||
gpt.add_example(Example('The log of two times x', '\\log{2x}')) | |||
gpt.add_example(Example('x squared plus y squared plus equals z squared', 'x^2 + y^2 = z^2')) | |||
# Define UI configuration | |||
config = UIConfig(description="Text to equation", | |||
button_text="Translate", | |||
placeholder="x squared plus 2 times x") | |||
demo_web_app(gpt, config) | |||
``` | |||
Running this code as a Python script automatically launches a web app in which you can test new inputs and outputs. There are already 3 example scripts in the `examples` directory.
You can also prime GPT from the UI. For that, pass `show_example_form=True` to `UIConfig` along with the other parameters.
Technical details: the backend is in Flask, and the frontend is in React. Note that this repository is currently not intended for production use. | |||
## Background | |||
GPT-3 ([Brown et al.](https://arxiv.org/abs/2005.14165)) is OpenAI's latest language model. It incrementally builds on model architectures designed in [previous](https://arxiv.org/abs/1706.03762) [research](https://arxiv.org/abs/1810.04805) studies, but its key advance is that it's extremely good at "few-shot" learning. There's a [lot](https://twitter.com/sharifshameem/status/1282676454690451457) [it](https://twitter.com/jsngr/status/1284511080715362304?s=20) [can](https://twitter.com/paraschopra/status/1284801028676653060?s=20) [do](https://www.gwern.net/GPT-3), but one of the biggest pain points is in "priming," or seeding, the model with some inputs such that the model can intelligently create new outputs. Many people have ideas for GPT-3 but struggle to make them work, since priming is a new paradigm of machine learning. Additionally, it takes a nontrivial amount of web development to spin up a demo to showcase a cool idea. We built this project to make our own idea generation easier to experiment with. | |||
This [developer toolkit](https://www.notion.so/API-Developer-Toolkit-49595ed6ffcd413e93ebff10d7e70fe7) has some great resources for those experimenting with the API, including sample prompts. | |||
## Requirements | |||
Coding-wise, you only need Python. But for the app to run, you will need: | |||
* API key from the OpenAI API beta invite | |||
* Python 3 | |||
* `yarn` | |||
Instructions to install Python 3 are [here](https://realpython.com/installing-python/), and instructions to install `yarn` are [here](https://classic.yarnpkg.com/en/docs/install/#mac-stable). | |||
## Setup | |||
First, clone or fork this repository. Then to set up your virtual environment, do the following: | |||
1. Create a virtual environment in the root directory: `python -m venv $ENV_NAME` | |||
2. Activate the virtual environment: `source $ENV_NAME/bin/activate` (for macOS, Unix, or Linux users) or `.\$ENV_NAME\Scripts\activate` (for Windows users)
3. Install requirements: `pip install -r api/requirements.txt` | |||
4. To add your secret key: create a file anywhere on your computer called `openai.cfg` with the contents `OPENAI_KEY=$YOUR_SECRET_KEY`, where `$YOUR_SECRET_KEY` looks something like `'sk-somerandomcharacters'` (including quotes). If you are unsure what your secret key is, navigate to the [API docs](https://beta.openai.com/developer-quickstart) and copy the token displayed next to the "secret" key type. | |||
5. Set your environment variable to read the secret key: run `export OPENAI_CONFIG=/path/to/config/openai.cfg` (for MacOS, Unix, or Linux users) or `set OPENAI_CONFIG=/path/to/config/openai.cfg` (for Windows users) | |||
6. Run `yarn install` in the root directory | |||
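In effect, steps 4 and 5 mean the app reads your key out of that file at startup. A minimal, self-contained sketch of that lookup (the function name `load_openai_key` and this parser are illustrative assumptions, not the sandbox's own code):

```python
import os


def load_openai_key(config_path=None):
    """Read OPENAI_KEY from the cfg file pointed to by $OPENAI_CONFIG.

    The file is expected to contain a line like
        OPENAI_KEY='sk-somerandomcharacters'
    This parser is an illustrative sketch, not the repository's own code.
    """
    path = config_path or os.environ["OPENAI_CONFIG"]
    with open(path) as f:
        for line in f:
            name, _, value = line.partition("=")
            if name.strip() == "OPENAI_KEY":
                # Drop whitespace and the surrounding quotes.
                return value.strip().strip("'\"")
    raise KeyError("OPENAI_KEY not found in " + path)
```
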
If you are a Windows user, to run the demos, you will need to modify the following line inside `api/demo_web_app.py`: | |||
`subprocess.Popen(["yarn", "start"])` to `subprocess.Popen(["yarn", "start"], shell=True)` | |||
To verify that your environment is set up properly, run one of the 3 scripts in the `examples` directory: | |||
`python examples/run_latex_app.py` | |||
A new tab should pop up in your browser, and you should be able to interact with the UI! To stop the app, press Ctrl-C (or Command-C) in your terminal.
To create your own example, check out the ["getting started" docs](https://github.com/shreyashankar/gpt3-sandbox/blob/master/docs/getting-started.md). | |||
## Interactive Priming | |||
The real power of GPT-3 is in its ability to specialize to a task given just a few examples. However, priming can at times be more of an art than a science. Using the GPT and Example classes, you can easily experiment with different priming examples and immediately see their effect on GPT-3's performance. Below is an example showing it improve incrementally at translating English to LaTeX as we feed it more examples in the Python interpreter:
``` | |||
>>> from api import GPT, Example, set_openai_key | |||
>>> gpt = GPT() | |||
>>> set_openai_key(key) | |||
>>> prompt = "integral from a to b of f of x" | |||
>>> print(gpt.get_top_reply(prompt)) | |||
output: integral from a to be of f of x | |||
>>> gpt.add_example(Example("Two plus two equals four", "2 + 2 = 4")) | |||
>>> print(gpt.get_top_reply(prompt)) | |||
output: | |||
>>> gpt.add_example(Example('The integral from zero to infinity', '\\int_0^{\\infty}')) | |||
>>> print(gpt.get_top_reply(prompt)) | |||
output: \int_a^b f(x) dx | |||
``` | |||
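Under the hood, priming boils down to concatenating the stored examples ahead of each new query. A rough, self-contained sketch of that assembly (the default prefixes and the helper name `build_prompt` are assumptions for illustration; the real `GPT` class may format its prompt differently):

```python
def build_prompt(examples, query,
                 input_prefix="input: ", input_suffix="\n",
                 output_prefix="output: ", output_suffix="\n\n"):
    """Concatenate primed (input, output) pairs ahead of the new query,
    mirroring the prefix/suffix knobs that GPT() accepts."""
    parts = []
    for inp, out in examples:
        parts.append(input_prefix + inp + input_suffix)
        parts.append(output_prefix + out + output_suffix)
    # The query gets the input framing, then a bare output prefix so the
    # model continues from there (append_output_prefix_to_query behavior).
    parts.append(input_prefix + query + input_suffix)
    parts.append(output_prefix)
    return "".join(parts)


prompt = build_prompt(
    [("Two plus two equals four", "2 + 2 = 4")],
    "The log of two times x")
```

Each `add_example` call simply grows the list of pairs, which is why the replies above improve as more examples are added.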
## Contributions | |||
We actively encourage people to contribute by adding their own examples or even adding functionalities to the modules. Please make a pull request if you would like to add something, or create an issue if you have a question. We will update the contributors list on a regular basis. | |||
Please *do not* leave your secret key in plaintext in your pull request! | |||
## Authors | |||
The following authors have committed 20 lines or more (ordered according to the GitHub contributors page):
* Shreya Shankar | |||
* Bora Uyumazturk | |||
* Devin Stein | |||
* Gulan | |||
* Michael Lavelle | |||
@@ -1,42 +0,0 @@ | |||
{ | |||
"name": "yuan-sandbox", | |||
"version": "0.1.0", | |||
"private": true, | |||
"dependencies": { | |||
"@testing-library/jest-dom": "^4.2.4", | |||
"@testing-library/react": "^9.3.2", | |||
"@testing-library/user-event": "^7.1.2", | |||
"axios": "^0.21.1", | |||
"bootstrap": "^4.5.0", | |||
"lodash": "^4.17.21", | |||
"react": "^16.13.1", | |||
"react-bootstrap": "^1.2.2", | |||
"react-dom": "^16.13.1", | |||
"react-latex": "^2.0.0", | |||
"react-latex-next": "^1.2.0", | |||
"react-scripts": "^3.4.1" | |||
}, | |||
"scripts": { | |||
"start": "react-scripts start", | |||
"start-api": "cd api && venv/bin/flask run --no-debugger", | |||
"build": "react-scripts build", | |||
"test": "react-scripts test", | |||
"eject": "react-scripts eject" | |||
}, | |||
"eslintConfig": { | |||
"extends": "react-app" | |||
}, | |||
"browserslist": { | |||
"production": [ | |||
">0.2%", | |||
"not dead", | |||
"not op_mini all" | |||
], | |||
"development": [ | |||
"last 1 chrome version", | |||
"last 1 firefox version", | |||
"last 1 safari version" | |||
] | |||
}, | |||
"proxy": "http://localhost:5000" | |||
} |
@@ -1,38 +0,0 @@ | |||
<!DOCTYPE html> | |||
<html lang="en"> | |||
<head> | |||
<meta charset="utf-8" /> | |||
<meta name="viewport" content="width=device-width, initial-scale=1" /> | |||
<meta name="theme-color" content="#000000" /> | |||
<meta | |||
name="description" | |||
content="Web site to demo Yuan1.0 idea." | |||
/> | |||
<!-- | |||
manifest.json provides metadata used when your web app is installed on a | |||
user's mobile device or desktop. See https://developers.google.com/web/fundamentals/web-app-manifest/ | |||
--> | |||
<link rel="manifest" href="%PUBLIC_URL%/manifest.json" /> | |||
<!-- | |||
Notice the use of %PUBLIC_URL% in the tags above. | |||
It will be replaced with the URL of the `public` folder during the build. | |||
Only files inside the `public` folder can be referenced from the HTML. | |||
Unlike "/favicon.ico" or "favicon.ico", "%PUBLIC_URL%/favicon.ico" will | |||
work correctly both with client-side routing and a non-root public URL. | |||
Learn how to configure a non-root public URL by running `npm run build`. | |||
--> | |||
<title>Yuan1.0 Sandbox</title> | |||
</head> | |||
<body> | |||
<noscript>You need to enable JavaScript to run this app.</noscript> | |||
<div id="root"></div> | |||
<!-- | |||
This HTML file is a template. | |||
If you open it directly in the browser, you will see an empty page. | |||
You can add webfonts, meta tags, or analytics to this file. | |||
The build step will place the bundled scripts into the <body> tag. | |||
To begin the development, run `npm start` or `yarn start`. | |||
To create a production bundle, use `npm run build` or `yarn build`. | |||
--> | |||
</body> | |||
</html> |
@@ -1,9 +0,0 @@ | |||
{ | |||
"short_name": "Yuan1.0 Sandbox", | |||
"name": "Yuan1.0 Sandbox", | |||
"icons": [], | |||
"start_url": ".", | |||
"display": "standalone", | |||
"theme_color": "#000000", | |||
"background_color": "#ffffff" | |||
} |
@@ -1,3 +0,0 @@ | |||
# https://www.robotstxt.org/robotstxt.html | |||
User-agent: * | |||
Disallow: |
@@ -1,38 +0,0 @@ | |||
.App { | |||
text-align: center; | |||
} | |||
.App-logo { | |||
height: 40vmin; | |||
pointer-events: none; | |||
} | |||
@media (prefers-reduced-motion: no-preference) { | |||
.App-logo { | |||
animation: App-logo-spin infinite 20s linear; | |||
} | |||
} | |||
.App-header { | |||
background-color: #282c34; | |||
min-height: 100vh; | |||
display: flex; | |||
flex-direction: column; | |||
align-items: center; | |||
justify-content: center; | |||
font-size: calc(10px + 2vmin); | |||
color: white; | |||
} | |||
.App-link { | |||
color: #61dafb; | |||
} | |||
@keyframes App-logo-spin { | |||
from { | |||
transform: rotate(0deg); | |||
} | |||
to { | |||
transform: rotate(360deg); | |||
} | |||
} |
@@ -1,218 +0,0 @@ | |||
import React from "react"; | |||
import { Form, Button, Row, Col } from "react-bootstrap"; | |||
import axios from "axios"; | |||
import { debounce } from "lodash"; | |||
import "bootstrap/dist/css/bootstrap.min.css"; | |||
const UI_PARAMS_API_URL = "/params"; | |||
const TRANSLATE_API_URL = "/translate"; | |||
const EXAMPLE_API_URL = "/examples"; | |||
const DEBOUNCE_INPUT = 250; | |||
class App extends React.Component { | |||
constructor(props) { | |||
super(props); | |||
this.state = { | |||
output: "", | |||
input: "", | |||
buttonText: "Submit", | |||
description: "Description", | |||
showExampleForm: false, | |||
examples: {} | |||
}; | |||
// Bind the event handlers | |||
this.handleInputChange = this.handleInputChange.bind(this); | |||
this.handleClick = this.handleClick.bind(this); | |||
} | |||
componentDidMount() { | |||
// Call API for the UI params | |||
axios | |||
.get(UI_PARAMS_API_URL) | |||
.then( | |||
({ | |||
data: { placeholder, button_text, description, show_example_form } | |||
}) => { | |||
this.setState({ | |||
input: placeholder, | |||
buttonText: button_text, | |||
description: description, | |||
showExampleForm: show_example_form | |||
}); | |||
if (this.state.showExampleForm) { | |||
axios.get(EXAMPLE_API_URL).then(({ data: examples }) => { | |||
this.setState({ examples }); | |||
}); | |||
} | |||
} | |||
); | |||
} | |||
updateExample(id, body) { | |||
axios.put(`${EXAMPLE_API_URL}/${id}`, body); | |||
} | |||
debouncedUpdateExample = debounce(this.updateExample, DEBOUNCE_INPUT); | |||
handleExampleChange = (id, field) => e => { | |||
const text = e.target.value; | |||
let body = { [field]: text }; | |||
let examples = { ...this.state.examples }; | |||
examples[id][field] = text; | |||
this.setState({ examples }); | |||
this.debouncedUpdateExample(id, body); | |||
}; | |||
handleExampleDelete = id => e => { | |||
e.preventDefault(); | |||
axios.delete(`${EXAMPLE_API_URL}/${id}`).then(({ data: examples }) => { | |||
this.setState({ examples }); | |||
}); | |||
}; | |||
handleExampleAdd = e => { | |||
e.preventDefault(); | |||
axios.post(EXAMPLE_API_URL).then(({ data: examples }) => { | |||
this.setState({ examples }); | |||
}); | |||
}; | |||
handleInputChange(e) { | |||
this.setState({ input: e.target.value }); | |||
} | |||
handleClick(e) { | |||
e.preventDefault(); | |||
let body = { | |||
prompt: this.state.input | |||
}; | |||
axios.post(TRANSLATE_API_URL, body).then(({ data: { text } }) => { | |||
this.setState({ output: text }); | |||
}); | |||
} | |||
render() { | |||
const showExampleForm = this.state.showExampleForm; | |||
return ( | |||
<div> | |||
<head /> | |||
<body style={{ alignItems: "center", justifyContent: "center" }}> | |||
<div | |||
style={{ | |||
margin: "auto", | |||
marginTop: "80px", | |||
display: "block", | |||
maxWidth: "500px", | |||
minWidth: "200px", | |||
width: "50%" | |||
}} | |||
> | |||
<Form onSubmit={this.handleClick}> | |||
<Form.Group controlId="formBasicEmail"> | |||
{showExampleForm && ( | |||
<div> | |||
<h4 style={{ marginBottom: "25px" }}>Examples</h4> | |||
{Object.values(this.state.examples).map(example => ( | |||
<span key={example.id}> | |||
<Form.Group | |||
as={Row} | |||
controlId={"formExampleInput" + example.id} | |||
> | |||
<Form.Label column="sm" lg={2}> | |||
Example Input | |||
</Form.Label> | |||
<Col sm={10}> | |||
<Form.Control | |||
type="text" | |||
as="input" | |||
placeholder="Enter text" | |||
value={example.input} | |||
onChange={this.handleExampleChange( | |||
example.id, | |||
"input" | |||
)} | |||
/> | |||
</Col> | |||
</Form.Group> | |||
<Form.Group | |||
as={Row} | |||
controlId={"formExampleOutput" + example.id} | |||
> | |||
<Form.Label column="sm" lg={2}> | |||
Example Output | |||
</Form.Label> | |||
<Col sm={10}> | |||
<Form.Control | |||
type="text" | |||
as="textarea" | |||
placeholder="Enter text" | |||
value={example.output} | |||
onChange={this.handleExampleChange( | |||
example.id, | |||
"output" | |||
)} | |||
/> | |||
</Col> | |||
</Form.Group> | |||
<Form.Group as={Row}> | |||
<Col sm={{ span: 10, offset: 2 }}> | |||
<Button | |||
type="button" | |||
size="sm" | |||
variant="danger" | |||
onClick={this.handleExampleDelete(example.id)} | |||
> | |||
Delete example | |||
</Button> | |||
</Col> | |||
</Form.Group> | |||
</span> | |||
))} | |||
<Form.Group as={Row}> | |||
<Col sm={{ span: 10 }}> | |||
<Button | |||
type="button" | |||
variant="primary" | |||
onClick={this.handleExampleAdd} | |||
> | |||
Add example | |||
</Button> | |||
</Col> | |||
</Form.Group> | |||
</div> | |||
)} | |||
<Form.Label>{this.state.description}</Form.Label> | |||
<Form.Control | |||
type="text" | |||
as="textarea" | |||
placeholder="Enter text" | |||
value={this.state.input} | |||
onChange={this.handleInputChange} | |||
/> | |||
</Form.Group> | |||
<Button variant="primary" type="submit"> | |||
{this.state.buttonText} | |||
</Button> | |||
</Form> | |||
<div | |||
style={{ | |||
textAlign: "center", | |||
margin: "20px", | |||
fontSize: "18pt" | |||
}} | |||
> | |||
{this.state.output} | |||
</div> | |||
</div> | |||
</body> | |||
</div> | |||
); | |||
} | |||
} | |||
export default App; |
@@ -1,9 +0,0 @@ | |||
import React from 'react'; | |||
import { render } from '@testing-library/react'; | |||
import App from './App'; | |||
test('renders learn react link', () => { | |||
const { getByText } = render(<App />); | |||
const linkElement = getByText(/learn react/i); | |||
expect(linkElement).toBeInTheDocument(); | |||
}); |
@@ -1,13 +0,0 @@ | |||
body { | |||
margin: 0; | |||
font-family: -apple-system, BlinkMacSystemFont, 'Segoe UI', 'Roboto', 'Oxygen', | |||
'Ubuntu', 'Cantarell', 'Fira Sans', 'Droid Sans', 'Helvetica Neue', | |||
sans-serif; | |||
-webkit-font-smoothing: antialiased; | |||
-moz-osx-font-smoothing: grayscale; | |||
} | |||
code { | |||
font-family: source-code-pro, Menlo, Monaco, Consolas, 'Courier New', | |||
monospace; | |||
} |
@@ -1,17 +0,0 @@ | |||
import React from 'react'; | |||
import ReactDOM from 'react-dom'; | |||
import './index.css'; | |||
import App from './App'; | |||
import * as serviceWorker from './serviceWorker'; | |||
ReactDOM.render( | |||
<React.StrictMode> | |||
<App /> | |||
</React.StrictMode>, | |||
document.getElementById('root') | |||
); | |||
// If you want your app to work offline and load faster, you can change | |||
// unregister() to register() below. Note this comes with some pitfalls. | |||
// Learn more about service workers: https://bit.ly/CRA-PWA | |||
serviceWorker.unregister(); |
@@ -1,7 +0,0 @@ | |||
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 841.9 595.3"> | |||
<g fill="#61DAFB"> | |||
<path d="M666.3 296.5c0-32.5-40.7-63.3-103.1-82.4 14.4-63.6 8-114.2-20.2-130.4-6.5-3.8-14.1-5.6-22.4-5.6v22.3c4.6 0 8.3.9 11.4 2.6 13.6 7.8 19.5 37.5 14.9 75.7-1.1 9.4-2.9 19.3-5.1 29.4-19.6-4.8-41-8.5-63.5-10.9-13.5-18.5-27.5-35.3-41.6-50 32.6-30.3 63.2-46.9 84-46.9V78c-27.5 0-63.5 19.6-99.9 53.6-36.4-33.8-72.4-53.2-99.9-53.2v22.3c20.7 0 51.4 16.5 84 46.6-14 14.7-28 31.4-41.3 49.9-22.6 2.4-44 6.1-63.6 11-2.3-10-4-19.7-5.2-29-4.7-38.2 1.1-67.9 14.6-75.8 3-1.8 6.9-2.6 11.5-2.6V78.5c-8.4 0-16 1.8-22.6 5.6-28.1 16.2-34.4 66.7-19.9 130.1-62.2 19.2-102.7 49.9-102.7 82.3 0 32.5 40.7 63.3 103.1 82.4-14.4 63.6-8 114.2 20.2 130.4 6.5 3.8 14.1 5.6 22.5 5.6 27.5 0 63.5-19.6 99.9-53.6 36.4 33.8 72.4 53.2 99.9 53.2 8.4 0 16-1.8 22.6-5.6 28.1-16.2 34.4-66.7 19.9-130.1 62-19.1 102.5-49.9 102.5-82.3zm-130.2-66.7c-3.7 12.9-8.3 26.2-13.5 39.5-4.1-8-8.4-16-13.1-24-4.6-8-9.5-15.8-14.4-23.4 14.2 2.1 27.9 4.7 41 7.9zm-45.8 106.5c-7.8 13.5-15.8 26.3-24.1 38.2-14.9 1.3-30 2-45.2 2-15.1 0-30.2-.7-45-1.9-8.3-11.9-16.4-24.6-24.2-38-7.6-13.1-14.5-26.4-20.8-39.8 6.2-13.4 13.2-26.8 20.7-39.9 7.8-13.5 15.8-26.3 24.1-38.2 14.9-1.3 30-2 45.2-2 15.1 0 30.2.7 45 1.9 8.3 11.9 16.4 24.6 24.2 38 7.6 13.1 14.5 26.4 20.8 39.8-6.3 13.4-13.2 26.8-20.7 39.9zm32.3-13c5.4 13.4 10 26.8 13.8 39.8-13.1 3.2-26.9 5.9-41.2 8 4.9-7.7 9.8-15.6 14.4-23.7 4.6-8 8.9-16.1 13-24.1zM421.2 430c-9.3-9.6-18.6-20.3-27.8-32 9 .4 18.2.7 27.5.7 9.4 0 18.7-.2 27.8-.7-9 11.7-18.3 22.4-27.5 32zm-74.4-58.9c-14.2-2.1-27.9-4.7-41-7.9 3.7-12.9 8.3-26.2 13.5-39.5 4.1 8 8.4 16 13.1 24 4.7 8 9.5 15.8 14.4 23.4zM420.7 163c9.3 9.6 18.6 20.3 27.8 32-9-.4-18.2-.7-27.5-.7-9.4 0-18.7.2-27.8.7 9-11.7 18.3-22.4 27.5-32zm-74 58.9c-4.9 7.7-9.8 15.6-14.4 23.7-4.6 8-8.9 16-13 24-5.4-13.4-10-26.8-13.8-39.8 13.1-3.1 26.9-5.8 41.2-7.9zm-90.5 125.2c-35.4-15.1-58.3-34.9-58.3-50.6 0-15.7 22.9-35.6 58.3-50.6 8.6-3.7 18-7 27.7-10.1 5.7 19.6 13.2 40 22.5 60.9-9.2 20.8-16.6 41.1-22.2 60.6-9.9-3.1-19.3-6.5-28-10.2zM310 490c-13.6-7.8-19.5-37.5-14.9-75.7 1.1-9.4 
2.9-19.3 5.1-29.4 19.6 4.8 41 8.5 63.5 10.9 13.5 18.5 27.5 35.3 41.6 50-32.6 30.3-63.2 46.9-84 46.9-4.5-.1-8.3-1-11.3-2.7zm237.2-76.2c4.7 38.2-1.1 67.9-14.6 75.8-3 1.8-6.9 2.6-11.5 2.6-20.7 0-51.4-16.5-84-46.6 14-14.7 28-31.4 41.3-49.9 22.6-2.4 44-6.1 63.6-11 2.3 10.1 4.1 19.8 5.2 29.1zm38.5-66.7c-8.6 3.7-18 7-27.7 10.1-5.7-19.6-13.2-40-22.5-60.9 9.2-20.8 16.6-41.1 22.2-60.6 9.9 3.1 19.3 6.5 28.1 10.2 35.4 15.1 58.3 34.9 58.3 50.6-.1 15.7-23 35.6-58.4 50.6zM320.8 78.4z"/> | |||
<circle cx="420.9" cy="296.5" r="45.7"/> | |||
<path d="M520.5 78.1z"/> | |||
</g> | |||
</svg> |
@@ -1,141 +0,0 @@ | |||
// This optional code is used to register a service worker. | |||
// register() is not called by default. | |||
// This lets the app load faster on subsequent visits in production, and gives | |||
// it offline capabilities. However, it also means that developers (and users) | |||
// will only see deployed updates on subsequent visits to a page, after all the | |||
// existing tabs open on the page have been closed, since previously cached | |||
// resources are updated in the background. | |||
// To learn more about the benefits of this model and instructions on how to | |||
// opt-in, read https://bit.ly/CRA-PWA | |||
const isLocalhost = Boolean( | |||
window.location.hostname === 'localhost' || | |||
// [::1] is the IPv6 localhost address. | |||
window.location.hostname === '[::1]' || | |||
// 127.0.0.0/8 are considered localhost for IPv4. | |||
window.location.hostname.match( | |||
/^127(?:\.(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)){3}$/ | |||
) | |||
); | |||
export function register(config) { | |||
if (process.env.NODE_ENV === 'production' && 'serviceWorker' in navigator) { | |||
// The URL constructor is available in all browsers that support SW. | |||
const publicUrl = new URL(process.env.PUBLIC_URL, window.location.href); | |||
if (publicUrl.origin !== window.location.origin) { | |||
// Our service worker won't work if PUBLIC_URL is on a different origin | |||
// from what our page is served on. This might happen if a CDN is used to | |||
// serve assets; see https://github.com/facebook/create-react-app/issues/2374 | |||
return; | |||
} | |||
window.addEventListener('load', () => { | |||
const swUrl = `${process.env.PUBLIC_URL}/service-worker.js`; | |||
if (isLocalhost) { | |||
// This is running on localhost. Let's check if a service worker still exists or not. | |||
checkValidServiceWorker(swUrl, config); | |||
// Add some additional logging to localhost, pointing developers to the | |||
// service worker/PWA documentation. | |||
navigator.serviceWorker.ready.then(() => { | |||
console.log( | |||
'This web app is being served cache-first by a service ' + | |||
'worker. To learn more, visit https://bit.ly/CRA-PWA' | |||
); | |||
}); | |||
} else { | |||
// Is not localhost. Just register service worker | |||
registerValidSW(swUrl, config); | |||
} | |||
}); | |||
} | |||
} | |||
function registerValidSW(swUrl, config) { | |||
navigator.serviceWorker | |||
.register(swUrl) | |||
.then(registration => { | |||
registration.onupdatefound = () => { | |||
const installingWorker = registration.installing; | |||
if (installingWorker == null) { | |||
return; | |||
} | |||
installingWorker.onstatechange = () => { | |||
if (installingWorker.state === 'installed') { | |||
if (navigator.serviceWorker.controller) { | |||
// At this point, the updated precached content has been fetched, | |||
// but the previous service worker will still serve the older | |||
// content until all client tabs are closed. | |||
console.log( | |||
'New content is available and will be used when all ' + | |||
'tabs for this page are closed. See https://bit.ly/CRA-PWA.' | |||
); | |||
// Execute callback | |||
if (config && config.onUpdate) { | |||
config.onUpdate(registration); | |||
} | |||
} else { | |||
// At this point, everything has been precached. | |||
// It's the perfect time to display a | |||
// "Content is cached for offline use." message. | |||
console.log('Content is cached for offline use.'); | |||
// Execute callback | |||
if (config && config.onSuccess) { | |||
config.onSuccess(registration); | |||
} | |||
} | |||
} | |||
}; | |||
}; | |||
}) | |||
.catch(error => { | |||
console.error('Error during service worker registration:', error); | |||
}); | |||
} | |||
function checkValidServiceWorker(swUrl, config) { | |||
  // Check if the service worker can be found. If it can't, reload the page.
fetch(swUrl, { | |||
headers: { 'Service-Worker': 'script' }, | |||
}) | |||
.then(response => { | |||
// Ensure service worker exists, and that we really are getting a JS file. | |||
const contentType = response.headers.get('content-type'); | |||
if ( | |||
response.status === 404 || | |||
(contentType != null && contentType.indexOf('javascript') === -1) | |||
) { | |||
// No service worker found. Probably a different app. Reload the page. | |||
navigator.serviceWorker.ready.then(registration => { | |||
registration.unregister().then(() => { | |||
window.location.reload(); | |||
}); | |||
}); | |||
} else { | |||
// Service worker found. Proceed as normal. | |||
registerValidSW(swUrl, config); | |||
} | |||
}) | |||
.catch(() => { | |||
console.log( | |||
'No internet connection found. App is running in offline mode.' | |||
); | |||
}); | |||
} | |||
export function unregister() { | |||
if ('serviceWorker' in navigator) { | |||
navigator.serviceWorker.ready | |||
.then(registration => { | |||
registration.unregister(); | |||
}) | |||
.catch(error => { | |||
console.error(error.message); | |||
}); | |||
} | |||
} |
@@ -1,5 +0,0 @@ | |||
// jest-dom adds custom jest matchers for asserting on DOM nodes. | |||
// allows you to do things like: | |||
// expect(element).toHaveTextContent(/react/i) | |||
// learn more: https://github.com/testing-library/jest-dom | |||
import '@testing-library/jest-dom/extend-expect'; |
@@ -44,9 +44,7 @@ yuan_cell_phone_number = config.config['yuan_cell_phone_number'] | |||
master_wxid = config.config['master_wxid'] | |||
room_wxid = config.config['room_wxid'] | |||
set_yuan_account(yuan_account, yuan_cell_phone_number) | |||
# Register the message callback
@@ -69,7 +67,11 @@ def on_recv_text_msg(wechat_instance: ntchat.WeChat, message): | |||
def on_recv_text_msg(wechat_instance: ntchat.WeChat, message): | |||
data = message["data"] | |||
wechat_instance.send_text( | |||
to_wxid=data["wxid"], | |||
content= | |||
f"您好~我是欧小鹏,一位能画能文的复合型人工智障。\n\n您可以回复【加群】加入内测交流群暨OpenI启智社区推广群。\n\n回复【盘古+input】可体验鹏城·盘古α大模型生成能力。如:“盘古 中国和美国和日本和法国和加拿大和澳大利亚的首都分别是哪里?”\n\n回复【文心+风格+prompt】可体验ERNIE-ViLG的AIGC图文生成能力(目前支持“水彩”、“油画”、“粉笔画”、“卡通”、“蜡笔画”、“儿童画”、“探索无限”七种风格),如“文心 油画 睡莲”。\n\n当然,您也可以和我自由对话。更多能力请加群后体验。\n所有响应均为模型生成结果,不代表项目作者观点!" | |||
) | |||
# Register the message callback
@@ -83,18 +85,22 @@ def on_recv_text_msg(wechat_instance: ntchat.WeChat, message): | |||
room_wxid = data["room_wxid"] | |||
if from_wxid != self_wxid == master_wxid and data["msg"].split( | |||
' ')[0] == '转发': | |||
rooms = wechat.get_rooms() | |||
for i, room in enumerate(rooms): | |||
print(room, self_wxid, room['manager_wxid']) | |||
if room['is_manager'] == '1': | |||
room_wxid = room['wxid'] | |||
result = data["msg"].split(' ')[1] | |||
wechat_instance.send_room_at_msg( | |||
to_wxid=room_wxid, | |||
content="{$@}," + result, | |||
at_list=['notify@all']) | |||
    # If the message is not from ourselves and is not a group message, reply to the sender
elif from_wxid != self_wxid and not room_wxid and data["msg"].split( | |||
' ')[0] == 'QA': | |||
        # Check for the QA keyword; the format is "QA XXXX", with a single space between the keyword and the question
@@ -110,7 +116,8 @@ def on_recv_text_msg(wechat_instance: ntchat.WeChat, message): | |||
res_json = json.loads(result.text) | |||
if res_json['answer'] is None or len(res_json['answer']) == 0: | |||
wechat_instance.send_text( | |||
to_wxid=from_wxid, content="未查询到与之匹配的问题,请重新输入咨询内容。") | |||
return res_json | |||
i = 0 | |||
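The hunk above parses the QA service's JSON response and treats a missing or empty `answer` field as "no matching question". That check can be sketched as a small self-contained helper; the name `parse_qa_response` is ours for illustration, not part of the project:

```python
import json


def parse_qa_response(text):
    """Parse the QA service's JSON body.

    Returns the answer list, or None when 'answer' is missing or
    empty (i.e. the service found no matching question).
    """
    res_json = json.loads(text)
    answer = res_json.get("answer")
    if answer is None or len(answer) == 0:
        return None
    return answer
```

Keeping the "no match" case as an explicit `None` lets the caller send the "please re-enter your question" reply in one place instead of re-checking the raw JSON.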
@@ -121,16 +128,17 @@ def on_recv_text_msg(wechat_instance: ntchat.WeChat, message): | |||
if i > 0: | |||
wechat_str = wechat_str + "------------------------\r\n" | |||
wechat_str = wechat_str + "问题" + str( | |||
i + 1) + ": " + queryIndex['title'] + '\n' | |||
wechat_str = wechat_str + "回答" + str( | |||
i + 1) + ": " + queryIndex['para'] + '\n' | |||
i = i + 1 | |||
        # Send the results back to the sender
wechat_instance.send_text(to_wxid=from_wxid, content=wechat_str) | |||
elif from_wxid != self_wxid and not room_wxid: | |||
if data["msg"] == '加群': | |||
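The string-concatenation loop in the hunk above builds the numbered 问题/回答 reply one fragment at a time. It can be collapsed into a helper, assuming (as the loop suggests) that each QA entry is a dict with `title` and `para` keys; `format_qa_reply` is a hypothetical name, not the project's:

```python
def format_qa_reply(entries):
    """Build the numbered 问题/回答 reply text from a list of QA entries,
    separating consecutive entries with a dashed divider."""
    parts = []
    for i, entry in enumerate(entries):
        if i > 0:
            parts.append("------------------------\r\n")
        parts.append("问题" + str(i + 1) + ": " + entry['title'] + '\n')
        parts.append("回答" + str(i + 1) + ": " + entry['para'] + '\n')
    return "".join(parts)
```

Collecting fragments in a list and joining once avoids the repeated string copies of `wechat_str = wechat_str + ...` and removes the manual `i = i + 1` bookkeeping.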
@@ -139,96 +147,125 @@ def on_recv_text_msg(wechat_instance: ntchat.WeChat, message): | |||
member.append(data['from_wxid']) | |||
# else: | |||
wechat_instance.send_text( | |||
to_wxid=from_wxid, | |||
content= | |||
f"启智社区(简称OpenI)是在国家实施新一代人工智能发展战略背景下,新一代人工智能产业技术创新战略联盟(AITISA)组织产学研用协作共建共享的开源平台与社区,以鹏城云脑科学装置及Trustie软件开发群体化方法为基础,全面推动人工智能领域的开源开放与协同创新。社区在“开源开放、尊重创新”的原则下,汇聚学术界、产业界及社会其他各界力量,努力建设成具有国际影响力的人工智能开源开放平台与社区。" | |||
) | |||
sleep_time = random.randint(0, 4) | |||
time.sleep(sleep_time) | |||
wechat_instance.send_text( | |||
to_wxid=from_wxid, | |||
content= | |||
f"入群后,您可以和启智社区的开发者进行交流。若您在使用中有任何问题,您可以在群内提出,提问格式可参考:\n 1. 问题描述:\n2. 相关环境:GPU / NPU\n3. 相关集群:启智/智算\n4. 任务名:\n5. 问题截图or log:" | |||
) | |||
sleep_time = random.randint(0, 4) | |||
time.sleep(sleep_time) | |||
wechat_instance.add_room_member( | |||
room_wxid=room_wxid, member_list=member) | |||
sleep_time = random.randint(0, 4) | |||
time.sleep(sleep_time) | |||
wechat_instance.send_room_at_msg( | |||
to_wxid=room_wxid, | |||
content= | |||
"{$@},欢迎加入欧小鹏内测交流群!详细内容可看群公告~\nGithub地址:https://github.com/thomas-yanxin/Wechat_bot;\nOpenI启智地址:https://git.openi.org.cn/Learning-Develop-Union/Wechat_bot\nPrompt可参考:https://github.com/PaddlePaddle/PaddleHub/blob/develop/modules/image/text_to_image/ernie_vilg/README.md#%E5%9B%9B-prompt-%E6%8C%87%E5%8D%97\n谢谢关注哇~", | |||
at_list=member) | |||
elif data["msg"].split(' ')[0] == '文心': | |||
style_input = data['msg'].split(' ')[1] | |||
text_prompt = data["msg"].split(' ')[2] | |||
n = 0 | |||
if style_input not in [ | |||
'油画', "水彩", "粉笔画", "卡通", "儿童画", "蜡笔画", "探索无限" | |||
]: | |||
wechat_instance.send_text( | |||
to_wxid=from_wxid, | |||
content= | |||
f"目前ERNIE-VILG仅支持“油画、水彩、粉笔画、卡通、儿童画、蜡笔画、探索无限”七种风格,您输入的风格不在此列,请检查后重新输入!" | |||
) | |||
else: | |||
wechat_instance.send_text( | |||
to_wxid=from_wxid, content=f"好哦~ 正在作画,请您耐心等待~!") | |||
# data_image = ernie_vilg(text_prompt, style_input) | |||
# if type(data_image) == 'str': | |||
# wechat_instance.send_text(to_wxid=from_wxid, content=f"对不起,您的输入存在敏感词, 请重新输入") | |||
image_list = asyncio.run( | |||
image_url( | |||
text=text_prompt, | |||
style=style_input, | |||
file_path=file_path)) | |||
for image_path in image_list: | |||
time.sleep(1) | |||
wechat_instance.send_image( | |||
to_wxid=from_wxid, file_path=image_path) | |||
time.sleep(5) | |||
wechat_instance.send_text(to_wxid=from_wxid, content=f"图像已生成完毕,希望您能喜欢~") | |||
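Every branch in the handler above dispatches on the first space-separated token of the incoming message (转发, QA, 加群, 文心, and so on), then treats the remaining tokens as arguments (e.g. "文心 油画 睡莲" is style + prompt). That parsing step can be sketched on its own; `parse_command` below is an illustrative helper, not part of the bot:

```python
def parse_command(msg):
    """Split an incoming message into (keyword, args).

    Commands are of the form '<keyword> <arg> [<arg> ...]', with single
    spaces between fields, e.g. '文心 油画 睡莲' -> ('文心', ['油画', '睡莲']).
    A bare message with no spaces yields an empty argument list.
    """
    parts = msg.split(' ')
    return parts[0], parts[1:]
```

Factoring the split out once would also remove the repeated `data["msg"].split(' ')[0]` calls scattered through the `elif` chain, and make it harder to index past the end of the list when a keyword arrives without its arguments.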