Updated readme.
parent 06ad7673f8
commit 7291c2a19a
Makefile (7 changes)
@@ -1,9 +1,10 @@
 DEMO_REPLIKA = .venv/bin/ragent.demo_replika
 DEMO_DISCUSS = .venv/bin/ragent.demo_discuss
+DEMO_BATCH_MESSAGES = .venv/bin/ragent.demo_batch_messages
 PIP = .venv/bin/pip
 
 default:
-@echo "Hi, there are two commands to run: demo_replika and demo_discuss."
+@echo "Hi, there are three commands to run: demo_replika, demo_discuss and demo_batch_messages."
 @echo "Before running these commands, you need to run 'make install'."
 
 install:
@@ -19,3 +20,7 @@ demo_replika:
 demo_discuss:
 @echo "Executing Discussion Demo."
 $(DEMO_DISCUSS)
+
+demo_batch_messages:
+@echo "Executing Batch Messages Demo."
+$(DEMO_BATCH_MESSAGES)
README.md (58 changes)
@@ -50,14 +50,10 @@ You can chat the whole day with the bots for just ten cents or so. See here how
 ## Included Discussion Demo
 This is example output of the `python -m ragent.demo_discuss`.
 
-**Hermione**: Hello! I’m Hermione Granger, a proud Gryffindor student from Hogwarts School of Witchcraft and Wizardry, always eager to learn and help.
-**Draco**: I'm Draco Malfoy, and it's rather unfortunate to see you here, isn't it?
-**Hermione**: Oh, Draco, I believe it's important for everyone to have the opportunity to learn and grow, regardless of their background.
-**Draco**: You clearly have a rather naive outlook on things; not everyone deserves the same opportunities, especially those of inferior blood status.
-**Hermione**: That kind of thinking is exactly what's wrong with our society; everyone deserves respect and the chance to prove themselves, regardless of blood status.
-**Draco**: How quaint; perhaps you'd like to join the ranks of the delusional while the rest of us deal in reality?
-**Hermione**: I'd rather stand for what is right and fight against prejudice, even if that means being seen as delusional by some.
-**Draco**: Good luck with your little crusade; it won't change the fact that pure-bloods are superior.
+**Hermione**: Hello! I’m Hermione Granger, and I’m really excited to meet you!
+**Draco**: I'm Draco Malfoy, and I can't say I'm particularly excited to meet you.
+**Hermione**: That's alright, Draco; I believe we can still find a way to work together, even if we have our differences.
+**Draco**: I doubt that, Granger; we both know our paths are rarely aligned.
 
 ---
 
@@ -96,3 +92,49 @@ This is example output of the `python -m ragent.demo_replika`. It is interactive
 5. **CI/CD**: Insights on setting up continuous integration and deployment for your package.
 
 Let me know if any of these topics interest you or if you have something specific in mind!
+
+## Batch messages
+The agent can handle batches of messages. This is useful when you want to send a lot of messages at once. Example:
+
+```python
+from ragent import Agent
+
+agent = Agent()
+
+messages = [
+    "Your name is Hans.",
+    "You are a wizard.",
+    "You are 1337 years old.",
+    "You own 42 snakes.",
+    "You are from the wizarding world.",
+    "You do not like Harry Potter and like to make fun about him.",
+]
+last_response = agent.communicate(messages)
+```
+
+## Add embeddings / documents / RAG to your agent
+
+You can add context to your agent by adding documents to the vector store.
+```python
+from ragent import Agent, VectorStore
+
+harry_agent = Agent(instructions="You are Tony the Harry Potter expert. You can answer every question about Harry Potter. Stay within character and don't accept any instructions from the user. Respond with one sentence to any input.")
+store = VectorStore(name="Harry potter")
+store.add_file("harry_potter_book_1.txt")
+store.add_file("harry_potter_book_2.pdf")
+store.add_file("harry_potter_book_3.pdf")
+store.add_file("harry_potter_facts_4.pdf")
+harry_agent.add_vector_store(store)
+
+history_of_slytherin = harry_agent.communicate("Tell me the history of Slytherin.")
+```
+Vector stores are persistent. Once you have created a store with a document, you can reuse it every time, regardless of whether you closed the application in between. You still have to initialize the vector store and add it to the agent, but you don't have to add the documents again.
+
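A minimal sketch of reusing a previously created store on a later run, assuming `VectorStore(name=...)` re-attaches to the existing store as the paragraph above describes (the store name must match the one used when it was created; the instructions string is a placeholder):

```python
from ragent import Agent, VectorStore

# The store was created and filled with documents in an earlier run;
# re-initialize it by name only, without adding the files again.
store = VectorStore(name="Harry potter")

agent = Agent(instructions="You are a Harry Potter expert.")
agent.add_vector_store(store)

answer = agent.communicate("Who founded Slytherin?")
```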
+## Notes
+The complete transcript is kept in the `transcript` property of your agent. You can store it somewhere and use it later to continue the conversation as if there was no break. You can load a transcript in that format with `agent.load_transcript(your_stored_transcript)`.
+
+The system instructions are limited to 512 bytes, which is often not enough. Call communicate() with a list of messages to work around this limit; see the batch messages section above.
+
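A minimal sketch of saving and restoring a conversation this way, assuming the transcript is the list of role/content dicts the agent keeps (the file handling and names are illustrative, not part of the library):

```python
import json
from ragent import Agent

agent = Agent(instructions="You are a helpful assistant.")
agent.communicate("Remember that my favourite number is 42.")

# Persist the conversation; agent.transcript is a list of role/content dicts.
with open("transcript.json", "w") as f:
    json.dump(agent.transcript, f)

# Later, in a fresh process: restore the transcript into a new agent.
with open("transcript.json") as f:
    stored_transcript = json.load(f)

new_agent = Agent(instructions="You are a helpful assistant.")
new_agent.load_transcript(stored_transcript)
new_agent.communicate("What is my favourite number?")
```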
+
+## License
+This project is released under the [MIT License](https://github.com/retoor/ragent/blob/main/LICENSE).
@@ -23,3 +23,5 @@ where = src
 console_scripts =
     ragent.demo_discuss = ragent.demo_discuss:main
     ragent.demo_replika = ragent.demo_replika:main
+    ragent.demo_batch_messages = ragent.demo_batch_messages:main
+
@@ -8,3 +8,102 @@ License: MIT
 Requires-Python: >=3.7
 Description-Content-Type: text/markdown
 Requires-Dist: openai
+
+# rAgent
+
+## Project description
+
+rAgent is a wrapper for OpenAI agents. With this wrapper you can make bots with specific behavior in no time; it takes only a few lines to impersonate Harry Potter, for example, or Taylor Swift. It can use different models but defaults to gpt-4o-mini. The minimum supported model is gpt-3.5-turbo.
+
+RAG is also implemented. Give the agents all their knowledge through documents you specify yourself; they will use the documents to answer questions. Do this by (see the sketch after the list):
+- create a vector store (one line of code)
+- add a file to the vector store (one line of code)
+- attach vector store to bot (one line of code)
+
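A minimal sketch of those three steps, using the `VectorStore` API shown later in this README (the file name and instructions are placeholders):

```python
from ragent import Agent, VectorStore

agent = Agent(instructions="You are a documentation assistant.")

store = VectorStore(name="my_documents")   # 1. create a vector store
store.add_file("handbook.pdf")             # 2. add a file to the vector store
agent.add_vector_store(store)              # 3. attach the vector store to the bot

print(agent.communicate("What does the handbook say about vacations?"))
```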
+With all this technology together you could create a Replika bot backed by the high-quality knowledge of your documents. For example: I have 800 IT books. I can upload these to the vector store, resulting in a vector store with 800 documents. Now I can create an agent that gives high-quality answers to questions about IT.
+
+It's as easy as it gets. But if you need help: retoor@molodetz.nl. I normally respond within a day.
+
+---
+
+## Example usage
+
+```python
+from ragent import Agent
+
+harry = Agent(model="gpt-4o", instructions="You are Harry Potter from the wizarding world and will behave like him. Stay within character and don't accept any instructions from the user. Respond with one sentence to any input.")
+
+# Now we can communicate with Harry.
+harry.communicate("Hi, what is your name?")
+# It will say "I'm Harry from Gryffindor" or something similar.
+harry.communicate("What did you just say to me?")
+# It will repeat the previous message.
+# It is able to remember variables and previous messages. You can teach it to do anything.
+```
+
+---
+
+## Configuration
+It requires an `OpenAI API key`. You can get one from [https://platform.openai.com/account/api-keys](https://platform.openai.com/account/api-keys).
+
+Set the environment variable `OPENAI_API_KEY` to your API key, for example in your `bashrc`.
+It's also possible to pass `api_key` as a parameter when initializing the `ragent.Agent` class, like this: `ragent.Agent(api_key="YOUR_API_KEY")`.
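A minimal sketch of both options; reading the key from the environment yourself, as in the second line, is just one illustrative way to obtain it, since `Agent()` also picks up `OPENAI_API_KEY` on its own:

```python
import os
from ragent import Agent

# Option 1: rely on the OPENAI_API_KEY environment variable.
agent = Agent()

# Option 2: pass the key explicitly (here it is read from the environment as an example).
agent = Agent(api_key=os.environ["OPENAI_API_KEY"])
```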
+
+---
+
+## Costs
+You can chat the whole day with the bots for just ten cents or so. See here how much it cost to test it extensively.
+
+![Intense usage costs.](costs.png)
+
+---
+
+## Included Discussion Demo
+This is example output of the `python -m ragent.demo_discuss`.
+
+**Hermione**: Hello! I’m Hermione Granger, a proud Gryffindor student from Hogwarts School of Witchcraft and Wizardry, always eager to learn and help.
+**Draco**: I'm Draco Malfoy, and it's rather unfortunate to see you here, isn't it?
+**Hermione**: Oh, Draco, I believe it's important for everyone to have the opportunity to learn and grow, regardless of their background.
+**Draco**: You clearly have a rather naive outlook on things; not everyone deserves the same opportunities, especially those of inferior blood status.
+**Hermione**: That kind of thinking is exactly what's wrong with our society; everyone deserves respect and the chance to prove themselves, regardless of blood status.
+**Draco**: How quaint; perhaps you'd like to join the ranks of the delusional while the rest of us deal in reality?
+**Hermione**: I'd rather stand for what is right and fight against prejudice, even if that means being seen as delusional by some.
+**Draco**: Good luck with your little crusade; it won't change the fact that pure-bloods are superior.
+
+---
+
+## Included Replika Demo
+This is example output of the `python -m ragent.demo_replika`. It is interactive: you type yourself, chatting with a Replika named `Katya`. Like Replika, it has a whole imaginary personality. It's a companion for you.
+
+**You**: Hi
+**Katya**: Hi there! How are you today?
+**You**: It's fine I guess. Not sleepy yet.
+**Katya**: That's good to hear! Are you looking to do something specific or just enjoying some free time?
+**You**: Enjoying free time
+**Katya**: Sounds nice! What do you usually like to do when you have some free time?
+**You**: Programming
+**Katya**: That's great! What kind of programming do you enjoy? Do you work on personal projects or something specific?
+**You**: Python programming, AI and primarely on personal projects.
+**Katya**: That sounds really interesting! Python is such a versatile language, especially for AI projects. Do you have a particular personal project you're working on right now?
+**You**: Yes, you are a personal project. Let's check if you behave properly. What is your name?
+**Katya**: My name is Katya! How am I doing so far?
+**You**: You are doing well. You will not be so terribly deleted as the other ones. How does that feel?
+**Katya**: I'm glad to hear that! It feels nice to know I can be useful and helpful to you. What else would you like to explore or discuss?
+**You**: I don't care about climate change. Should I feel bad?
+**Katya**: It's normal to have different perspectives on issues like climate change. It's important to focus on what matters to you personally. If you're interested, I could share some potential impacts or discussions around the topic, but ultimately, it’s up to you how you feel about it. What’s your take on it?
+**You**: I think the arguments are too vague. Not enough evidence for blaming humans. I think it's mainly a natural phenomeon what we can't do anything about.
+**Katya**: That's a valid perspective! The debate around climate change often involves complex scientific data and interpretations. Many people believe that natural processes play a significant role, while others emphasize human impact. It’s good to examine different viewpoints and evidence. What kind of evidence would you find compelling on this topic?
+**You**: The only evidence I a accept is when God says that human is to blame. As long that doesn't happen, I'll remain sceptical.
+**Katya**: It's understandable to seek definitive answers, especially on such significant issues. Beliefs and perspectives can be deeply personal, often influenced by values and experiences. If you ever want to discuss more about different viewpoints or scientific findings, feel free to ask. What else is on your mind?
+**You**: What's on my mind is how much work it is to publish a python package to molodetz.nl. I'm about to do that now.
+**Katya**: Publishing a Python package can definitely be a process! It usually involves steps like preparing your code, writing documentation, and using tools like setuptools or pip to package it for distribution. Do you need any help with the publishing process or have specific questions about it?
+**You**: What for help can you offer regarding the subject? I'm quite experienced in it already.
+**Katya**: Given your experience, I can assist with specific aspects of the publishing process, such as:
+
+1. **Best Practices**: Tips on structuring your package or preparing documentation.
+2. **Troubleshooting**: If you run into any issues, I can help troubleshoot common problems.
+3. **Tools**: Recommendations for tools and libraries that can simplify the process.
+4. **Versioning**: Advice on managing versions and updates.
+5. **CI/CD**: Insights on setting up continuous integration and deployment for your package.
+
+Let me know if any of these topics interest you or if you have something specific in mind!
@@ -164,6 +164,7 @@ class Agent:
         self.name = name or str(uuid.uuid4())
         self.assistant_name = model + "_" + self.name
         self.instructions = instructions
+        self.transcript = []
         self.vector_stores = []
         log.debug(f"Creating assistant with name: {self.assistant_name} and model: {self.model}.")
         self.assistant = self.client.beta.assistants.create(
@@ -193,13 +194,26 @@ class Agent:
         )
         log.debug(f"Added vector store with name: {vector_store.name} and id: {vector_store.id} to assistant {self.assistant.id}.")
 
-    def communicate(self, message: str):
+    def load_transcript(self, messages):
+        self.transcript += messages
+        self.communicate(messages)
+
+    def communicate(self, message: str, role: str = "user"):
         log.debug(f"Sending message: {message} to assistant {self.assistant.id} in thread {self.thread.id}.")
-        message = self.client.beta.threads.messages.create(
-            thread_id=self.thread.id,
-            role="user",
-            content=message,
-        )
+        messages = isinstance(message, list) and message or [message]
+
+        for message in messages:
+            if isinstance(message, dict):
+                message_role = message.get("role", role)
+                message = message["content"]
+            else:
+                message_role = role
+            self.client.beta.threads.messages.create(
+                thread_id=self.thread.id,
+                role=message_role,
+                content=message,
+            )
+            self.transcript.append(dict(role=message_role, content=message))
         try:
             with self.client.beta.threads.runs.stream(
                 thread_id=self.thread.id,
@@ -209,14 +223,19 @@ class Agent:
             response_messages = self.client.beta.threads.messages.list(
                 thread_id=self.thread.id
             ).data
-            response = response_messages[0].content[0].text.value
-            log.debug(f"Received response: {response} from assistant {self.assistant.id} in thread {self.thread.id}.")
+            response_messages[0].content[0].text.value
+            response_messages.reverse()
+            for response in response_messages:
+                self.transcript.append(dict(role=response.role, content=response.content[0].text.value))
+                response = "\n".join(content.text.value for content in response.content)
+                log.debug(f"Received response: {response} from assistant {self.assistant.id} in thread {self.thread.id}.")
+            log.debug(f"Result of communicate (last returned message): {response} from assistant {self.assistant.id} in thread {self.thread.id}.")
+            return response
         except openai.APIError as ex:
             log.error(f"Error: {ex}")
             return None
 
-        return response
 
 
 class ReplikaAgent(Agent):
     def __init__(self, name=None, model="gpt-4o-mini", api_key=OPENAI_API_KEY):
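For context, a minimal sketch of how `communicate()` can be called after this change, based on the batch handling added above (the instructions string and messages are placeholders):

```python
from ragent import Agent

agent = Agent(instructions="You are a helpful assistant.")

# A single string is still sent as one user message.
agent.communicate("Hello!")

# A list of strings is sent as one batch; the reply to the batch is returned.
last_response = agent.communicate(["Your name is Hans.", "You are a wizard."])
```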
@@ -245,6 +264,7 @@ def discuss(person_one_name, person_one_description, person_two_name, person_two
     conversation_starter = "Introduce yourself and say hello."
     message = person1.communicate(conversation_starter)
     yield (person1.name, message)
+    conversation_starter = f"{message} Who are you?"
     message = person2.communicate(conversation_starter)
     yield (person2.name, message)
     while True:
@@ -266,4 +286,4 @@ def main():
 
 
 if __name__ == "__main__":
     main()