Last updated: 2024-02-20



Here, you can configure the Bot's basic parameters. These settings directly affect the Bot's performance.



Here, you can set the LLM parameters for the Bot.

  • LLM: The LLM used by the Bot.
  • Temperature: Can also be understood as "creativity". The higher the value, the more divergent the LLM's replies: they can be highly creative, but also less predictable. The lower the value, the more convergent the replies: they will be more rigorous and stable.
  • Identity prompt: Determines what kind of entity the Bot will become. In the identity prompt you can set the Bot's role, goals, tasks, processes, limitations, skills, and so on, so that the Bot has a clear understanding of its role and ultimately performs tasks as you expect. You can also click "AI" in the top right corner to optimize your current identity prompt: the AI will rewrite it into a more effective one based on established prompt-engineering practices.
  • Token config: Allocates how much of the LLM context window each type of content may occupy.
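The effect of temperature can be pictured with a small sketch (this is an illustration of the general sampling idea, not the platform's actual code): dividing the model's raw logits by the temperature before the softmax flattens the output distribution at high temperatures and sharpens it at low ones.

```python
import math

def softmax_with_temperature(logits, temperature):
    """Convert raw logits into probabilities, scaled by temperature.

    Higher temperature -> flatter distribution (more creative, divergent);
    lower temperature  -> sharper distribution (more rigorous, stable).
    """
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]           # made-up scores for three candidate tokens
creative = softmax_with_temperature(logits, temperature=2.0)
strict = softmax_with_temperature(logits, temperature=0.5)
```

With the low temperature, the top-scoring token dominates far more strongly, so the sampled replies vary less from run to run.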

Knowledge Base

The Bot will use the user's input to perform vector retrieval in the knowledge base, recall several knowledge chunks as part of the context, and submit them to the LLM to perform its task.


Here, you can set how the Bot uses the knowledge base.

  • Knowledge Quantity: The number of knowledge documents available for retrieval.
  • Recall Mechanism:
    • Relevance: The minimum semantic relevance between the user's question and a knowledge chunk. Only chunks with a relevance score greater than or equal to this threshold will be used as references for the Bot's answer.
    • Max Recall Num: The maximum number of knowledge chunks recalled after retrieval.
    • Retrieval Weight: The weighting between the vector (semantic) retrieval and keyword retrieval strategies during search.
  • Empty Knowledge Retrieval: The Bot's response strategy when no knowledge chunks are recalled from the knowledge base.
  • Knowledge Reference Display: Whether to display, in the dialogue interface, the source of the knowledge used in each LLM response.
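The recall settings above can be sketched as a simple hybrid-retrieval filter. This is only an illustration of how the three settings interact, not the platform's implementation; the chunk fields and scores are made up for the example.

```python
def recall_chunks(chunks, retrieval_weight, relevance_threshold, max_recall_num):
    """Combine vector and keyword scores, filter by relevance, cap the count.

    `retrieval_weight` is the share given to vector (semantic) retrieval;
    the remainder goes to keyword retrieval. Each chunk is a dict with
    hypothetical 'vector_score' and 'keyword_score' fields in [0, 1].
    """
    scored = []
    for chunk in chunks:
        score = (retrieval_weight * chunk["vector_score"]
                 + (1 - retrieval_weight) * chunk["keyword_score"])
        if score >= relevance_threshold:   # Relevance: drop weakly related chunks
            scored.append((score, chunk["text"]))
    scored.sort(reverse=True)              # best matches first
    return [text for _, text in scored[:max_recall_num]]  # Max Recall Num cap

chunks = [
    {"text": "refund policy", "vector_score": 0.9, "keyword_score": 0.7},
    {"text": "shipping times", "vector_score": 0.4, "keyword_score": 0.3},
    {"text": "return address", "vector_score": 0.8, "keyword_score": 0.6},
]
recalled = recall_chunks(chunks, retrieval_weight=0.7,
                         relevance_threshold=0.5, max_recall_num=2)
```

When the threshold filters out every chunk, the recalled list is empty, which is exactly the situation the Empty Knowledge Retrieval strategy handles.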



Here, you can configure the Tools needed by the Bot.

Within the identity prompt, you can define when the Bot should use each Tool. The structure is:

Use the {Tool} plugin/tool to {purpose/task} when {timing}.

For example, if you want the Bot to call DALL-E-3 to generate paintings based on the main content of the generated story, you can write it like this:

use the `DALL-E-3` plugin to generate cartoon-style paintings for the pivotal scenes of the story when the whole story generation is done.
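If you generate such instructions for several Tools, filling the template programmatically keeps them consistent. A hypothetical helper (the function name is made up for illustration):

```python
def tool_instruction(tool, purpose, timing):
    """Fill the 'use {Tool} to {purpose/task} when {timing}' prompt template."""
    return f"use the `{tool}` plugin to {purpose} when {timing}."

line = tool_instruction(
    "DALL-E-3",
    "generate cartoon-style paintings for the pivotal scenes of the story",
    "the whole story generation is done",
)
```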



Here, you can configure the memory capabilities that the Bot will use during conversations.

  • Short-term Memory: You can set the Bot to remember the most recent few rounds of conversation, where one question and one answer count as one round.
  • Long-term Memory: This will remember conversation content over a longer span.
  • User Attributes: This will preset the user's attributes in the memory, allowing the Bot to have personalized information about the user as knowledge, thereby providing better personalized service.

If memory is turned off, conversations with the Bot will have no contextual understanding, and each round of conversation will be independent.
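Short-term memory can be pictured as a sliding window over the transcript. A minimal sketch, assuming each round is a (question, answer) pair (the data shapes are invented for the example):

```python
def short_term_memory(rounds, max_rounds):
    """Keep only the most recent `max_rounds` (question, answer) rounds.

    With max_rounds=0 (memory off), the Bot sees no prior context,
    so every round of conversation is independent.
    """
    if max_rounds <= 0:
        return []
    return rounds[-max_rounds:]

history = [
    ("Hi", "Hello! How can I help?"),
    ("Write a story", "Once upon a time..."),
    ("Make it shorter", "A brief tale..."),
]
context = short_term_memory(history, max_rounds=2)  # only the last 2 rounds
```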

Welcome & Guide


Here, you can configure the welcome message and user guide information for the Bot.

  • Welcome message: When a user visits the Bot, the Bot will automatically greet the user with this welcome message.
  • Suggested questions: After the Bot finishes replying, it will automatically offer the user 3 follow-up questions to guide the conversation forward.



  • Maximum number of images: The maximum number of images that can be entered when conversing with the Bot.
  • Upload image quality: The quality of images input to the Bot. The higher the quality, the better the multimodal LLM (MLLM) can respond.
  • Voice input: Whether to allow voice input to the Bot. This capability can convert the user's voice input into text and submit it to the Bot.



  • Voice output: Whether to allow the Bot's output content to be converted to voice.
  • Sound: The voice of the Bot's output content.
  • Sound quality: The sound quality of the Bot's output content.
  • Text output language: Define the output language of the Bot.



Here, you can converse with the Bot and observe the effect of parameter adjustments in real time.

At the same time, you can also use the content of the conversation here to train the Bot.