LLM configuration

Last updated: 2024-02-19 14:35:34
    By integrating with third-party LLM engines, you can give chatbots anthropomorphic and diversified response capabilities.
    After an LLM is bound, the chatbot forwards messages that users send to the bot account to the LLM platform, and then returns the LLM's responses to the users.
    Note:
    To use large model capabilities, first register a large model account on a third-party LLM engine platform, and then manage it in the Chat AI chatbot. Currently, OpenAI ChatGPT is supported.

    Binding the LLM engine

    1. Go to the chatbot's basic configuration page, click the Q&A strategy settings card, and then click Bind.
    
    
    
    2. In the pop-up window, select the large model engine and enter the account information configured on the LLM platform.
    
    
    
    3. Click Start using to complete the binding of the large model engine.
    
    
    

    Modifying the LLM engine configuration

    After the large model engine is bound, you can modify its model information and parameters, or perform custom training for the large model.
    1. Go to the chatbot's basic configuration page and click Configuration on the Q&A Strategy Settings card.
    
    
    
    2. In the large model configuration pop-up window, modify the model information and parameters, or perform custom training.
    
    
    
    3. Click Save to update the LLM engine configuration.
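
    The exact fields shown in the configuration window depend on the engine you bound; as an illustration only, a ChatGPT-style engine typically exposes parameters like the following (the field names below are assumptions for this sketch, not the console's actual schema):

    ```python
    # Illustrative only: typical parameters for a ChatGPT-style large model engine.
    # The actual fields in the configuration pop-up depend on the bound engine.
    llm_config = {
        "model": "gpt-3.5-turbo",   # which model the engine should use
        "temperature": 0.7,         # 0-2; higher values give more varied replies
        "max_tokens": 512,          # upper bound on the length of each reply
    }

    # Lowering the temperature makes answers more deterministic and repeatable.
    llm_config["temperature"] = 0.2
    ```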

    Large model response methods

    After the LLM engine is bound to the chatbot, you can use their combined capabilities in two ways.

    Method 1: Response by the LLM when the knowledge base is not hit

    On the Q&A Strategy Settings card, select Custom Q&A, and then enable Large model fallback in the Fallback reply rule area below.
    In this mode, the user's question is first matched against the knowledge base:
    If the knowledge base is hit (including direct answer, small talk, or clarification), the reply comes from the knowledge base.
    If the knowledge base is not hit, the reply is provided directly by the LLM engine.
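
    The routing above can be sketched as a simple priority check. This is a minimal illustration, not the product's real API: match_knowledge_base and ask_llm are hypothetical stand-ins for the knowledge base lookup and the bound LLM engine.

    ```python
    # Hedged sketch of the "large model fallback" rule: try the knowledge base
    # first, and only fall back to the LLM engine when there is no hit.

    def match_knowledge_base(question):
        # Toy knowledge base: direct answers keyed by exact question.
        kb = {"What are your business hours?": "We are open 9:00-18:00, Monday to Friday."}
        return kb.get(question)  # None means the knowledge base is not hit

    def ask_llm(question):
        # Hypothetical stand-in for the bound LLM engine (e.g. OpenAI ChatGPT).
        return f"[LLM] {question}"

    def answer(question):
        kb_answer = match_knowledge_base(question)
        if kb_answer is not None:
            return kb_answer      # knowledge base hit: reply from the knowledge base
        return ask_llm(question)  # not hit: reply comes from the LLM engine
    ```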
    
    
    

    Method 2: Direct responses from the LLM engine

    On the Q&A Strategy Settings card, select the Large Model Answer mode.
    In this mode, user questions are answered directly by the LLM engine.
    
    
    