Invoke LLM
This endpoint allows you to invoke a language model using specific parameters, such as the expert and model IDs, input data, and additional configuration options.
Ensure that you replace YOUR_API_KEY_HERE with the actual API key generated from your profile in the B-Bot hub.
The temperature parameter is optional; if not provided, a default value will be used.
The max_tokens parameter is optional; if not provided, a default value will be used.
Authorizations
API key for authentication
Body
The ID of the expert you want to invoke.
The ID of the language model to be used.
A list of message objects containing the role and content of the input data.
The sampling temperature, which controls the randomness of the model's output: lower values make responses more deterministic, higher values more varied.
The maximum number of tokens (words or word pieces) to generate in the response.
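The request described above can be sketched as follows. This is a minimal illustration, not the official client: the endpoint URL and the JSON field names (`expert_id`, `model_id`, `messages`, `temperature`, `max_tokens`) are assumptions inferred from the parameter descriptions, so check them against the actual B-Bot hub documentation.

```python
import json

# Placeholder URL -- replace with the real Invoke LLM endpoint (assumption).
API_URL = "https://api.example.com/invoke"


def build_invoke_request(api_key, expert_id, model_id, messages,
                         temperature=None, max_tokens=None):
    """Assemble the headers and JSON body for an Invoke LLM call.

    Field names are hypothetical; verify them against the B-Bot hub docs.
    """
    headers = {
        "Authorization": f"Bearer {api_key}",  # API key for authentication
        "Content-Type": "application/json",
    }
    body = {
        "expert_id": expert_id,   # assumed field name: expert to invoke
        "model_id": model_id,     # assumed field name: language model to use
        "messages": messages,     # list of {"role": ..., "content": ...} objects
    }
    # temperature and max_tokens are optional; omitting them lets the
    # server apply its default values.
    if temperature is not None:
        body["temperature"] = temperature
    if max_tokens is not None:
        body["max_tokens"] = max_tokens
    return headers, json.dumps(body)


headers, payload = build_invoke_request(
    "YOUR_API_KEY_HERE",
    expert_id="expert_123",
    model_id="model_abc",
    messages=[{"role": "user", "content": "Hello!"}],
    temperature=0.7,
)
```

To actually send the request, pass the result to any HTTP client, e.g. `requests.post(API_URL, headers=headers, data=payload)`.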