Multi-Modal Generation
This document shows you how to generate responses and manage your generations.
List Available Models
This endpoint lists the models available in the AI Library platform.
Multiple models are available for different types of data (text, image, and speech), provided by OpenAI, Anthropic, Google, Stability, and AI Library.
Optional attributes
- Name: provider
- Type: string
- Required: optional
- Description: Filter the models by provider. Available providers are openai, anthropic, google, stability, and ailibrary.
Request
curl -G https://api.ailibrary.ai/v1/models \
-H "X-Library-Key: your-api-key"
Response
[
  {
    "modelId": "gpt-4-turbo@openai",
    "name": "GPT-4 Turbo",
    "model": "gpt-4-turbo",
    "provider": "OpenAI",
    "logo": "https://ailib-public.s3.us-west-2.amazonaws.com/public/openai.png",
    "type": "text-to-text"
  },
  ...
]
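The provider filter is passed as a query parameter. A minimal Python sketch of listing models and selecting one client-side by type (fetch_models and pick_model are hypothetical helpers, not part of an official SDK, and the sample data below is illustrative):

```python
import requests

def fetch_models(api_key, provider=None):
    """Fetch the model list, optionally filtered by provider."""
    params = {"provider": provider} if provider else {}
    resp = requests.get(
        "https://api.ailibrary.ai/v1/models",
        headers={"X-Library-Key": api_key},
        params=params,
    )
    resp.raise_for_status()
    return resp.json()

def pick_model(models, wanted_type):
    """Return the first modelId whose type matches wanted_type, or None."""
    for m in models:
        if m.get("type") == wanted_type:
            return m["modelId"]
    return None

# Example of client-side selection on an already-fetched list:
models = [{"modelId": "gpt-4-turbo@openai", "type": "text-to-text"}]
print(pick_model(models, "text-to-text"))  # gpt-4-turbo@openai
```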
Generate Response
This endpoint generates responses from multiple AI models and lets you specify a knowledge base for Retrieval Augmented Generation (RAG). All responses are in markdown format. For images and audio, the response takes the form ![Alt-text](https://example-link.com).
Request body
- Name: modelId
- Type: string
- Required: required
- Description: Unique identifier of the model in the format model@provider. See List Available Models above for the full list.

- Name: prompt
- Type: string
- Required: required
- Description: The prompt to send to the model.

- Name: knowledgeId
- Type: string
- Required: optional
- Description: The id of the knowledge base to use for retrieval.
Response body
- Name: text
- Type: markdown
- Description: The response, generated in markdown format. Images are also returned in this field, in the format ![Alt-text](https://example-link.com).

- Name: audio
- Type: string
- Description: Generated audio is returned as a link and can be embedded directly in your application.

- Name: context
- Type: json
- Description: The context of the prompt, in raw JSON. The format varies with the kind of knowledge base. This is useful for debugging and understanding the context supplied to the model.
Request
import requests
import json

url = "https://api.ailibrary.ai/v2/generate"

payload = json.dumps({
    "modelId": "model@provider",
    "prompt": "your-prompt",
    "urls": [
        "file-id"  # optional
    ],
    "knowledge": [
        {
            "id": "knowledge-id"
        }
    ]
})
headers = {
    "X-Library-Key": "••••••",
    "Content-Type": "application/json"
}

response = requests.post(url, headers=headers, data=payload)
print(response.text)
Response
{
  "text": "Generated text in markdown format from the model",
  "audio": "",
  "provider": "openai",
  "model": "gpt-3.5-turbo-instruct",
  "prompt": "US presidential elections 2024",
  "knowledgeId": "news",
  "context": [
    "... context of the prompt..."
  ],
  "tokens": 1087,
  "userName": "Arani Chaudhuri",
  "userEmail": "arani@ailibrary.ai"
}
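Since generated images and audio arrive as markdown links inside the text field, you may want to pull the embedded URLs out before rendering. A small sketch (the helper name and sample string are illustrative, not part of the API):

```python
import re

# Matches the documented markdown form ![Alt-text](https://example-link.com)
MD_MEDIA = re.compile(r"!\[([^\]]*)\]\((https?://[^)]+)\)")

def extract_media_links(text):
    """Return (alt, url) pairs for every markdown media embed in text."""
    return MD_MEDIA.findall(text)

sample = "Here is the result: ![A red fox](https://example-link.com/fox.png)"
print(extract_media_links(sample))
# [('A red fox', 'https://example-link.com/fox.png')]
```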