GET /cloud/v3/inference/models/{model_id}
Get model from catalog
curl --request GET \
  --url https://api.gcore.com/cloud/v3/inference/models/{model_id} \
  --header 'Authorization: <api-key>'
{
  "category": "Text Classification",
  "default_flavor_name": "inference-16vcpu-232gib-1xh100-80gb",
  "description": "<string>",
  "developer": "Stability AI",
  "documentation_page": "/docs",
  "eula_url": "https://example.com/eula",
  "example_curl_request": "curl -X POST http://localhost:8080/predict -d '{\"data\": \"sample\"}'",
  "has_eula": true,
  "id": "3c90c3cc-0d44-4b50-8888-8dd25736052a",
  "image_registry_id": "123e4567-e89b-12d3-a456-426614174999",
  "image_url": "<string>",
  "inference_backend": "torch",
  "inference_frontend": "gradio",
  "model_id": "mistralai/Pixtral-12B-2409",
  "name": "<string>",
  "openai_compatibility": "full",
  "port": 123,
  "version": "v0.1"
}
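
The same request in Python, for reference. This is a minimal sketch that assumes the third-party requests library and uses placeholder values for the API key and model ID; it is not an official SDK snippet.

import requests

API_KEY = "1234$abcdef"  # placeholder; use your own Gcore API key
MODEL_ID = "3c90c3cc-0d44-4b50-8888-8dd25736052a"  # placeholder catalog model ID

# The Authorization header is the word "apikey", a single space, then the token.
response = requests.get(
    f"https://api.gcore.com/cloud/v3/inference/models/{MODEL_ID}",
    headers={"Authorization": f"apikey {API_KEY}"},
)
response.raise_for_status()

model = response.json()
print(model["name"], model["model_id"], model["default_flavor_name"])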

Authorizations

Authorization
string
header
required

API key for authentication. The header value must be the word apikey, followed by a single space and your token. Example: apikey 1234$abcdef

Path Parameters

model_id
string
required

Model ID

Response

200 - application/json

OK

category
string | null
required

Category of the model.

Example:

"Text Classification"

default_flavor_name
string | null
required

Default flavor for the model.

Example:

"inference-16vcpu-232gib-1xh100-80gb"

description
string
required

Description of the model.

developer
string | null
required

Developer of the model.

Example:

"Stability AI"

documentation_page
string | null
required

Path to the documentation page.

Example:

"/docs"

eula_url
string | null
required

URL to the EULA text.

Example:

"https://example.com/eula"

example_curl_request
string | null
required

Example curl request to the model.

Example:

"curl -X POST http://localhost:8080/predict -d '{\"data\": \"sample\"}'"

has_eula
boolean
required

Whether the model has an EULA.

id
string<uuid>
required

Model ID.

image_registry_id
string | null
required

Image registry ID of the model.

Example:

"123e4567-e89b-12d3-a456-426614174999"

image_url
string
required

Image URL of the model.

inference_backend
string | null
required

The underlying inference engine.

Example:

"torch"

inference_frontend
string | null
required

The model frontend type.

Example:

"gradio"

model_id
string | null
required

Model name used to perform the inference call.

Example:

"mistralai/Pixtral-12B-2409"

name
string
required

Name of the model.

openai_compatibility
string | null
required

OpenAI compatibility level.

Example:

"full"

port
integer
required

Port on which the model runs.

version
string | null
required

Version of the model.

Example:

"v0.1"