Inference APIs allow you to invoke AI models defined in your project’s manifest with less scaffolding.

We’re adding new APIs regularly as we develop alongside build partners. Let us know what would make the Functions SDK even more powerful for your next use case!

generateText

Invoke a generative AI model with an instruction and prompt, resulting in a text response.

generateText(
  modelName: string,
  instruction: string,
  prompt: string
): string
modelName (string, required): Internal name of your model, as defined in your manifest.

instruction (string, required): High-level instruction that guides how the prompt is processed.

prompt (string, required): Query to be answered within the context of the given instruction.
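A minimal sketch of calling generateText. The runtime normally provides this binding; the stub implementation and the model name "summarizer" below are assumptions for illustration only.

```typescript
// Hypothetical stub standing in for the runtime-provided generateText.
// In a deployed function, the SDK supplies the real implementation.
function generateText(modelName: string, instruction: string, prompt: string): string {
  return `[${modelName}] ${instruction} ${prompt}`;
}

// Example call: summarize a support ticket with a manifest model named "summarizer".
const summary: string = generateText(
  "summarizer",
  "Summarize the following support ticket in one sentence.",
  "The checkout button does nothing when clicked on mobile Safari."
);
console.log(summary);
```

The instruction sets the model's overall task, while the prompt carries the specific input to process.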

computeClassificationLabels

Invoke a fine-tuned classification model with a text input, resulting in an array of labels and probabilities.

computeClassificationLabels(
  modelName: string,
  text: string
): Map<string, number>
modelName (string, required): Internal name of your model, as defined in your manifest.

text (string, required): Text input to classify among the labels defined during the fine-tuning process.
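A sketch of consuming the returned label-to-probability map, here to pick the most likely label. The stub, its fixed probabilities, and the model name "ticket-classifier" are assumptions; a real call returns probabilities computed by your fine-tuned model.

```typescript
// Hypothetical stub for the runtime-provided computeClassificationLabels.
// Fixed probabilities for illustration only.
function computeClassificationLabels(modelName: string, text: string): Map<string, number> {
  return new Map([
    ["billing", 0.82],
    ["bug", 0.13],
    ["feature-request", 0.05],
  ]);
}

const labels = computeClassificationLabels(
  "ticket-classifier",
  "I was charged twice this month."
);

// Pick the label with the highest probability.
let topLabel = "";
let topScore = -Infinity;
for (const [label, score] of labels) {
  if (score > topScore) {
    topLabel = label;
    topScore = score;
  }
}
console.log(topLabel, topScore); // billing 0.82
```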

embedText

Invoke an embedding model with a text input, resulting in a vector embedding.

embedText(
  modelName: string,
  text: string
): number[]
modelName (string, required): Internal name of your model, as defined in your manifest.

text (string, required): Text input to embed.
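A common use of embeddings is comparing two texts by cosine similarity. The stub below (a toy three-dimensional vector) and the model name "embedder" are assumptions; a real embedding model returns a much longer vector.

```typescript
// Hypothetical stub for the runtime-provided embedText.
// Toy deterministic 3-dimensional "embedding" for illustration only.
function embedText(modelName: string, text: string): number[] {
  return [text.length % 5, text.includes("cat") ? 1 : 0, 1];
}

// Cosine similarity: dot product of the vectors over the product of their norms.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

const v1 = embedText("embedder", "cats are great");
const v2 = embedText("embedder", "I love my cat");
console.log(cosineSimilarity(v1, v2));
```

Similarity close to 1 means the embeddings point in nearly the same direction; values near 0 mean the texts are unrelated in the model's vector space.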