Models
Easily invoke AI models from your functions.
Hypermode’s Models API allows you to invoke AI models directly from your functions, both for models hosted at Hypermode, and for models hosted by external services.
Since many models have unique interfaces, the design of the Models API is extremely flexible. A common base class forms the core of the API, which extends to conform to any model’s required schema.
A separate library, models-as, contains both the base classes and pre-defined implementations for many commonly used models. You can either use one of the pre-defined classes, or create a custom class for any model you like by extending the base Model class.
Example project
For your reference, several complete examples for using the Models API are available on GitHub in the hypermodeinc/functions-as repository.
Each example demonstrates using different types of AI models for different purposes. However, the Models interface isn’t limited to these purposes. You can use it for any task that an AI model can perform.
Currently, the models interface doesn't support streaming data for either input or output. For example, you can't use it with models that process streaming audio or video, nor with models that produce streaming text output. We plan to address this limitation in a future release.
Import from the SDK
To begin, import the models namespace from the SDK. You'll also need to import one or more classes for the model you are working with. For example:
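A minimal sketch, assuming the @hypermode/functions-as SDK package and the OpenAI chat model class from the models-as library; the exact module paths are an assumption and may differ in your SDK version:

// Sketch only: package and module paths may differ in your SDK version.
import { models } from "@hypermode/functions-as";
import { OpenAIChatModel } from "@hypermode/models-as/models/openai/chat";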
Models APIs
The APIs in the models namespace are below, organized by category.
We’re constantly introducing new APIs through ongoing development with build partners. Let’s chat about what would make the Functions SDK even more powerful for your next use case!
Functions
getModel
Get a model instance by name and type.
models.getModel<T>(modelName: string): T
T: The type of model to return. This can be any class that extends the Model base class.
modelName: The name of the model to retrieve. This must match the name of a model defined in your project's manifest file.
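For example, a sketch of retrieving a model instance, assuming a hypothetical model named "text-generator" in the manifest and the OpenAIChatModel class imported earlier:

// "text-generator" is a hypothetical name; it must match a model entry
// in your project's manifest file.
const model = models.getModel<OpenAIChatModel>("text-generator");

Once retrieved, you call the instance's invoke method, described under the Model class below; the shape of its input and output depends on the model class you chose.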
Objects
Model
abstract class Model<TInput, TOutput> {
debug: boolean;
info: ModelInfo;
invoke(input: TInput): TOutput;
}
The base class for all models that Hypermode functions can invoke.
If you are implementing a custom model, you should extend this class.
You’ll also need classes to represent the input and output types for your model.
See the implementations of the pre-defined models in the models-as repository for examples.
TInput: The type of the input data for the model. This can be any type, including a custom type defined in your project. It should match the shape of the data expected by the model. It's usually a class.
TOutput: The type of the output data from the model. This can be any type, including a custom type defined in your project. It should match the shape of the data returned by the model. It's usually a class.
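For illustration, here's a rough sketch of a custom model with hypothetical input and output classes for an external text generation service. The class and field names are invented for this example, and depending on your SDK version the input and output classes may also need a JSON serialization decorator:

// Hypothetical input shape; some providers also expect the model
// identifier inside the request body (see ModelInfo below).
class MyModelInput {
  model: string = "";
  prompt: string = "";
}

// Hypothetical output shape returned by the provider.
class MyModelOutput {
  text: string = "";
}

// The custom model ties the input and output types together by
// extending the Model base class.
class MyCustomModel extends Model<MyModelInput, MyModelOutput> {}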
debug: A flag to enable debug mode for the model. When enabled, Hypermode automatically logs the full request and response data to the console. Model implementations can also use this flag to enable additional debug logging. Defaults to false.
info: Information about the model, set by the Hypermode Runtime when creating the instance. See the ModelInfo object for more information.
invoke(input): Invokes the model with input data and returns the output data.
ModelInfo
class ModelInfo {
readonly name: string;
readonly fullName: string;
}
Information about a model that's used to construct a Model instance. It's also available as a property on the Model class.
This class relays information from the Hypermode Runtime to the model implementation.
Generally, you don't need to create ModelInfo instances directly. However, if you are implementing a custom model, you may wish to use a property from this class, such as fullName, for model providers that require the model name in the input request body.
We may add additional properties to this class in the future, as needed.
name: The name of the model from the Hypermode manifest.
fullName: The full name or identifier of the model, as defined by the model provider.
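Continuing the hypothetical custom model sketch from the Model section, a provider that expects the model identifier inside the request body could use fullName in a helper method like this (createInput and the field names are illustrative, not part of the API):

class MyCustomModel extends Model<MyModelInput, MyModelOutput> {
  // Hypothetical convenience method that builds the request input.
  createInput(prompt: string): MyModelInput {
    const input = new MyModelInput();
    input.model = this.info.fullName; // the provider's model identifier
    input.prompt = prompt;
    return input;
  }
}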