In this quickstart, we'll show you how to set up Modus and its CLI, then build a simple app that fetches a random quote from an external API. You'll learn how to use the basic components of a Modus app and how to run it locally.

Prerequisites

  • Node.js - v22 or higher
  • Text editor - we recommend VS Code
  • Terminal - access Modus through a command-line interface (CLI)

Building your first Modus app

1. Install the Modus CLI

The Modus CLI provides a set of commands to help you create, build, and run your Modus apps. Install the CLI using npm.

npm install -g @hypermode/modus-cli

2. Initialize your Modus app

To create a new Modus app, run the following command in your terminal:

modus new

This command prompts you to choose between Go and AssemblyScript as the language for your app, then creates a new directory with the files and folders your app needs. It also asks whether you'd like to initialize a Git repository.

3. Build and run your app

To build and run your app, navigate to the app directory and run the following command:

modus dev

This command runs your app locally in development mode and provides you with a URL to access your app’s generated API.

4. Access your local endpoint

Once your app is running, you can access the graphical interface for your API at the URL shown in your terminal.

View endpoint: http://localhost:8686/explorer

The API Explorer interface allows you to interact with your app’s API and test your functions.

5. Add a connection

Modus is a secure-by-default framework. To connect to external services, you need to add a connection in your app manifest.

Add the following code into your modus.json manifest file:

modus.json
{
  "connections": {
    "zenquotes": {
      "type": "http",
      "baseUrl": "https://zenquotes.io/"
    }
  }
}

6. Add a model

Modus also supports AI models. You can define models in your modus.json file. Let's add a Meta Llama model:

"models": {
  "text-generator": {
    "sourceModel": "meta-llama/Llama-3.2-3B-Instruct",
    "provider": "hugging-face",
    "connection": "hypermode"
  }
},
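For context, here's how the two sections sit together: a minimal modus.json that combines the connection from the previous step with this model looks like the following (your generated manifest may contain additional fields):

```json
{
  "connections": {
    "zenquotes": {
      "type": "http",
      "baseUrl": "https://zenquotes.io/"
    }
  },
  "models": {
    "text-generator": {
      "sourceModel": "meta-llama/Llama-3.2-3B-Instruct",
      "provider": "hugging-face",
      "connection": "hypermode"
    }
  }
}
```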

7. Install the Hyp CLI and log in

Next, install the Hyp CLI. This allows you to access hosted models on the Hypermode platform.

npm install -g @hypermode/hyp-cli

You can now log in with:

hyp login

This links your project to the Hypermode platform, allowing you to use the hosted model in your Modus app.

8. Add a function with AI integration

Functions are the building blocks of your app. Let’s add a function that fetches a random quote from the ZenQuotes connection and uses AI to generate a summary for the quote.

Create a new file named quotes.go in the root directory with the following code:

quotes.go
package main

import (
  "errors"
  "fmt"
  "strings"

  "github.com/hypermodeinc/modus/sdk/go/pkg/http"
  "github.com/hypermodeinc/modus/sdk/go/pkg/models"
  "github.com/hypermodeinc/modus/sdk/go/pkg/models/openai"
)

type Quote struct {
  Quote   string `json:"q"`
  Author  string `json:"a"`
  Summary string `json:"summary,omitempty"`
}

const modelName = "text-generator"

// This function makes a request to an API that returns data in JSON
// format, and returns a single quote with an AI-generated summary.
func GetRandomQuote() (*Quote, error) {
  request := http.NewRequest("https://zenquotes.io/api/random")

  response, err := http.Fetch(request)
  if err != nil {
    return nil, err
  }
  if !response.Ok() {
    return nil, fmt.Errorf("failed to fetch quote. Received: %d %s", response.Status, response.StatusText)
  }

  // the API returns an array of quotes, but we only need the first one
  var quotes []Quote
  response.JSON(&quotes)
  if len(quotes) == 0 {
    return nil, errors.New("expected at least one quote in the response, but none were found")
  }

  // Get the first (and only) quote
  quote := quotes[0]

  // Generate AI summary for the quote
  summary, err := summarizeQuote(quote.Quote, quote.Author)
  if err != nil {
    fmt.Printf("Warning: failed to summarize quote by %s: %v\n", quote.Author, err)
    quote.Summary = "Summary unavailable"
  } else {
    quote.Summary = summary
  }

  return &quote, nil
}

// summarizeQuote uses the AI model to generate a concise summary of the quote
func summarizeQuote(quote, author string) (string, error) {
  model, err := models.GetModel[openai.ChatModel](modelName)
  if err != nil {
    return "", err
  }

  instruction := "Provide a brief, insightful summary that captures the essence and meaning of the quote in 1-2 sentences."
  prompt := fmt.Sprintf("Quote: \"%s\" - %s", quote, author)

  input, err := model.CreateInput(
    openai.NewSystemMessage(instruction),
    openai.NewUserMessage(prompt),
  )
  if err != nil {
    return "", err
  }

  // Set temperature for consistent but creative responses
  input.Temperature = 0.7

  output, err := model.Invoke(input)
  if err != nil {
    return "", err
  }

  return strings.TrimSpace(output.Choices[0].Message.Content), nil
}

9. Make your first AI call

Now that you’ve integrated the AI model, let’s test it! After adding your function, restart your development server:

modus dev

Navigate to the API Explorer at http://localhost:8686/explorer and you’ll see your randomQuote function available to test.

When you call the function, you’ll notice that the quote includes three fields:

  • quote: The original quote text
  • author: The author’s name
  • summary: An AI-generated summary that captures the essence of the quote

The AI model analyzes the quote and provides insightful context about its meaning, making your app more engaging and informative for users.

Try calling the function multiple times to see how the AI generates different summaries for various quotes!
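In the API Explorer, the query you run looks roughly like this (Modus derives the GraphQL schema from your Go code, so GetRandomQuote is exposed as randomQuote; field names follow the struct fields):

```graphql
query {
  randomQuote {
    quote
    author
    summary
  }
}
```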

10. Track local model inferences

When testing an AI app locally, Modus records each inference and its related metadata in the View Inferences tab of the API Explorer.

Local model tracing is only supported on Linux and macOS. Windows support is coming soon.

You can now see detailed information about each AI model call, including:

  • Input prompts sent to the model
  • Generated responses
  • Performance metrics like response time
  • Token usage and costs

For more inspiration, check out the Modus recipes.