Using DeepSeek
Use the DeepSeek-R1 Model with your Modus app
DeepSeek-R1 is an open source AI reasoning model that rivals the performance of frontier models such as OpenAI’s o1 in complex reasoning tasks like math and coding. Benefits of DeepSeek include:
- Performance: DeepSeek-R1 achieves results comparable to OpenAI’s o1 model on several benchmarks.
- Efficiency: The model uses significantly fewer parameters and therefore operates at a lower cost than competing frontier models.
- Open Source: The open source license allows both commercial and non-commercial usage of the model weights and associated code.
- Novel training approach: The research team developed DeepSeek-R1 through a multi-stage approach that combines reinforcement learning, fine-tuning, and data distillation.
- Distilled versions: The DeepSeek team released smaller, distilled models based on DeepSeek-R1 that offer high reasoning capabilities with fewer parameters.
In this guide, we review how to use the DeepSeek-R1 model in your Modus app.
Options for using DeepSeek with Modus
There are two options for using DeepSeek-R1 in your Modus app:
- Use the distilled DeepSeek-R1 model hosted by Hypermode. Hypermode hosts the distilled DeepSeek model based on Llama-3.1-8B and makes it available as a shared model, enabling Modus apps to use it in both local development environments and deployed applications.
- Use the DeepSeek Platform API with your Modus app. Access DeepSeek models hosted on the DeepSeek Platform by configuring a DeepSeek connection in your Modus app and using your DeepSeek API key.
Using the distilled DeepSeek model hosted by Hypermode
The open source DeepSeek-R1-Distill-Llama-8B model is available on Hypermode as a shared model, which means you can invoke it in a Modus app both in a local development environment and in an app deployed on Hypermode.
The DeepSeek-R1-Distill-Llama-8B model is a distilled version of DeepSeek-R1, created by fine-tuning the Llama-3.1-8B base model on samples generated by DeepSeek-R1. Distilled models offer similarly high reasoning capabilities with fewer parameters.
Create a Modus app
If you haven’t already, create a new Modus app. Skip this step if you already have a Modus app.
See the Modus Quickstart for more information about creating Modus projects.
Add the DeepSeek model to your app manifest
Update your Modus app’s modus.json manifest file to specify the DeepSeek-R1-Distill-Llama-8B model hosted on Hypermode.
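A minimal sketch of the manifest entry, assuming the model’s Hugging Face ID is deepseek-ai/DeepSeek-R1-Distill-Llama-8B and that Hypermode-hosted shared models use the hypermode connection:

```json
{
  "models": {
    "deepseek-reasoner": {
      "sourceModel": "deepseek-ai/DeepSeek-R1-Distill-Llama-8B",
      "provider": "hugging-face",
      "connection": "hypermode"
    }
  }
}
```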
Note that we named the model deepseek-reasoner in our app manifest; this is the name we use to access the model in our Modus function.
Use the Hyp CLI to sign in to Hypermode
To use Hypermode-hosted models in your local development environment, sign in to Hypermode with the hyp CLI. Install the hyp CLI if you haven’t already, then log in to Hypermode.
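Here’s a minimal sequence, assuming the Hyp CLI ships as the npm package @hypermode/hyp-cli:

```sh
# Install the Hyp CLI (skip if already installed)
npm install -g @hypermode/hyp-cli

# Sign in to Hypermode
hyp login
```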
If you’re signing in for the first time, Hypermode prompts you to create an account and specify an organization.
Write a function to invoke the model
You can now invoke the model in your Modus app’s functions using the Modus models interface.
Here we write a function that takes a prompt as input, invokes the DeepSeek model, and returns the generated text as output. At runtime this function becomes a GraphQL Query field in the GraphQL API generated by Modus.
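Here’s a sketch in Go, following the pattern from the Modus quickstart; the deepseek-reasoner name matches the model entry in the app manifest above:

```go
package main

import (
	"strings"

	"github.com/hypermodeinc/modus/sdk/go/pkg/models"
	"github.com/hypermodeinc/modus/sdk/go/pkg/models/openai"
)

// GenerateText invokes the DeepSeek model named in the app manifest and
// returns the generated text. Modus exposes it as a GraphQL query field.
func GenerateText(prompt string) (string, error) {
	// Look up the model by the name given in modus.json
	model, err := models.GetModel[openai.ChatModel]("deepseek-reasoner")
	if err != nil {
		return "", err
	}

	// Per DeepSeek's guidance for R1, send only a user message (no system prompt)
	input, err := model.CreateInput(
		openai.NewUserMessage(prompt),
	)
	if err != nil {
		return "", err
	}

	// DeepSeek recommends a temperature in the 0.5-0.7 range
	input.Temperature = 0.6

	output, err := model.Invoke(input)
	if err != nil {
		return "", err
	}

	return strings.TrimSpace(output.Choices[0].Message.Content), nil
}
```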
DeepSeek recommends not using a system prompt with DeepSeek-R1 and setting the temperature parameter in the range of 0.5-0.7.
Run your Modus app
Run your Modus app locally:
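```sh
modus dev
```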
This command compiles your Modus app and starts a local GraphQL API endpoint.
Query your function in the Modus API Explorer
Open the Modus API Explorer in your web browser at http://localhost:8686/explorer.
Add your prompt as an input argument for the generateText query field and select “Run” to invoke the DeepSeek model.
For mathematical problems, use a directive in your prompt such as: “Please reason step by step, and include your final answer within \boxed{}.”
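For example, a query against the generated API might look like this (the generateText field comes from the function sketched above; the prompt is illustrative):

```graphql
query {
  generateText(
    prompt: "Solve x^2 - 5x + 6 = 0. Please reason step by step, and include your final answer within \\boxed{}."
  )
}
```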
This example demonstrated how to use the distilled DeepSeek-R1 model hosted by Hypermode in a Modus app to create an endpoint that returns text generated by the DeepSeek model. More advanced use cases for leveraging the DeepSeek reasoning models in your Modus app include workflows like tool use / function calling, generating structured outputs, problem solving, code generation, and much more. Let us know how you’re leveraging DeepSeek with Modus.
Using the DeepSeek platform API with Modus
This option uses the DeepSeek models hosted on the DeepSeek Platform with your Modus app. You’ll need to create a DeepSeek Platform account and pay for your model usage.
Create a DeepSeek API key
Create an account with DeepSeek Platform.
Once you’ve signed in, select the “API keys” tab and “Create new API token” to generate your DeepSeek Platform API token.
Create a Modus app
If you haven’t already, create a new Modus app. Skip this step if you already have a Modus app.
See the Modus Quickstart for more information about creating Modus projects.
Define the model and connection in your app manifest
Update your Modus app’s modus.json app manifest file to include the DeepSeek model and a connection for the DeepSeek Platform API. Use deepseek-reasoner as the value for sourceModel for the DeepSeek-R1 reasoning model, or deepseek-chat for the DeepSeek-V3 model.
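A sketch of the manifest entries, assuming the connection is named deepseek (so the secret namespaces to MODUS_DEEPSEEK_API_TOKEN) and the DeepSeek Platform’s OpenAI-compatible chat completions endpoint:

```json
{
  "models": {
    "deepseek-reasoner": {
      "sourceModel": "deepseek-reasoner",
      "connection": "deepseek",
      "path": "chat/completions"
    }
  },
  "connections": {
    "deepseek": {
      "type": "http",
      "baseUrl": "https://api.deepseek.com/",
      "headers": {
        "Authorization": "Bearer {{API_TOKEN}}"
      }
    }
  }
}
```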
At query time, Modus replaces the {{API_TOKEN}} secret placeholder with the value of your MODUS_DEEPSEEK_API_TOKEN environment variable. Set this environment variable in the next step.
Create environment variable for your API token
Edit the .env.dev.local file to declare an environment variable for your DeepSeek Platform API key.
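For example (placeholder value shown):

```
MODUS_DEEPSEEK_API_TOKEN=your-deepseek-api-key-here
```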
Modus namespaces environment variables for secret placeholders with MODUS and your connection’s name from the app manifest. For more details on using secrets in Modus, refer to working locally with secrets.
Write a function to invoke the DeepSeek model
You can now invoke the model in your Modus app’s functions using the Modus models interface.
Here we write a function that takes a prompt as input, invokes the DeepSeek model, and returns the generated text as output. At runtime this function becomes a GraphQL Query field in the GraphQL API generated by Modus.
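The function is the same sketch as in the Hypermode-hosted example, since the manifest names the model deepseek-reasoner in both cases; this time Modus routes the invocation through the deepseek connection:

```go
package main

import (
	"strings"

	"github.com/hypermodeinc/modus/sdk/go/pkg/models"
	"github.com/hypermodeinc/modus/sdk/go/pkg/models/openai"
)

// GenerateText invokes the deepseek-reasoner model defined in the app
// manifest; Modus routes the request to the DeepSeek Platform API.
func GenerateText(prompt string) (string, error) {
	model, err := models.GetModel[openai.ChatModel]("deepseek-reasoner")
	if err != nil {
		return "", err
	}

	// No system prompt, per DeepSeek's guidance for R1
	input, err := model.CreateInput(
		openai.NewUserMessage(prompt),
	)
	if err != nil {
		return "", err
	}

	// Temperature in DeepSeek's recommended 0.5-0.7 range
	input.Temperature = 0.6

	output, err := model.Invoke(input)
	if err != nil {
		return "", err
	}

	return strings.TrimSpace(output.Choices[0].Message.Content), nil
}
```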
DeepSeek recommends not using a system prompt with DeepSeek-R1 and setting the temperature parameter in the range of 0.5-0.7.
Run your Modus app
Run your Modus app locally:
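```sh
modus dev
```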
This command compiles your Modus app and starts a local GraphQL API endpoint.
Query using the Modus API Explorer
Open the Modus API Explorer in your web browser at http://localhost:8686/explorer.
Add your prompt as an input argument for the generateText query field and select “Run” to invoke the DeepSeek model.
For mathematical problems, use a directive in your prompt such as: “Please reason step by step, and include your final answer within \boxed{}.”
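As before, an example query against the generated API (the generateText field assumes the function above; the prompt is illustrative):

```graphql
query {
  generateText(
    prompt: "What is the derivative of x^3? Please reason step by step, and include your final answer within \\boxed{}."
  )
}
```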
This example demonstrated how to use the DeepSeek Platform API in a Modus app to create an endpoint that returns text generated by the DeepSeek-R1 model. More advanced use cases for leveraging the DeepSeek reasoning models in your Modus app include workflows like tool use / function calling, generating structured outputs, problem solving, code generation, and much more. Let us know how you’re leveraging DeepSeek with Modus.