DeepSeek AI | How to Use and Install DeepSeek R1 Locally

Run DeepSeek R1 AI Locally on Windows/Linux: Install Ollama, download the model, and use offline. Integrate via API for apps. Step-by-step guide with code examples (Python/curl). Explore AI capabilities effortlessly!


In the world of artificial intelligence, DeepSeek R1 has emerged as a powerful model that provides cutting-edge AI capabilities for various applications. If you are looking to leverage the potential of DeepSeek R1 in your projects, you might want to run it locally on your machine. This guide will walk you through the steps to install and use DeepSeek R1 both on Windows and Linux, ensuring you can experience the AI model offline, anytime you need it.

Step 1: Install Ollama

Ollama is a platform that allows you to run AI models offline on your machine, which makes it an essential tool for running DeepSeek R1. Whether you are on Windows or Linux, the installation process is straightforward.

For Windows:

  1. Visit Ollama’s official website.
  2. Download the Windows executable (.exe) file from the website.
  3. Double-click the downloaded file to initiate the installation (no terminal or command prompt is needed here!).
  4. Follow the prompts to complete the installation.

For Linux:

Open your terminal and execute the following command; the script will automatically download and install Ollama on your system:

curl -fsSL https://ollama.com/install.sh | sh

Once Ollama is installed, you're ready to proceed to the next step.
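After installation, Ollama runs a local server that listens on port 11434 by default. As a quick sanity check, this short Python sketch (standard library only; it assumes the default port) reports whether the service is reachable:

```python
import urllib.request
import urllib.error

def is_ollama_up(base_url: str = "http://localhost:11434", timeout: float = 2.0) -> bool:
    """Return True if an Ollama server responds at base_url."""
    try:
        with urllib.request.urlopen(base_url, timeout=timeout) as resp:
            return 200 <= resp.status < 300
    except OSError:
        # Connection refused, timeout, etc. -- server not reachable
        return False

print("Ollama reachable:", is_ollama_up())
```

If this prints `False`, check that the Ollama service started correctly before continuing.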

Step 2: Download the DeepSeek R1 Model

With Ollama installed, it's time to download the DeepSeek R1 model onto your machine. This process only takes a few minutes, depending on your internet speed.

For both Windows and Linux:

  1. Open Command Prompt (on Windows) or Terminal (on Linux).
  2. Run the following command to pull the DeepSeek R1 model:

ollama pull deepseek-r1:1.5b

  3. Wait for the download to complete. This typically takes 1-5 minutes, depending on your internet connection.
💡
Explore more models and detailed information at: Ollama Model Library - Deepseek R1

Step 3: Run the DeepSeek R1 Model

Once the model is downloaded, you’re ready to start using it! You can easily run the DeepSeek R1 model via the Ollama platform.

For both Windows and Linux:

  1. Open Command Prompt (on Windows) or Terminal (on Linux).
  2. Enter the following command to start the DeepSeek R1 model:

ollama run deepseek-r1:1.5b

  3. You’re all set! You can now interact with the AI model directly from your terminal: ask questions, explore its features, and test its capabilities.

Step 4: Test DeepSeek R1

Once the model is running, you can ask it anything directly in the terminal. For example, you can type:

>>> "Explain quantum computing in simple terms"

The AI model will then respond with a clear and simple explanation. This makes it a versatile tool for learning, experimentation, and integrating AI into your personal projects.

Interacting with DeepSeek R1 via the Ollama API

If you want to integrate DeepSeek R1 into your application or interact with it programmatically, you can make use of Ollama's API. This allows you to send requests and receive responses from DeepSeek R1, making it easy to incorporate AI-powered features into your own software or service. Here’s how you can interact with the model through Ollama’s API:

1. Set Up Your API Environment

Ensure that Ollama is installed and set up on your system as outlined in the previous steps. You will be using the same DeepSeek R1 model, but instead of using the terminal, you'll make HTTP requests to interact with it.

2. Access the Ollama API

The Ollama API provides a simple endpoint to send requests and receive AI-generated responses. To interact with the API, you'll need to make HTTP POST requests to the appropriate URL.

3. Make a Request to the API

First, ensure the Ollama service is running in the background; on most installations it starts automatically. If it isn't running, you can start it with:

ollama serve

Note that ollama serve takes no model argument; the model is selected per request.

Now, you can send a request to the API. Use a tool like curl or a library like requests in Python to send a POST request. Here's an example using curl:

curl -X POST http://localhost:11434/api/generate \
-H "Content-Type: application/json" \
-d '{"model": "deepseek-r1:1.5b", "stream": false, "prompt": "Explain quantum computing in simple terms"}'

In this example, the input is the question “Explain quantum computing in simple terms,” but you can replace it with any query or input you'd like the model to respond to.

4. Parse the API Response

The API will respond with a JSON object containing the model’s answer. The response will look something like this:

{
  "response": "Quantum computing uses the principles of quantum mechanics to perform calculations. Unlike classical computers, which use bits to represent data as either 0 or 1, quantum computers use quantum bits (qubits) that can represent and store information in both 0 and 1 simultaneously, thanks to superposition.",
  ...
}

You can then extract the response field from the JSON object and display it in your application.
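Note that DeepSeek R1 is a reasoning model: its output typically wraps a chain-of-thought section in `<think>...</think>` tags inside the `response` field. If you only want the final answer, a small helper like this sketch (which assumes that tag format) can strip the reasoning out:

```python
import re

def strip_think(text: str) -> str:
    """Remove <think>...</think> reasoning blocks from an R1 response."""
    return re.sub(r"<think>.*?</think>", "", text, flags=re.DOTALL).strip()

raw = "<think>The user wants a short answer.</think>Qubits can be 0 and 1 at once."
print(strip_think(raw))  # -> Qubits can be 0 and 1 at once.
```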

💡
The generate endpoint also accepts optional request parameters; you can explore these advanced settings in the official API documentation to tailor the model's output to your needs.
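For instance, the generate endpoint accepts an options object for sampling parameters. This payload is a sketch (temperature and num_predict are documented Ollama options; check the API documentation for the full list) and can be sent with requests.post exactly like the example in the next section:

```python
payload = {
    "model": "deepseek-r1:1.5b",
    "prompt": "Explain quantum computing in simple terms",
    "stream": False,
    "options": {
        "temperature": 0.6,   # lower values give more deterministic output
        "num_predict": 256,   # cap the number of generated tokens
    },
}
```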

Example Code (Python)

Here’s a quick example of how to use Python’s requests library to interact with DeepSeek R1 via the API:

import requests

url = 'http://localhost:11434/api/generate'
data = {
    "model": "deepseek-r1",  # Use verified model name
    "prompt": "Explain quantum computing in simple terms",
    "stream": False  # Ensure responses are complete
}

response = requests.post(url, json=data)
if response.ok:
    print(response.json()["response"])
else:
    print("Error:", response.text)

This will send a request to the Ollama API, get a response from DeepSeek R1, and print the generated answer.
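If you set "stream": true instead, Ollama sends one JSON object per line (NDJSON) as tokens are generated, which lets you display output incrementally. Here is a sketch of consuming that stream with requests (it assumes a local server is running; the join_stream helper itself works on any iterable of NDJSON lines):

```python
import json
import requests

def join_stream(lines):
    """Concatenate the `response` fragments from NDJSON stream lines."""
    parts = []
    for line in lines:
        if not line:
            continue  # skip keep-alive blank lines
        chunk = json.loads(line)
        parts.append(chunk.get("response", ""))
        if chunk.get("done"):
            break  # final chunk signals the end of generation
    return "".join(parts)

def stream_generate(prompt, model="deepseek-r1:1.5b"):
    """Stream a completion from a local Ollama server and return the full text."""
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": model, "prompt": prompt, "stream": True},
        stream=True,
    )
    resp.raise_for_status()
    return join_stream(resp.iter_lines(decode_unicode=True))
```

Each streamed line looks like {"response": "Quantum", "done": false}, with a final line where done is true.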

Conclusion

Running DeepSeek R1 locally on your Windows or Linux machine is quick and easy, thanks to Ollama’s user-friendly platform. With just a few simple steps, you gain access to this advanced AI model offline, whenever you need it. Whether you are looking to explore AI concepts, enhance your research, or integrate the model into your own applications, Ollama with the DeepSeek R1 model provides a robust solution to get you started.