DeepSeek API - Chinese-Optimized Ultra-Low-Cost AI API

📋 Service Information

Provider: DeepSeek
Service Type: API Service
API Endpoint: https://api.deepseek.com
Free Tier: Trial Credits (¥5, 7-day validity, may vary)
API Compatibility: Fully compatible with OpenAI API


🎯 Service Overview

DeepSeek API provides powerful AI capabilities, fully compatible with OpenAI API format, priced 95%+ cheaper than GPT-4, making it ideal for developers on a budget who need high performance.

Key Advantages:

  • 💰 Ultra-low Price - 95%+ cheaper than GPT-4
  • 🔄 OpenAI Compatible - Seamless code migration
  • 🧠 Thinking Mode - Transparent reasoning process
  • 🆕 Latest V3.2 - Enhanced Agent capabilities
  • 🇨🇳 Chinese Optimized - Top-tier Chinese performance
  • 🎁 Trial Credits - New users receive ¥5 (7 days validity, may vary)

🚀 Quick Start

Prerequisites

Required:

  • ✅ Registered developer account
  • ✅ Completed real-name verification (required for API recharge)
  • ✅ API key created

For detailed steps, see: DeepSeek Registration Guide

5-Minute Quick Example

Using OpenAI SDK (Recommended)

Python
from openai import OpenAI

# Configure DeepSeek
client = OpenAI(
    api_key="YOUR_DEEPSEEK_API_KEY",
    base_url="https://api.deepseek.com"
)

# Use DeepSeek Chat
response = client.chat.completions.create(
    model="deepseek-chat",
    messages=[
        {"role": "system", "content": "You are a helpful assistant"},
        {"role": "user", "content": "Explain what deep learning is"}
    ]
)

print(response.choices[0].message.content)

🤖 Supported Models

Model List

Model ID          | Version | Context | Price (/million tokens)
deepseek-chat     | V3.2    | 128K    | $0.28 (in) / $0.42 (out)
deepseek-reasoner | V3.2    | 128K    | $0.28 (in) / $0.42 (out)

Notes:

  • deepseek-chat: Standard (non-thinking) mode; default output 4K tokens, max 8K
  • deepseek-reasoner: Thinking mode; default output 32K tokens, max 64K
  • Context caching is supported; on a cache hit, the input price drops to $0.028/M tokens

🔢 Trial Credits and Pricing

Trial Credits

Item       | Details
Amount     | ¥5 RMB (may vary by promotion)
Validity   | 7 days after registration
How to Get | Granted automatically after registration and verification

Pricing After Recharge

All models (V3.2):

  • Input (cache miss): $0.28 / million tokens
  • Input (cache hit): $0.028 / million tokens
  • Output: $0.42 / million tokens

Exchange reference: $1 ≈ ¥7.2
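The per-token prices above translate directly into a cost estimate. A minimal sketch with the rates copied from this page (check the official pricing page before relying on them):

```python
# DeepSeek V3.2 list prices in USD per million tokens (from the table above)
PRICE_IN_MISS = 0.28
PRICE_IN_HIT = 0.028
PRICE_OUT = 0.42
USD_TO_CNY = 7.2  # rough reference rate, not a live quote

def estimate_cost(input_tokens, output_tokens, cache_hit=False, in_cny=False):
    """Estimate the cost of one request from its token counts."""
    in_rate = PRICE_IN_HIT if cache_hit else PRICE_IN_MISS
    usd = (input_tokens * in_rate + output_tokens * PRICE_OUT) / 1_000_000
    return usd * USD_TO_CNY if in_cny else usd

# 1M input + 1M output tokens with no cache hit costs about $0.70
```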

Price Comparison

Model         | Input | Output | Relative Savings
DeepSeek V3.2 | $0.28 | $0.42  | Baseline
GPT-4 Turbo   | $10   | $30    | 36-71x
GPT-4o        | $5    | $15    | 18-36x
Claude 3.5    | $3    | $15    | 11-36x

📖 API Usage Examples

1. Basic Conversation

Python
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_DEEPSEEK_API_KEY",
    base_url="https://api.deepseek.com"
)

response = client.chat.completions.create(
    model="deepseek-chat",
    messages=[
        {"role": "system", "content": "You are a helpful AI assistant"},
        {"role": "user", "content": "What is machine learning?"}
    ]
)

print(response.choices[0].message.content)

2. Streaming Output

Python
stream = client.chat.completions.create(
    model="deepseek-chat",
    messages=[
        {"role": "user", "content": "Write an article about AI"}
    ],
    stream=True
)

for chunk in stream:
    if chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
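When you need the complete reply as well as live output, the delta fragments can be accumulated while streaming. A sketch using stand-in chunk objects (the real ones come from the stream=True call above):

```python
from types import SimpleNamespace

def collect_stream(chunks):
    """Join delta fragments from a streaming response into the full reply."""
    parts = []
    for chunk in chunks:
        delta = chunk.choices[0].delta.content
        if delta:  # the final chunk's delta.content is typically None
            parts.append(delta)
    return "".join(parts)

# Stand-in chunks for illustration; real ones come from the stream=True call
def fake_chunk(text):
    return SimpleNamespace(choices=[SimpleNamespace(delta=SimpleNamespace(content=text))])

full = collect_stream([fake_chunk("Hel"), fake_chunk("lo"), fake_chunk(None)])
# full == "Hello"
```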

3. Using the Reasoning Model

Python
# Use the DeepSeek reasoning model (thinking mode)
response = client.chat.completions.create(
    model="deepseek-reasoner",
    messages=[
        {
            "role": "user",
            "content": "There are 100 balls. Each time, take half and put back 1. Repeat this process. How many balls remain?"
        }
    ]
)

# View reasoning process
print("=== Thinking Process ===")
print(response.choices[0].message.reasoning_content)

# View final answer
print("\n=== Final Answer ===")
print(response.choices[0].message.content)
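One caveat when chaining reasoner calls: per the DeepSeek docs, reasoning_content should not be sent back as part of the message history in later requests. A sketch of building the next-turn assistant message, tested here with a stand-in response object:

```python
from types import SimpleNamespace

def next_turn_message(response):
    """Build the assistant message to append to history: keep only the final
    answer; reasoning_content must stay out of subsequent requests."""
    msg = response.choices[0].message
    return {"role": "assistant", "content": msg.content}

# Stand-in response object for illustration only
fake = SimpleNamespace(choices=[SimpleNamespace(message=SimpleNamespace(
    content="2 balls remain.", reasoning_content="Taking half and adding 1 converges..."))])
# next_turn_message(fake) == {"role": "assistant", "content": "2 balls remain."}
```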

4. Code Generation

Python
# Code tasks use deepseek-chat (the former deepseek-coder model was merged into it)
response = client.chat.completions.create(
    model="deepseek-chat",
    messages=[
        {
            "role": "system",
            "content": "You are a professional programming assistant"
        },
        {
            "role": "user",
            "content": "Implement quicksort algorithm in Python"
        }
    ]
)

print(response.choices[0].message.content)

5. cURL Example

Bash
curl https://api.deepseek.com/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_DEEPSEEK_API_KEY" \
  -d '{
    "model": "deepseek-chat",
    "messages": [
      {
        "role": "system",
        "content": "You are a helpful assistant"
      },
      {
        "role": "user",
        "content": "Hello, introduce yourself"
      }
    ]
  }'

6. Multi-turn Conversation

Python
messages = [
    {"role": "system", "content": "You are a Python programming assistant"}
]

# First round
messages.append({"role": "user", "content": "How to read a file?"})
response = client.chat.completions.create(
    model="deepseek-chat",
    messages=messages
)
messages.append({"role": "assistant", "content": response.choices[0].message.content})

# Second round
messages.append({"role": "user", "content": "What about writing?"})
response = client.chat.completions.create(
    model="deepseek-chat",
    messages=messages
)
print(response.choices[0].message.content)
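Each round appends to messages, so long sessions eventually press against the 128K context window. A common mitigation is to keep the system prompt plus only the most recent turns; a minimal sketch (the turn budget here is an arbitrary illustration):

```python
def trim_history(messages, max_turns=6):
    """Keep any system messages plus the most recent `max_turns` other messages."""
    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]
    return system + rest[-max_turns:]

history = [{"role": "system", "content": "You are a Python programming assistant"}]
for i in range(10):
    history.append({"role": "user", "content": f"question {i}"})

trimmed = trim_history(history, max_turns=4)
# trimmed keeps the system prompt plus the last 4 user turns (5 messages total)
```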

💡 Best Practices

✅ Recommended Practices

  1. Choose the Right Model

    # General and code tasks (the former deepseek-coder was merged into deepseek-chat)
    model = "deepseek-chat"
    
    # Complex reasoning - math, logic (thinking mode)
    model = "deepseek-reasoner"
  2. Error Handling and Retries

    import time
    from openai import OpenAI, APIError
    
    def call_with_retry(messages, max_retries=3):
        for i in range(max_retries):
            try:
                return client.chat.completions.create(
                    model="deepseek-chat",
                    messages=messages
                )
            except APIError as e:
                if i < max_retries - 1:
                    wait_time = 2 ** i
                    print(f"API error, waiting {wait_time} seconds...")
                    time.sleep(wait_time)
                else:
                    raise
  3. Monitor Usage

    # Visit https://platform.deepseek.com
    # Billing Center → View detailed usage statistics
  4. Securely Manage API Keys

    import os
    from dotenv import load_dotenv
    
    load_dotenv()
    api_key = os.getenv('DEEPSEEK_API_KEY')
    
    client = OpenAI(
        api_key=api_key,
        base_url="https://api.deepseek.com"
    )

🔧 Common Issues

1. How to Complete Real-name Verification?

Steps:

  1. Login to https://platform.deepseek.com
  2. Avatar → Account Settings → Real-name Verification
  3. Upload ID card (front and back)
  4. Wait for review (1-24 hours)

2. What to Do When Trial Credits Run Out?

Solution:

  • Can recharge to continue using
  • Pricing stays very low (roughly ¥2-3 per million tokens at the rates above)
  • Supports Alipay, WeChat Pay

3. What Payment Methods Are Supported?

Supported:

  • Alipay
  • WeChat Pay
  • Corporate transfer

4. How to Check Balance?

Method:

  • Login to console
  • Billing Center → Balance Inquiry
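The balance can also be queried programmatically. The endpoint path below (GET /user/balance) is assumed from the current API reference, so verify it before use; this sketch only builds the request, which lets it run without a key:

```python
import urllib.request

def balance_request(api_key):
    """Build (but do not send) the balance query. Endpoint path assumed from
    the current API reference - verify before relying on it."""
    return urllib.request.Request(
        "https://api.deepseek.com/user/balance",
        headers={"Authorization": f"Bearer {api_key}"},
        method="GET",
    )

req = balance_request("YOUR_DEEPSEEK_API_KEY")
# To actually send it (requires a valid key):
# with urllib.request.urlopen(req) as resp:
#     print(resp.read().decode())
```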

📊 Performance Optimization

1. Choose Appropriate Model

Python
def select_model(task_type):
    if task_type == "reasoning":
        return "deepseek-reasoner"  # Thinking mode for math and logic
    else:
        return "deepseek-chat"  # General and code tasks - cheapest

2. Result Caching

Python
import hashlib
import json

cache = {}

def cached_completion(model, messages):
    key = hashlib.md5(
        json.dumps({"model": model, "messages": messages}).encode()
    ).hexdigest()
    
    if key in cache:
        return cache[key]
    
    response = client.chat.completions.create(
        model=model,
        messages=messages
    )
    
    cache[key] = response
    return response
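The dict above grows without bound, and its hash depends on dict serialization order. A bounded variant with sort_keys and LRU eviction keeps memory flat; the 128-entry cap is an arbitrary default:

```python
import hashlib
import json
from collections import OrderedDict

class LRUCache:
    """Small least-recently-used cache for completion responses."""

    def __init__(self, max_entries=128):
        self.max_entries = max_entries
        self._data = OrderedDict()

    @staticmethod
    def key_for(model, messages):
        # sort_keys makes the hash independent of dict key order
        payload = json.dumps({"model": model, "messages": messages}, sort_keys=True)
        return hashlib.md5(payload.encode()).hexdigest()

    def get(self, key):
        if key in self._data:
            self._data.move_to_end(key)  # mark as recently used
            return self._data[key]
        return None

    def put(self, key, value):
        self._data[key] = value
        self._data.move_to_end(key)
        if len(self._data) > self.max_entries:
            self._data.popitem(last=False)  # evict the least recently used entry

def cached_completion_lru(client, cache, model, messages):
    """Same idea as cached_completion above, backed by the bounded cache."""
    key = LRUCache.key_for(model, messages)
    hit = cache.get(key)
    if hit is not None:
        return hit
    response = client.chat.completions.create(model=model, messages=messages)
    cache.put(key, response)
    return response
```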


🌟 Practical Cases

Case 1: Academic Paper Assistant

Python
def analyze_paper(paper_text):
    """Analyze academic paper"""
    response = client.chat.completions.create(
        model="deepseek-chat",
        messages=[
            {
                "role": "system",
                "content": "You are an academic paper analysis expert"
            },
            {
                "role": "user",
                "content": f"Please analyze the core contributions and innovations of the following paper:\n\n{paper_text}"
            }
        ]
    )
    return response.choices[0].message.content

Case 2: Code Review Assistant

Python
def code_review(code):
    """Code review and optimization suggestions"""
    response = client.chat.completions.create(
        model="deepseek-chat",  # code tasks (former deepseek-coder merged in)
        messages=[
            {
                "role": "system",
                "content": "You are a code review expert"
            },
            {
                "role": "user",
                "content": f"Please review the following code:\n\n```python\n{code}\n```"
            }
        ]
    )
    return response.choices[0].message.content

Case 3: Math Problem Solver

Python
def solve_math_problem(problem):
    """Use the deepseek-reasoner thinking model to solve math problems"""
    response = client.chat.completions.create(
        model="deepseek-reasoner",
        messages=[
            {
                "role": "user",
                "content": f"Please solve the following math problem: {problem}"
            }
        ]
    )
    
    print("=== Thinking Process ===")
    print(response.choices[0].message.reasoning_content)
    print("\n=== Final Answer ===")
    print(response.choices[0].message.content)

Service Provider: DeepSeek
