Hands-On Guide to the OpenAI API

Overview and Background

The OpenAI API gives developers access to powerful AI capabilities, letting you integrate the intelligence of the GPT family of models into your own applications. Whether you are building a chatbot, a content-generation tool, or an intelligent analysis system, the OpenAI API provides a solid technical foundation.

This guide takes a hands-on approach, systematically covering how to call the OpenAI API, best practices, and solutions to common problems.

graph TB
    subgraph OpenAI API architecture
        A[User application] --> B[OpenAI SDK]
        B --> C[API Gateway]
        C --> D[GPT-4o]
        C --> E[GPT-4o-mini]
        C --> F[o1-preview]
        C --> G[Other models]
        
        D --> H[Response handling]
        E --> H
        F --> H
        G --> H
        
        H --> A
    end

    style A fill:#e1f5fe
    style B fill:#fff3e0
    style C fill:#f3e5f5
    style D fill:#c8e6c9
    style E fill:#c8e6c9
    style F fill:#c8e6c9
    style G fill:#c8e6c9
    style H fill:#fff9c4

What You Will Learn

  • ✅ The complete API call workflow (basics to advanced)
  • ✅ How to use the Assistant API
  • ✅ Error handling and retry mechanisms
  • ✅ Cost optimization strategies and price comparison
  • ✅ 3 complete project case studies
  • ✅ Security best practices

Model Selection and Price Comparison

| Model | Best for | Input price | Output price | Notes |
|---|---|---|---|---|
| gpt-4o | Complex reasoning, multimodal | $2.5/1M tokens | $10/1M tokens | Most capable |
| gpt-4o-mini | Everyday tasks | $0.15/1M tokens | $0.6/1M tokens | Best value |
| o1-preview | Deep reasoning | $15/1M tokens | $60/1M tokens | Strongest reasoning |
| o1-mini | Math, coding | $3/1M tokens | $12/1M tokens | Fast reasoning |
| gpt-4-turbo | Long-context processing | $10/1M tokens | $30/1M tokens | 128K context |
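
The per-token prices above translate directly into a per-request cost estimate. A minimal sketch (prices hard-coded from the table above; they change over time, so treat them as illustrative):

```python
# Approximate request cost in USD from token counts.
# Prices are per 1M tokens, taken from the table above; adjust if they change.
PRICES = {
    "gpt-4o":      (2.50, 10.00),
    "gpt-4o-mini": (0.15, 0.60),
    "o1-preview":  (15.00, 60.00),
    "o1-mini":     (3.00, 12.00),
    "gpt-4-turbo": (10.00, 30.00),
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimate the USD cost of one request from its token counts."""
    in_price, out_price = PRICES[model]
    return (input_tokens * in_price + output_tokens * out_price) / 1_000_000

# Example: 2,000 prompt tokens + 500 completion tokens on gpt-4o-mini
print(f"${estimate_cost('gpt-4o-mini', 2000, 500):.6f}")
```

The actual token counts come back in `response.usage` after each call, so the same function can also track spend after the fact.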

Model Selection Tips

Simple classification/summarization → gpt-4o-mini (cheapest)
Complex reasoning tasks → gpt-4o (balanced)
Math/coding problems → o1-mini (specialized)
Tasks that need thinking time → o1-preview (deepest)

Environment Setup

Installing the SDK

pip install openai tiktoken python-dotenv

Configuring the API Key

After obtaining an API key, store it in environment variables:

# Create a .env file
cat > .env << 'ENVEOF'
OPENAI_API_KEY=sk-your-api-key-here
OPENAI_ORG_ID=org-your-org-id  # optional
ENVEOF

# Add it to .gitignore
echo ".env" >> .gitignore

Load it safely in code:

import os
from dotenv import load_dotenv
from openai import OpenAI

load_dotenv()
client = OpenAI(
    api_key=os.getenv('OPENAI_API_KEY'),
    organization=os.getenv('OPENAI_ORG_ID')  # optional
)

Hands-On Workflow

graph TD
    A[Environment setup] --> B[Install SDK]
    B --> C[Configure API key]
    C --> D[Basic call test]
    D --> E{Use case?}
    
    E -->|Simple chat| F[Chat Completions]
    E -->|Streaming output| G[Stream Mode]
    E -->|Multi-turn chat| H[Session management]
    E -->|File processing| I[Assistant API]
    
    F --> J[Error handling]
    G --> J
    H --> J
    I --> J
    
    J --> K[Cost optimization]
    K --> L[Production deployment]

    style A fill:#fff9c4
    style E fill:#e1f5fe
    style J fill:#ffcdd2
    style L fill:#c8e6c9
sequenceDiagram
    participant App as Application
    participant SDK as OpenAI SDK
    participant API as OpenAI API
    participant Model as GPT model

    Note over App,Model: Basic call flow
    App->>SDK: client.chat.completions.create()
    SDK->>API: HTTPS POST request
    API->>Model: Model inference
    Model-->>API: Generated result
    API-->>SDK: JSON response
    SDK-->>App: Return response object

    Note over App,Model: Streaming flow
    App->>SDK: stream=True
    SDK->>API: Open streaming connection
    loop Continuous push
        API-->>SDK: chunk data
        SDK-->>App: Incremental content
    end
    API-->>SDK: [DONE] marker
    SDK-->>App: Done

Step 1: Basic Text Generation

1.1 Simple Chat Call

from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "You are a senior software architect."},
        {"role": "user", "content": "Explain the core concepts of microservice architecture."}
    ]
)

print(response.choices[0].message.content)
print(f"Token usage: {response.usage}")

1.2 Streaming Output

For long generations, use streaming to improve the user experience:

def stream_chat(prompt: str, model: str = "gpt-4o"):
    """Streaming output example."""
    stream = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        stream=True
    )
    
    full_content = ""
    for chunk in stream:
        if chunk.choices[0].delta.content is not None:
            content = chunk.choices[0].delta.content
            print(content, end="", flush=True)
            full_content += content
    
    print()  # newline
    return full_content

# Usage
result = stream_chat("Write an article about the development of artificial intelligence")

1.3 Parameter Control

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "user", "content": "Generate 10 startup ideas"}
    ],
    temperature=0.8,       # creativity (0-2)
    max_tokens=1000,       # maximum output length
    top_p=0.9,             # nucleus sampling
    frequency_penalty=0.5, # reduce repetition
    presence_penalty=0.3,  # encourage new topics
    stop=["---", "END"]    # stop sequences
)

Step 2: Multi-Turn Conversation Management

2.1 A Conversation Session Class

from dataclasses import dataclass, field
from typing import List, Dict
import json

@dataclass
class ChatSession:
    """Complete conversation session management."""
    system_prompt: str = "You are a helpful AI assistant."
    model: str = "gpt-4o-mini"
    messages: List[Dict] = field(default_factory=list)
    max_history: int = 20  # keep the 20 most recent messages
    
    def __post_init__(self):
        self.messages = [{"role": "system", "content": self.system_prompt}]
    
    def chat(self, user_input: str) -> str:
        """Send a message and get the reply."""
        self.messages.append({"role": "user", "content": user_input})
        
        response = client.chat.completions.create(
            model=self.model,
            messages=self.messages
        )
        
        assistant_message = response.choices[0].message.content
        self.messages.append({"role": "assistant", "content": assistant_message})
        
        # Cap history length (always keep the system message)
        if len(self.messages) > self.max_history + 1:
            self.messages = [self.messages[0]] + self.messages[-(self.max_history):]
        
        return assistant_message
    
    def clear(self):
        """Clear the conversation history."""
        self.messages = [{"role": "system", "content": self.system_prompt}]
    
    def export(self) -> str:
        """Export the conversation."""
        return json.dumps(self.messages, ensure_ascii=False, indent=2)
    
    def import_session(self, history: str):
        """Import a conversation."""
        self.messages = json.loads(history)

# Usage example
session = ChatSession(
    system_prompt="You are a seasoned Python developer who excels at answering programming questions.",
    model="gpt-4o-mini"
)

print(session.chat("How do I implement the singleton pattern?"))
print(session.chat("What about a thread-safe singleton?"))
print(f"Conversation length: {len(session.messages)}")

2.2 Token Counting and Limits

import tiktoken

def count_tokens(messages: List[Dict], model: str = "gpt-4o") -> int:
    """Estimate the token count of a message list.

    The per-message overhead below is an approximation; the exact
    numbers vary slightly between model families.
    """
    encoding = tiktoken.encoding_for_model(model)
    
    # Fixed overhead per message
    tokens_per_message = 4  # <|start|>role<|end|>content<|end|>
    tokens_per_name = 1     # the name field, if present
    
    total_tokens = 0
    for message in messages:
        total_tokens += tokens_per_message
        for key, value in message.items():
            total_tokens += len(encoding.encode(str(value)))
            if key == "name":
                total_tokens += tokens_per_name
    
    total_tokens += 2  # end-of-conversation marker
    return total_tokens

# Usage
messages = [
    {"role": "system", "content": "You are an AI assistant."},
    {"role": "user", "content": "Hello"}
]
print(f"Estimated token count: {count_tokens(messages)}")
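
When tiktoken is unavailable or too heavy for a hot path, a rough character-based estimate can serve as a cheap pre-check. This is a heuristic sketch, not an exact count; roughly 4 characters per token is a common rule of thumb for English text:

```python
def rough_token_estimate(text: str, chars_per_token: float = 4.0) -> int:
    """Very rough token estimate: ~4 characters per token for English text.
    (CJK text is denser, closer to 1-2 characters per token, so adjust.)"""
    return max(1, round(len(text) / chars_per_token))

# Usage: quick budget check before running a precise tiktoken count
prompt = "Summarize the following document in three bullet points."
print(rough_token_estimate(prompt))
```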

Step 3: Advanced Features

3.1 Function Calling

import json

# Define the available tools
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for a given city",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {
                        "type": "string",
                        "description": "City name, e.g. Beijing, Shanghai"
                    },
                    "unit": {
                        "type": "string",
                        "enum": ["celsius", "fahrenheit"],
                        "description": "Temperature unit"
                    }
                },
                "required": ["city"]
            }
        }
    },
    {
        "type": "function",
        "function": {
            "name": "get_current_time",
            "description": "Get the current time",
            "parameters": {
                "type": "object",
                "properties": {
                    "timezone": {
                        "type": "string",
                        "description": "Time zone, e.g. Asia/Shanghai"
                    }
                }
            }
        }
    }
]

# Implement the tool functions
def get_weather(city: str, unit: str = "celsius") -> dict:
    """Mock weather API."""
    # In a real application, call an actual weather API here
    weather_data = {
        "Beijing": {"temp": 22, "condition": "sunny"},
        "Shanghai": {"temp": 26, "condition": "cloudy"},
    }
    data = weather_data.get(city, {"temp": 20, "condition": "unknown"})
    return {
        "city": city,
        "temperature": data["temp"],
        "unit": unit,
        "condition": data["condition"]
    }

def get_current_time(timezone: str = "Asia/Shanghai") -> str:
    """Get the current time."""
    from datetime import datetime
    from zoneinfo import ZoneInfo  # stdlib since Python 3.9
    return datetime.now(ZoneInfo(timezone)).strftime("%Y-%m-%d %H:%M:%S")

# Function dispatch table
function_map = {
    "get_weather": get_weather,
    "get_current_time": get_current_time
}

# Call flow
def chat_with_tools(user_input: str) -> str:
    """Chat with tool calling."""
    messages = [{"role": "user", "content": user_input}]
    
    # First call: may trigger tool use
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=messages,
        tools=tools
    )
    
    message = response.choices[0].message
    
    # Check whether tools need to be called
    if message.tool_calls:
        # Execute the tools
        tool_results = []
        for tool_call in message.tool_calls:
            function_name = tool_call.function.name
            arguments = json.loads(tool_call.function.arguments)
            
            result = function_map[function_name](**arguments)
            tool_results.append({
                "tool_call_id": tool_call.id,
                "role": "tool",
                "name": function_name,
                "content": json.dumps(result, ensure_ascii=False)
            })
        
        # Append the tool results and call again
        messages.append(message)
        messages.extend(tool_results)
        
        response = client.chat.completions.create(
            model="gpt-4o",
            messages=messages
        )
        
        return response.choices[0].message.content
    
    return message.content

# Usage
print(chat_with_tools("What's the weather in Beijing right now? And what time is it?"))

3.2 Image Understanding (Vision API)

def analyze_image(image_url: str, question: str) -> str:
    """Analyze the contents of an image."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": question},
                    {
                        "type": "image_url",
                        "image_url": {
                            "url": image_url,
                            "detail": "high"  # low/auto/high
                        }
                    }
                ]
            }
        ],
        max_tokens=1000
    )
    return response.choices[0].message.content

# Usage
result = analyze_image(
    "https://example.com/screenshot.png",
    "What UI problems does this web page have? Give concrete improvement suggestions."
)

# Local images
import base64
def analyze_local_image(image_path: str, question: str) -> str:
    """Analyze a local image."""
    with open(image_path, "rb") as f:
        image_data = base64.b64encode(f.read()).decode('utf-8')
    
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": question},
                    {
                        "type": "image_url",
                        "image_url": {
                            "url": f"data:image/jpeg;base64,{image_data}"
                        }
                    }
                ]
            }
        ]
    )
    return response.choices[0].message.content

3.3 Structured JSON Output

from pydantic import BaseModel
from typing import List, Optional

class Person(BaseModel):
    name: str
    age: Optional[int] = None
    occupation: str

class Article(BaseModel):
    title: str
    author: Person
    tags: List[str]
    summary: str
    key_points: List[str]

# Use response_format
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "user", "content": """
Analyze the following article and extract structured information:

[article content...]

Return JSON containing: title, author, tags, summary, key_points
        """}
    ],
    response_format={"type": "json_object"}
)

import json
data = json.loads(response.choices[0].message.content)
article = Article(**data)
print(article.model_dump_json(indent=2))
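
Note that json_object mode guarantees syntactically valid JSON, not that the expected keys are present, so it can help to validate the parsed dict before handing it to the Pydantic model. A minimal stdlib-only sketch (the key list mirrors the prompt above; the sample reply is hand-written, standing in for model output):

```python
import json

REQUIRED_KEYS = {"title", "author", "tags", "summary", "key_points"}

def parse_structured_reply(raw: str) -> dict:
    """Parse a json_object reply and check that all expected keys exist."""
    data = json.loads(raw)  # raises json.JSONDecodeError on malformed JSON
    missing = REQUIRED_KEYS - data.keys()
    if missing:
        raise ValueError(f"model reply is missing keys: {sorted(missing)}")
    return data

# Usage with a hand-written reply standing in for the model output
reply = '{"title": "t", "author": {"name": "a", "occupation": "dev"}, "tags": [], "summary": "s", "key_points": []}'
print(parse_structured_reply(reply)["title"])
```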

Step 4: The Assistant API

4.1 Creating an Assistant

def create_assistant():
    """Create a specialized assistant."""
    assistant = client.beta.assistants.create(
        name="Code Review Assistant",
        instructions="""You are a senior code review expert.
        
Your responsibilities:
1. Review code quality (style, performance, security)
2. Find potential bugs
3. Offer improvement suggestions

Review standards:
- Style follows PEP8 (Python) or the relevant standard
- For performance, focus on time/space complexity
- For security, focus on injection, sensitive data, etc.
        """,
        model="gpt-4o",
        tools=[{"type": "code_interpreter"}]
    )
    return assistant

# Create a thread and chat
def chat_with_assistant(assistant_id: str, user_message: str) -> str:
    """Chat with the assistant."""
    # Create a thread
    thread = client.beta.threads.create()
    
    # Add the message
    client.beta.threads.messages.create(
        thread_id=thread.id,
        role="user",
        content=user_message
    )
    
    # Run the assistant
    run = client.beta.threads.runs.create_and_poll(
        thread_id=thread.id,
        assistant_id=assistant_id
    )
    
    # Fetch the reply
    messages = client.beta.threads.messages.list(
        thread_id=thread.id
    )
    
    return messages.data[0].content[0].text.value

# Usage
assistant = create_assistant()
result = chat_with_assistant(
    assistant.id,
    "Please review this Python code and suggest improvements:\n```python\ndef calc(x): return x*x\n```"
)

Step 5: Error Handling and Retries

import time
import logging
from functools import wraps
from openai import (
    APIError, 
    RateLimitError, 
    APIConnectionError,
    AuthenticationError,
    BadRequestError
)

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

def retry_on_error(max_retries: int = 3, backoff_factor: float = 1.0):
    """Retry decorator for API calls."""
    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            last_exception = None
            for attempt in range(max_retries):
                try:
                    return func(*args, **kwargs)
                except RateLimitError as e:
                    last_exception = e
                    wait_time = backoff_factor * (2 ** attempt)
                    logger.warning(f"Rate limited, retrying in {wait_time}s...")
                    time.sleep(wait_time)
                except APIConnectionError as e:
                    last_exception = e
                    logger.warning(f"Connection error, retry attempt {attempt + 1}...")
                    time.sleep(1)
                except AuthenticationError:
                    logger.error("Authentication failed; check your API key")
                    raise
                except BadRequestError as e:
                    logger.error(f"Bad request: {e}")
                    raise
                except APIError as e:
                    logger.error(f"API error: {e}")
                    last_exception = e
            
            raise Exception(f"Max retries reached: {last_exception}")
        return wrapper
    return decorator

@retry_on_error(max_retries=3, backoff_factor=2.0)
def safe_chat(messages: List[Dict], model: str = "gpt-4o-mini") -> str:
    """Chat call with retries."""
    response = client.chat.completions.create(
        model=model,
        messages=messages
    )
    return response.choices[0].message.content
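
The wait times produced by backoff_factor * 2**attempt grow exponentially; adding random jitter (an optional refinement, not part of the decorator above) helps avoid many clients retrying in lockstep. A small sketch of the schedule on its own:

```python
import random

def backoff_schedule(max_retries: int, backoff_factor: float, jitter: float = 0.0) -> list:
    """Wait times before each retry: backoff_factor * 2**attempt, plus
    up to `jitter` seconds of random noise per attempt."""
    return [
        backoff_factor * (2 ** attempt) + random.uniform(0, jitter)
        for attempt in range(max_retries)
    ]

# Without jitter, backoff_factor=2.0 and 3 retries wait 2s, 4s, 8s
print(backoff_schedule(3, 2.0))
```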

Three Complete Project Case Studies

Case 1: Intelligent Customer Service Bot

import json
from typing import Dict, List
from dataclasses import dataclass, field

@dataclass
class CustomerServiceBot:
    """Intelligent customer service bot."""
    company_name: str
    knowledge_base: Dict = field(default_factory=dict)
    conversation_history: List[Dict] = field(default_factory=list)
    
    def __post_init__(self):
        self.system_prompt = f"""You are a customer service representative for {self.company_name}.

Your responsibilities:
1. Answer customer questions in a friendly, professional manner
2. Provide accurate product information
3. Handle complaints and suggestions

Company knowledge base:
{json.dumps(self.knowledge_base, ensure_ascii=False, indent=2)}

Answering principles:
- First confirm you understand the customer's need
- Provide accurate, useful information
- If you cannot answer, say so honestly and offer alternatives
- Stay polite and patient
"""
    
    def chat(self, user_input: str) -> str:
        """Handle a user message."""
        messages = [
            {"role": "system", "content": self.system_prompt}
        ] + self.conversation_history + [
            {"role": "user", "content": user_input}
        ]
        
        response = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=messages,
            temperature=0.7
        )
        
        reply = response.choices[0].message.content
        
        # Save to conversation history
        self.conversation_history.extend([
            {"role": "user", "content": user_input},
            {"role": "assistant", "content": reply}
        ])
        
        # Cap the history length
        if len(self.conversation_history) > 20:
            self.conversation_history = self.conversation_history[-20:]
        
        return reply
    
    def analyze_sentiment(self) -> Dict:
        """Analyze conversation sentiment."""
        if not self.conversation_history:
            return {"sentiment": "neutral", "confidence": 0}
        
        response = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[
                {"role": "system", "content": "Analyze the customer sentiment in the following conversation and return JSON."},
                {"role": "user", "content": json.dumps(self.conversation_history, ensure_ascii=False)}
            ],
            response_format={"type": "json_object"}
        )
        
        return json.loads(response.choices[0].message.content)

# Usage example
knowledge = {
    "products": ["AI Assistant", "Data Analytics Platform", "Automation Tools"],
    "pricing": {"AI Assistant": "¥99/month", "Data Analytics Platform": "¥299/month"},
    "refund_policy": "No-questions-asked refund within 7 days"
}

bot = CustomerServiceBot(
    company_name="Open Source Think Tank",
    knowledge_base=knowledge
)

print(bot.chat("What products do you offer?"))
print(bot.chat("How much is the AI Assistant?"))
print(bot.analyze_sentiment())

Case 2: Code Review System

import json
from typing import Dict, List

class CodeReviewer:
    """AI code review system."""
    
    def __init__(self):
        self.system_prompt = """You are a senior code review expert.

Review dimensions:
1. Code style (naming, formatting, comments)
2. Potential bugs (null references, boundary conditions, type errors)
3. Performance (time/space complexity, unnecessary work)
4. Security (injection, sensitive data, permissions)
5. Maintainability (structure, decoupling, testability)

Output JSON in this shape:
{
  "score": 8,
  "issues": [
    {"line": 10, "severity": "high", "type": "security", "message": "...", "suggestion": "..."}
  ],
  "summary": "...",
  "recommendations": ["..."]
}
"""
    
    def review_file(self, file_path: str, language: str = "python") -> Dict:
        """Review a single file."""
        with open(file_path, 'r') as f:
            code = f.read()
        
        return self.review_code(code, language)
    
    def review_code(self, code: str, language: str = "python") -> Dict:
        """Review code."""
        response = client.chat.completions.create(
            model="gpt-4o",
            messages=[
                {"role": "system", "content": self.system_prompt},
                {"role": "user", "content": f"""
Language: {language}

Code:
```{language}
{code}
```

Please review the code and return the result as JSON.
"""}
            ],
            response_format={"type": "json_object"}
        )
        
        return json.loads(response.choices[0].message.content)
    
    def review_diff(self, diff_content: str) -> Dict:
        """Review a Git diff."""
        response = client.chat.completions.create(
            model="gpt-4o",
            messages=[
                {"role": "system", "content": self.system_prompt},
                {"role": "user", "content": f"""
Below is a Git diff; please review these changes:

{diff_content}

Focus on:
1. The quality of the added code
2. Whether new problems are introduced
3. Whether the changes are reasonable

Return the result as JSON.
"""}
            ],
            response_format={"type": "json_object"}
        )
        
        return json.loads(response.choices[0].message.content)
    
    def generate_report(self, results: List[Dict]) -> str:
        """Generate a review report."""
        response = client.chat.completions.create(
            model="gpt-4o",
            messages=[
                {"role": "system", "content": "Generate a readable report from the review results."},
                {"role": "user", "content": json.dumps(results, ensure_ascii=False)}
            ]
        )
        return response.choices[0].message.content

# Usage
reviewer = CodeReviewer()
result = reviewer.review_file("src/main.py")
print(f"Score: {result['score']}/10")
print(f"Issues found: {len(result['issues'])}")

Case 3: Data Analysis Assistant

import pandas as pd
import json
from typing import Dict, Any

class DataAnalysisAssistant:
    """AI data analysis assistant."""
    
    def __init__(self):
        self.conversation = []
    
    def analyze_dataframe(self, df: pd.DataFrame, question: str) -> str:
        """Analyze a DataFrame."""
        # Build a data overview
        overview = {
            "shape": df.shape,
            "columns": list(df.columns),
            "dtypes": df.dtypes.astype(str).to_dict(),
            "head": df.head(5).to_dict(orient='records'),
            "describe": df.describe().to_dict()
        }
        
        response = client.chat.completions.create(
            model="gpt-4o",
            messages=[
                {"role": "system", "content": """You are a data analysis expert.
Answer the user's question based on the data overview provided.
If calculations are needed, provide Python code."""},
                {"role": "user", "content": f"""
Data overview:
{json.dumps(overview, ensure_ascii=False, indent=2, default=str)}

Question: {question}
                """}
            ]
        )
        
        return response.choices[0].message.content
    
    def suggest_visualization(self, df: pd.DataFrame) -> Dict:
        """Recommend visualization options."""
        overview = {
            "shape": df.shape,
            "columns": list(df.columns),
            "dtypes": df.dtypes.astype(str).to_dict()
        }
        
        response = client.chat.completions.create(
            model="gpt-4o",
            messages=[
                {"role": "system", "content": "Recommend suitable visualizations based on the data characteristics; return JSON."},
                {"role": "user", "content": json.dumps(overview, ensure_ascii=False)}
            ],
            response_format={"type": "json_object"}
        )
        
        return json.loads(response.choices[0].message.content)
    
    def generate_code(self, df: pd.DataFrame, task: str) -> str:
        """Generate analysis code."""
        columns_info = {
            "columns": list(df.columns),
            "dtypes": df.dtypes.astype(str).to_dict()
        }
        
        response = client.chat.completions.create(
            model="gpt-4o",
            messages=[
                {"role": "system", "content": "Generate pandas analysis code. Return only code, no explanation."},
                {"role": "user", "content": f"""
Data columns: {json.dumps(columns_info)}

Task: {task}

Please generate Python code.
                """}
            ]
        )
        
        return response.choices[0].message.content

# Usage
df = pd.read_csv("sales.csv")
assistant = DataAnalysisAssistant()

# Analysis
print(assistant.analyze_dataframe(df, "Which month had the highest sales?"))

# Visualization recommendation
viz = assistant.suggest_visualization(df)
print(f"Recommended charts: {viz}")

# Code generation
code = assistant.generate_code(df, "Compute monthly sales and plot a line chart")
# Caution: executing model-generated code is risky; review it before running
exec(code)

Cost Optimization Strategies

1. Model Selection

def smart_model_selection(task_type: str, complexity: str) -> str:
    """Pick a model based on task type and complexity."""
    model_map = {
        ("simple", "low"): "gpt-4o-mini",      # simple classification, summarization
        ("simple", "medium"): "gpt-4o-mini",   # basic Q&A
        ("complex", "medium"): "gpt-4o",       # code generation
        ("complex", "high"): "gpt-4o",         # complex reasoning
        ("coding", "low"): "gpt-4o-mini",      # simple code
        ("coding", "high"): "o1-mini",         # complex algorithms
    }
    return model_map.get((task_type, complexity), "gpt-4o-mini")

2. Caching

import hashlib
import json

class APICache:
    """Cache for API calls."""
    
    def __init__(self):
        self.cache = {}
    
    def _hash_key(self, messages: List[Dict], model: str) -> str:
        """Build the cache key."""
        content = json.dumps(messages, sort_keys=True) + model
        return hashlib.md5(content.encode()).hexdigest()
    
    def get(self, messages: List[Dict], model: str) -> str:
        """Look up a cached response."""
        key = self._hash_key(messages, model)
        return self.cache.get(key)
    
    def set(self, messages: List[Dict], model: str, response: str):
        """Store a response."""
        key = self._hash_key(messages, model)
        self.cache[key] = response

cache = APICache()

def cached_chat(messages: List[Dict], model: str = "gpt-4o-mini") -> str:
    """Chat with caching."""
    # Check the cache
    cached = cache.get(messages, model)
    if cached:
        return cached
    
    # Call the API
    response = client.chat.completions.create(
        model=model,
        messages=messages
    )
    
    result = response.choices[0].message.content
    cache.set(messages, model, result)
    
    return result

3. Token Optimization

def optimize_messages(messages: List[Dict]) -> List[Dict]:
    """Trim messages to reduce token usage (simple heuristic)."""
    # Always keep the system messages
    system_messages = [m for m in messages if m["role"] == "system"]
    
    # Compress the conversation history
    history_messages = [m for m in messages if m["role"] != "system"]
    
    # System messages plus the 10 most recent turns
    optimized = system_messages + history_messages[-10:]
    
    return optimized

Security Best Practices

1. Input Validation

import re
from typing import Optional

def sanitize_input(user_input: str) -> Optional[str]:
    """Sanitize user input."""
    # Length check
    if len(user_input) > 10000:
        return None
    
    # Check for prompt-injection patterns
    forbidden_patterns = [
        "ignore previous instructions",
        "ignore all previous",
        "system:",
        "disregard",
    ]
    
    lower_input = user_input.lower()
    for pattern in forbidden_patterns:
        if pattern in lower_input:
            return None
    
    return user_input.strip()

2. Sensitive Information Filtering

def filter_sensitive_info(text: str) -> str:
    """Redact sensitive information."""
    # API keys
    text = re.sub(r'sk-[a-zA-Z0-9]{48}', '[API_KEY_REDACTED]', text)
    
    # Email addresses
    text = re.sub(r'\b[\w.-]+@[\w.-]+\.\w+\b', '[EMAIL_REDACTED]', text)
    
    # Phone numbers (11-digit mobile numbers)
    text = re.sub(r'\b\d{11}\b', '[PHONE_REDACTED]', text)
    
    # Credit card numbers
    text = re.sub(r'\b\d{16}\b', '[CARD_REDACTED]', text)
    
    return text

3. Content Moderation

def moderate_content(text: str) -> bool:
    """Content moderation; returns True if the text passes."""
    response = client.moderations.create(
        model="omni-moderation-latest",
        input=text
    )
    
    result = response.results[0]
    return not result.flagged

FAQ

Q1: How do I handle timeouts?

import signal
from contextlib import contextmanager

@contextmanager
def timeout(seconds: int):
    """Timeout context manager (Unix only; must run in the main thread)."""
    def timeout_handler(signum, frame):
        raise TimeoutError("Request timed out")
    
    signal.signal(signal.SIGALRM, timeout_handler)
    signal.alarm(seconds)
    try:
        yield
    finally:
        signal.alarm(0)

# Usage
try:
    with timeout(30):
        response = client.chat.completions.create(...)
except TimeoutError:
    print("Request timed out; please retry later")

Note: the official Python SDK also accepts a `timeout` argument on the client (e.g. `OpenAI(timeout=30.0)`), which is portable and usually preferable to signal-based timeouts.

Q2: How do I stream output to a web frontend?

from fastapi import FastAPI
from fastapi.responses import StreamingResponse

app = FastAPI()

@app.post("/chat/stream")
async def chat_stream(prompt: str):
    async def generate():
        # Note: iterating a synchronous stream inside an async handler
        # blocks the event loop; in production prefer AsyncOpenAI with await.
        stream = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": prompt}],
            stream=True
        )
        
        for chunk in stream:
            if chunk.choices[0].delta.content:
                yield f"data: {chunk.choices[0].delta.content}\n\n"
        
        yield "data: [DONE]\n\n"
    
    return StreamingResponse(generate(), media_type="text/event-stream")
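
One subtlety of Server-Sent Events: a payload that itself contains newlines must be split across multiple data: lines within a single event, or the browser's EventSource parser will mangle it. A small stdlib-only formatter sketching the SSE framing rules (independent of any web framework):

```python
from typing import Optional

def format_sse(payload: str, event: Optional[str] = None) -> str:
    """Frame a payload as one SSE event: each line of the payload becomes
    its own `data:` line, and a blank line terminates the event."""
    lines = []
    if event:
        lines.append(f"event: {event}")
    lines.extend(f"data: {line}" for line in payload.split("\n"))
    return "\n".join(lines) + "\n\n"

print(format_sse("hello\nworld"))
```

In the endpoint above, yielding format_sse(chunk_text) instead of the bare f-string keeps multi-line model output intact on the client side.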

Q3: How do I batch-process a large number of requests?

import asyncio
from openai import AsyncOpenAI

async_client = AsyncOpenAI()

async def process_batch(prompts: List[str], batch_size: int = 10) -> List[str]:
    """Process prompts concurrently in batches."""
    results = []
    
    for i in range(0, len(prompts), batch_size):
        batch = prompts[i:i+batch_size]
        tasks = [
            async_client.chat.completions.create(
                model="gpt-4o-mini",
                messages=[{"role": "user", "content": p}]
            )
            for p in batch
        ]
        
        responses = await asyncio.gather(*tasks)
        results.extend([r.choices[0].message.content for r in responses])
    
    return results

# Usage
prompts = ["Question 1", "Question 2", "Question 3", ...]
results = asyncio.run(process_batch(prompts))
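
The batching above rests on slicing the prompt list into fixed-size chunks, which also caps concurrency at batch_size in-flight requests. Isolated as a pure helper (easy to unit-test without touching the API):

```python
from typing import List, TypeVar

T = TypeVar("T")

def chunked(items: List[T], size: int) -> List[List[T]]:
    """Split a list into consecutive chunks of at most `size` elements."""
    if size < 1:
        raise ValueError("size must be >= 1")
    return [items[i:i + size] for i in range(0, len(items), size)]

print(chunked([1, 2, 3, 4, 5], 2))  # two pairs plus a one-element remainder
```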

Summary

With this guide you have covered the core ways to use the OpenAI API:

  • ✅ Model selection and cost optimization
  • ✅ Basic text generation and streaming output
  • ✅ Multi-turn conversation management
  • ✅ Advanced features: function calling, image understanding, and more
  • ✅ Using the Assistant API
  • ✅ Error handling and retry mechanisms
  • ✅ 3 complete project case studies
  • ✅ Security protections
