Chatbot Demo

This is a chatbot example I built on top of a large language model.

Features

  • Multi-turn conversation
  • Context memory
  • Streaming output
  • Error handling

Tech Stack

  • Python 3.8+
  • OpenAI API / Anthropic API (the Python examples below use the legacy openai<1.0 SDK interface)
  • FastAPI (optional, for the web service)

Basic Version

Simple chatbot

import openai

class SimpleChatbot:
    def __init__(self, api_key):
        self.api_key = api_key
        self.messages = []
        openai.api_key = api_key
    
    def chat(self, user_input):
        self.messages.append({"role": "user", "content": user_input})
        
        response = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            messages=self.messages
        )
        
        assistant_message = response.choices[0].message.content
        self.messages.append({"role": "assistant", "content": assistant_message})
        
        return assistant_message

# Usage example
bot = SimpleChatbot(api_key="your-api-key")
print(bot.chat("Hello"))
print(bot.chat("Tell me about Python"))

Streaming Version

import openai

class StreamingChatbot:
    def __init__(self, api_key):
        self.api_key = api_key
        self.messages = []
        openai.api_key = api_key
    
    def chat_stream(self, user_input):
        self.messages.append({"role": "user", "content": user_input})
        
        stream = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            messages=self.messages,
            stream=True
        )
        
        full_response = ""
        for chunk in stream:
            if chunk.choices[0].delta.get("content"):
                content = chunk.choices[0].delta["content"]
                print(content, end="", flush=True)
                full_response += content
        
        print()  # newline after the streamed output
        self.messages.append({"role": "assistant", "content": full_response})
        return full_response

# Usage example
bot = StreamingChatbot(api_key="your-api-key")
bot.chat_stream("Write a Python function that computes the Fibonacci sequence")

Web Service Version

FastAPI implementation

from fastapi import FastAPI, HTTPException
from pydantic import BaseModel
from typing import List, Dict
import openai

# The legacy SDK reads the API key from the OPENAI_API_KEY environment variable;
# set it before starting the server, or assign openai.api_key explicitly.

app = FastAPI()

class ChatMessage(BaseModel):
    role: str
    content: str

class ChatRequest(BaseModel):
    messages: List[ChatMessage]

class ChatResponse(BaseModel):
    message: str

@app.post("/chat", response_model=ChatResponse)
async def chat(request: ChatRequest):
    try:
        messages = [{"role": msg.role, "content": msg.content} for msg in request.messages]
        
        response = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            messages=messages
        )
        
        return ChatResponse(
            message=response.choices[0].message.content
        )
    except Exception as e:
        raise HTTPException(status_code=500, detail=str(e))

# Run: uvicorn main:app --reload
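The /chat endpoint above returns the full reply in one response. To expose the streaming behaviour of StreamingChatbot over HTTP, a sketch added to the same main.py using FastAPI's StreamingResponse could look like this; the /chat/stream path is an assumption, and the call still uses the legacy openai<1.0 interface.

from fastapi.responses import StreamingResponse

@app.post("/chat/stream")
async def chat_stream(request: ChatRequest):
    messages = [{"role": msg.role, "content": msg.content} for msg in request.messages]

    def token_generator():
        # Yield each content delta as soon as the API produces it
        stream = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            messages=messages,
            stream=True
        )
        for chunk in stream:
            content = chunk.choices[0].delta.get("content")
            if content:
                yield content

    return StreamingResponse(token_generator(), media_type="text/plain")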

Frontend example (HTML + JavaScript)

<!DOCTYPE html>
<html>
<head>
    <title>AI Chatbot</title>
    <style>
        #chat-container {
            max-width: 800px;
            margin: 0 auto;
            padding: 20px;
        }
        #messages {
            height: 400px;
            overflow-y: auto;
            border: 1px solid #ccc;
            padding: 10px;
            margin-bottom: 10px;
        }
        .message {
            margin-bottom: 10px;
        }
        .user {
            text-align: right;
            color: blue;
        }
        .assistant {
            text-align: left;
            color: green;
        }
    </style>
</head>
<body>
    <div id="chat-container">
        <div id="messages"></div>
        <input type="text" id="user-input" placeholder="Type a message...">
        <button onclick="sendMessage()">Send</button>
    </div>

    <script>
        const messages = [];
        
        async function sendMessage() {
            const input = document.getElementById('user-input');
            const userMessage = input.value;
            input.value = '';
            
            addMessage('user', userMessage);
            messages.push({role: 'user', content: userMessage});
            
            try {
                const response = await fetch('/chat', {
                    method: 'POST',
                    headers: {'Content-Type': 'application/json'},
                    body: JSON.stringify({messages: messages})
                });
                
                const data = await response.json();
                addMessage('assistant', data.message);
                messages.push({role: 'assistant', content: data.message});
            } catch (error) {
                console.error('Error:', error);
            }
        }
        
        function addMessage(role, content) {
            const messagesDiv = document.getElementById('messages');
            const messageDiv = document.createElement('div');
            messageDiv.className = `message ${role}`;
            messageDiv.textContent = content;
            messagesDiv.appendChild(messageDiv);
            messagesDiv.scrollTop = messagesDiv.scrollHeight;
        }
    </script>
</body>
</html>

Advanced Features

Context Management

class ContextAwareChatbot:
    def __init__(self, api_key, max_context=10):
        self.api_key = api_key
        self.messages = []
        self.max_context = max_context
        openai.api_key = api_key
    
    def chat(self, user_input):
        self.messages.append({"role": "user", "content": user_input})
        
        # Limit the context length
        if len(self.messages) > self.max_context * 2:
            if self.messages[0]["role"] == "system":
                # Keep the system message plus the most recent turns
                self.messages = (
                    [self.messages[0]] +
                    self.messages[-(self.max_context * 2 - 1):]
                )
            else:
                # No system message: just keep the most recent turns
                self.messages = self.messages[-(self.max_context * 2):]
        
        response = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            messages=self.messages
        )
        
        assistant_message = response.choices[0].message.content
        self.messages.append({"role": "assistant", "content": assistant_message})
        
        return assistant_message
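Trimming by message count is simple, but it ignores how long each message actually is. Here is a sketch of token-aware trimming using the tiktoken library; the 4,000-token budget is an arbitrary assumption, and the count ignores the small per-message overhead the chat format adds.

import tiktoken

def trim_messages_by_tokens(messages, model="gpt-3.5-turbo", max_tokens=4000):
    encoding = tiktoken.encoding_for_model(model)

    def count_tokens(msgs):
        # Rough estimate based on message content only
        return sum(len(encoding.encode(m["content"])) for m in msgs)

    trimmed = list(messages)
    while len(trimmed) > 1 and count_tokens(trimmed) > max_tokens:
        # Preserve a leading system message and drop the oldest turn after it
        drop_index = 1 if trimmed[0]["role"] == "system" else 0
        trimmed.pop(drop_index)

    return trimmed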

Error Handling and Retries

import time
import random

class RobustChatbot:
    def __init__(self, api_key, max_retries=3):
        self.api_key = api_key
        self.messages = []
        self.max_retries = max_retries
        openai.api_key = api_key
    
    def chat(self, user_input):
        self.messages.append({"role": "user", "content": user_input})
        
        for attempt in range(self.max_retries):
            try:
                response = openai.ChatCompletion.create(
                    model="gpt-3.5-turbo",
                    messages=self.messages
                )
                
                assistant_message = response.choices[0].message.content
                self.messages.append({
                    "role": "assistant",
                    "content": assistant_message
                })
                
                return assistant_message
                
            except Exception:
                if attempt == self.max_retries - 1:
                    raise
                # Exponential backoff with jitter before the next attempt
                wait_time = (2 ** attempt) + random.uniform(0, 1)
                print(f"Retrying {attempt + 1}/{self.max_retries}...")
                time.sleep(wait_time)

How to Run

Install dependencies

The Python examples use the legacy ChatCompletion interface, so pin the OpenAI SDK below version 1.0:

pip install "openai<1.0" fastapi uvicorn

Run the basic version

python simple_chatbot.py

Run the web service

uvicorn main:app --reload

Then open http://localhost:8000/docs for the interactive API documentation. To use the HTML frontend, serve it from the same app (see the sketch below) or from any server on the same origin, so the fetch('/chat') call reaches the backend.
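A minimal sketch for serving the frontend directly from the FastAPI app, assuming the page above is saved as index.html next to main.py:

from fastapi.responses import FileResponse

@app.get("/")
async def index():
    # Serve the chat page from the same origin as the /chat endpoint
    return FileResponse("index.html")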

Extension Ideas

  1. Add memory: store conversation history in a vector database (a minimal sketch follows this list)
  2. Multi-model support: allow switching between different LLMs
  3. Plugin system: support extending features through plugins
  4. User authentication: add user login and session management
  5. Analytics: log conversation data for later analysis
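As a starting point for idea 1, here is a minimal in-memory sketch of embedding-based recall: each stored message is embedded with the legacy openai<1.0 Embedding endpoint and retrieved by cosine similarity. The text-embedding-ada-002 model name and the top_k value are assumptions, and a real deployment would replace the Python list with a vector database.

import numpy as np
import openai

class ConversationMemory:
    def __init__(self, embedding_model="text-embedding-ada-002"):
        self.embedding_model = embedding_model
        self.entries = []  # list of (text, embedding) pairs

    def _embed(self, text):
        response = openai.Embedding.create(model=self.embedding_model, input=text)
        return np.array(response["data"][0]["embedding"])

    def add(self, text):
        self.entries.append((text, self._embed(text)))

    def search(self, query, top_k=3):
        if not self.entries:
            return []
        query_vec = self._embed(query)
        scored = []
        for text, vec in self.entries:
            # Cosine similarity between the query and each stored message
            similarity = float(np.dot(query_vec, vec) /
                               (np.linalg.norm(query_vec) * np.linalg.norm(vec)))
            scored.append((similarity, text))
        scored.sort(key=lambda item: item[0], reverse=True)
        return [text for _, text in scored[:top_k]]

# Usage example: prepend the most relevant past messages to the prompt
# memory = ConversationMemory()
# memory.add("The user's name is Alice and she prefers concise answers.")
# relevant = memory.search("What does the user prefer?")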