This is the complete engineering architecture design for a project at the ~3000-line scale (the full 3000 lines of code cannot fit in a single answer).
The design covers:
- 🧠 LangGraph Agent architecture
- 🧰 Auto Tool Learning
- 🔁 Self Reflection
- 🧠 Graph + Vector Memory
- 💻 Code Execution Tool
- 🌐 Browser Tool
- 🎯 Autonomous Task Agent
- 💬 Chat UI
- 🚀 Local deployment on an RTX 4090

Capabilities comparable to:
- AutoGPT
- OpenDevin
- Devika
1. Final System Architecture

```
USER
  │
  ▼
Chat Interface
  │
  ▼
Agent Orchestrator
  │
  ┌──────────────────┼──────────────────┐
  ▼                  ▼                  ▼
Task Planner       Memory         Tool Selector
  │                  │                  │
  ▼                  ▼                  ▼
LangGraph DAG     Vector DB      Tool Registry
  │                  │                  │
  ▼                  ▼                  ▼
Subtask Executor  Graph Memory   Tool Executor
  │
  ▼
Reflection Agent
  │
  ▼
Final Response
```
2. Complete Project Structure

```
ai_autonomous_agent/
│
├── agent/
│   ├── graph_agent.py
│   ├── planner.py
│   ├── executor.py
│   ├── autonomous.py
│   └── reflection.py
│
├── memory/
│   ├── vector_store.py
│   ├── graph_memory.py
│   └── memory_manager.py
│
├── tools/
│   ├── tool_registry.py
│   ├── code_executor.py
│   ├── browser_tool.py
│   └── file_tool.py
│
├── rag/
│   └── retriever.py
│
├── models/
│   └── llm_loader.py
│
├── server/
│   └── api.py
│
├── ui/
│   └── chat_app.py
│
└── config/
    └── settings.py
```
3. LLM Loader (local model support)

models/llm_loader.py

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

MODEL_NAME = "Qwen/Qwen2.5-7B-Instruct"

def load_llm():
    tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_NAME,
        device_map="auto",        # place layers on the available GPU(s)
        load_in_4bit=True,        # 4-bit quantization via bitsandbytes
        torch_dtype=torch.float16,
    )
    return model, tokenizer
```

With 4-bit quantization, a 7B model fits comfortably in the 4090's 24 GB of VRAM.
4. Tool Registry (Auto Tool Learning)

tools/tool_registry.py

```python
class ToolRegistry:
    def __init__(self):
        self.tools = {}

    def register(self, name, func, description):
        # The description is what gets shown to the LLM for tool selection.
        self.tools[name] = {
            "func": func,
            "description": description,
        }

    def list_tools(self):
        return self.tools

    def execute(self, tool_name, params):
        if tool_name not in self.tools:
            raise ValueError(f"Tool not found: {tool_name}")
        return self.tools[tool_name]["func"](**params)
```
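A quick usage sketch of the registry above (the `shout` tool is a made-up example, and the class is repeated here so the snippet runs standalone):

```python
class ToolRegistry:
    def __init__(self):
        self.tools = {}

    def register(self, name, func, description):
        self.tools[name] = {"func": func, "description": description}

    def list_tools(self):
        return self.tools

    def execute(self, tool_name, params):
        if tool_name not in self.tools:
            raise ValueError(f"Tool not found: {tool_name}")
        return self.tools[tool_name]["func"](**params)

registry = ToolRegistry()
registry.register(
    "shout",
    lambda text: text.upper() + "!",
    "Uppercase the input and add an exclamation mark",
)
print(registry.execute("shout", {"text": "hello"}))  # HELLO!
```

The same `register` call is how the code executor and browser tool below would be plugged in.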
5. Code Execution Tool

tools/code_executor.py

```python
import os
import subprocess
import sys
import tempfile

def execute_python(code, timeout=30):
    # Write the generated code to a temp file and run it in a subprocess.
    with tempfile.NamedTemporaryFile(suffix=".py", delete=False) as f:
        f.write(code.encode())
        path = f.name
    try:
        result = subprocess.run(
            [sys.executable, path],   # same interpreter as the host process
            capture_output=True,
            text=True,
            timeout=timeout,          # keep runaway code from hanging the agent
        )
        # Return stderr too, so the agent can see tracebacks and self-correct.
        return result.stdout + result.stderr
    finally:
        os.unlink(path)               # clean up the temp file
```

This is the core of OpenDevin-style capability.
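A minimal round-trip check of the sandbox (the helper is repeated so the snippet runs standalone), including the failure case that motivates capturing stderr:

```python
import os
import subprocess
import sys
import tempfile

def execute_python(code, timeout=30):
    with tempfile.NamedTemporaryFile(suffix=".py", delete=False) as f:
        f.write(code.encode())
        path = f.name
    try:
        result = subprocess.run(
            [sys.executable, path],
            capture_output=True, text=True, timeout=timeout,
        )
        return result.stdout + result.stderr
    finally:
        os.unlink(path)

print(execute_python("print(sum(range(10)))"))   # 45
# A crashing program returns its traceback instead of silence,
# which is what lets the reflection step attempt a fix.
print("ZeroDivisionError" in execute_python("print(1/0)"))  # True
```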
6. Browser Tool

tools/browser_tool.py

```python
import requests
from bs4 import BeautifulSoup

def search_web(query):
    # Scrape DuckDuckGo's HTML endpoint (no API key required).
    r = requests.get(
        "https://duckduckgo.com/html/",
        params={"q": query},          # let requests URL-encode the query
        headers={"User-Agent": "Mozilla/5.0"},
        timeout=10,
    )
    soup = BeautifulSoup(r.text, "html.parser")
    return [a.get_text(strip=True) for a in soup.select(".result__a")[:5]]
```
7. Vector Memory (RAG)

memory/vector_store.py

```python
from sentence_transformers import SentenceTransformer
import faiss

class VectorStore:
    def __init__(self):
        self.model = SentenceTransformer("BAAI/bge-small-en")
        self.index = faiss.IndexFlatL2(384)   # bge-small-en embeds to 384 dims
        self.docs = []

    def add(self, text):
        emb = self.model.encode([text])
        self.index.add(emb)
        self.docs.append(text)

    def search(self, query, k=3):
        emb = self.model.encode([query])
        _, indices = self.index.search(emb, k)
        # faiss returns -1 for missing neighbors when the index holds < k docs
        return [self.docs[i] for i in indices[0] if i != -1]
```
8. Graph Memory

memory/graph_memory.py

```python
import networkx as nx

class GraphMemory:
    def __init__(self):
        self.graph = nx.DiGraph()

    def add_relation(self, a, b):
        self.graph.add_edge(a, b)

    def query(self, node):
        if node not in self.graph:
            return []
        return list(self.graph.neighbors(node))
```
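To see what the graph layer buys you without pulling in networkx, here is a dependency-free sketch of the same interface using a plain adjacency dict (illustrative only; the project code above uses `networkx`):

```python
class TinyGraphMemory:
    """Minimal stand-in for GraphMemory: directed adjacency lists."""

    def __init__(self):
        self.edges = {}

    def add_relation(self, a, b):
        self.edges.setdefault(a, []).append(b)

    def query(self, node):
        # Unknown nodes simply have no neighbors.
        return self.edges.get(node, [])

mem = TinyGraphMemory()
mem.add_relation("alice", "likes Python")
mem.add_relation("alice", "works on agents")
print(mem.query("alice"))  # ['likes Python', 'works on agents']
```

The vector store answers "what is similar to this text", while the graph answers "what is connected to this entity"; the MemoryManager below composes both.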
9. Memory Manager

memory/memory_manager.py

```python
from .vector_store import VectorStore
from .graph_memory import GraphMemory

class MemoryManager:
    def __init__(self):
        self.vector = VectorStore()
        self.graph = GraphMemory()

    def remember(self, user, msg):
        self.vector.add(msg)
        self.graph.add_relation(user, msg)

    def retrieve(self, query):
        return self.vector.search(query)
```
10. Planner (Task Decomposition)

agent/planner.py

```python
def plan_task(llm, task):
    # `llm` is any callable mapping a prompt string to a completion string.
    prompt = f"""
Break the task into steps.

Task:
{task}
"""
    return llm(prompt)
```
11. Executor (Tool Execution)

agent/executor.py

```python
import json

class Executor:
    def __init__(self, tool_registry):
        self.tools = tool_registry

    def run(self, tool_call):
        # `tool_call` is the JSON string emitted by the LLM.
        data = json.loads(tool_call)
        tool = data["tool"]
        params = data["params"]
        return self.tools.execute(tool, params)
```
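A standalone round-trip showing the JSON contract the executor expects (registry, executor, and the `add` tool are repeated or made up here so the snippet runs on its own):

```python
import json

class ToolRegistry:
    def __init__(self):
        self.tools = {}

    def register(self, name, func, description):
        self.tools[name] = {"func": func, "description": description}

    def execute(self, tool_name, params):
        return self.tools[tool_name]["func"](**params)

class Executor:
    def __init__(self, tool_registry):
        self.tools = tool_registry

    def run(self, tool_call):
        data = json.loads(tool_call)
        return self.tools.execute(data["tool"], data["params"])

registry = ToolRegistry()
registry.register("add", lambda a, b: a + b, "Add two numbers")
executor = Executor(registry)

# This JSON string is what the LLM would be prompted to produce.
call = '{"tool": "add", "params": {"a": 2, "b": 3}}'
print(executor.run(call))  # 5
```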
12. Self Reflection

agent/reflection.py

```python
def reflect(llm, task, result):
    prompt = f"""
Task: {task}
Result: {result}

Did the result solve the task?
If not, suggest the next step.
"""
    return llm(prompt)
```

This is the core mechanism behind AutoGPT.
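Wired into a loop, reflection becomes retry-until-solved. A sketch with a stubbed `llm` (`run_with_reflection` and `stub_llm` are illustration names, not project files; the real project would pass the loaded model):

```python
def reflect(llm, task, result):
    prompt = f"Task: {task}\nResult: {result}\nDid the result solve the task?"
    return llm(prompt)

def run_with_reflection(llm, task, act, max_rounds=3):
    """Retry until reflection reports success or the round budget runs out."""
    result = None
    for _ in range(max_rounds):
        result = act(task)
        verdict = reflect(llm, task, result)
        if "yes" in verdict.lower():
            break
    return result

# Stub LLM: approves anything (a real model would actually judge the result).
def stub_llm(prompt):
    return "Yes, the result solves the task."

print(run_with_reflection(stub_llm, "say hi", lambda t: "hi"))  # hi
```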
13. LangGraph Agent

Built with LangGraph.

agent/graph_agent.py

```python
from typing import TypedDict
from langgraph.graph import StateGraph, END

class AgentState(TypedDict, total=False):
    task: str
    plan: str
    tool_result: str
    reflection: str

# Placeholder nodes; the full project wires in the planner,
# tool executor, and reflection modules shown above.
def planner_node(state):
    state["plan"] = "plan created"
    return state

def tool_node(state):
    state["tool_result"] = "tool executed"
    return state

def reflection_node(state):
    state["reflection"] = "reflection done"
    return state

def build_graph():
    graph = StateGraph(AgentState)
    graph.add_node("planner", planner_node)
    graph.add_node("tool", tool_node)
    graph.add_node("reflection", reflection_node)
    graph.set_entry_point("planner")
    graph.add_edge("planner", "tool")
    graph.add_edge("tool", "reflection")
    graph.add_edge("reflection", END)   # terminate after reflection
    return graph.compile()
```
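The control flow this graph encodes is a linear pipeline; for intuition (or before installing langgraph), here is a dependency-free equivalent of the same three-node flow:

```python
def planner_node(state):
    state["plan"] = "plan created"
    return state

def tool_node(state):
    state["tool_result"] = "tool executed"
    return state

def reflection_node(state):
    state["reflection"] = "reflection done"
    return state

def invoke(state):
    # planner -> tool -> reflection: the same edges as the compiled graph
    for node in (planner_node, tool_node, reflection_node):
        state = node(state)
    return state

state = invoke({"task": "demo"})
print(state["reflection"])  # reflection done
```

LangGraph earns its keep once you add conditional edges (e.g. reflection looping back to the planner), which a plain pipeline cannot express as cleanly.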
14. Autonomous Task Agent

agent/autonomous.py

```python
class AutonomousAgent:
    def __init__(self, graph):
        self.graph = graph

    def run(self, task):
        state = {"task": task}
        result = self.graph.invoke(state)
        return result
```
15. FastAPI Server

server/api.py

```python
from fastapi import FastAPI
from pydantic import BaseModel
from agent.graph_agent import build_graph
from agent.autonomous import AutonomousAgent

app = FastAPI()
graph = build_graph()
agent = AutonomousAgent(graph)

class ChatRequest(BaseModel):
    msg: str

@app.post("/chat")
def chat(req: ChatRequest):
    # Read `msg` from the JSON body, matching the UI's json={"msg": ...} call.
    result = agent.run(req.msg)
    return {"response": result}
```
16. Chat UI

Built with Streamlit.

ui/chat_app.py

```python
import streamlit as st
import requests

st.title("Autonomous AI Agent")

msg = st.text_input("Message")
if st.button("Send"):
    r = requests.post(
        "http://localhost:8000/chat",
        json={"msg": msg},
        timeout=120,   # autonomous runs can take a while
    )
    st.write(r.json())
```
17. Local Deployment on the 4090

Install:

```bash
pip install torch transformers accelerate bitsandbytes \
    langgraph faiss-cpu sentence-transformers \
    fastapi uvicorn streamlit networkx beautifulsoup4 requests
```

Start the API:

```bash
uvicorn server.api:app --reload
```

Launch the UI:

```bash
streamlit run ui/chat_app.py
```
18. Upgrading to True AutoGPT Level

Then add:

1️⃣ Codebase Agent: can analyze an entire repo.

2️⃣ Browser Automation: using Playwright.

3️⃣ Task Queue: Celery.

4️⃣ Long-term memory: Milvus.

5️⃣ Multi-agent collaboration, with multiple agents:

```
Planner Agent
Coder Agent
Research Agent
Critic Agent
```
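One common shape for the four-agent team above is a pipeline in which each agent transforms a shared task state; a stubbed sketch (all agent behaviors here are placeholders, not project code):

```python
def planner_agent(state):
    state["steps"] = ["write code", "verify"]
    return state

def coder_agent(state):
    state["code"] = "print('hello')"
    return state

def research_agent(state):
    state["notes"] = "no external docs needed"
    return state

def critic_agent(state):
    # The critic signs off only if every earlier agent produced output.
    state["approved"] = all(k in state for k in ("steps", "code", "notes"))
    return state

PIPELINE = [planner_agent, coder_agent, research_agent, critic_agent]

state = {"task": "print hello"}
for agent in PIPELINE:
    state = agent(state)
print(state["approved"])  # True
```

In a real system, a critic rejection would route the state back to the planner rather than ending the run.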
19. Strongest Local Agent Stack (4090)

Recommended:

```
LLM        Qwen2.5 14B
Embedding  bge-m3
Vector DB  Milvus
Agent      LangGraph
```

All of it runs on a 4090.
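The project tree lists `config/settings.py` but never shows it; one plausible shape collecting the choices above (every name and default here is an assumption, not project code):

```python
from dataclasses import dataclass

@dataclass
class Settings:
    # Hypothetical settings module matching the recommended 4090 stack.
    llm_model: str = "Qwen/Qwen2.5-14B-Instruct"
    embedding_model: str = "BAAI/bge-m3"
    vector_db: str = "milvus"
    api_host: str = "0.0.0.0"
    api_port: int = 8000
    load_in_4bit: bool = True   # needed to fit a 14B model in 24 GB VRAM

settings = Settings()
print(settings.llm_model)
```

Centralizing these values lets the loader, memory, and server modules import one object instead of hard-coding model names.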
20. At the Full 3000-Line Scale

Capabilities would approach:
- AutoGPT
- OpenDevin
- Devika

The agent would be able to:
- write code automatically
- debug automatically
- research information automatically
- plan tasks autonomously