docs: add Context Engineering document; translate deepagents_sourcecode into Korean

- Context_Engineering.md: new document summarizing agent context-engineering concepts
- Context_Engineering_Research.ipynb: update the research notebook
- deepagents_sourcecode/: translate docstrings and comments into Korean
This commit is contained in:
HyunjunJeon
2026-01-11 17:55:52 +09:00
parent 9a4ae41c84
commit af5fbfabec
76 changed files with 3026 additions and 1471 deletions

Context_Engineering.md Normal file
View File

@@ -0,0 +1,211 @@
# Context Engineering
This document frames **Context Engineering** as the discipline of designing and operating context so that agents actually work well in practice.
- YouTube: https://www.youtube.com/watch?v=6_BcCthVvb8
- PDF: `Context Engineering LangChain.pdf`
- PDF: `Manus Context Engineering LangChain Webinar.pdf`
---
## 1) What is Context Engineering?
The YouTube talk defines Context Engineering as:
- **"the delicate art and science of filling the context window with just the right information for the next step"**
  (around the 203-second mark: "the delicate art and science of filling the context window…")
This definition is not about writing one good prompt line. As an agent runs for tens to hundreds of turns (tool calls included) and its context grows explosively, Context Engineering becomes the systematic design and operations problem of deciding:
- what to **keep**
- what to **discard**
- what to **offload** to external storage
- what to **retrieve** only when it is needed
- what to **isolate**
- and what to **cache** to cut cost and latency.
(LangChain PDF: "Context grows w/ agents", "Typical task… 50 tool calls", "hundreds of turns")
---
## 2) Why "Context Engineering" now?
### 2.1 In agents, context grows
The LangChain PDF stresses that context grows by nature once agents enter the picture:
- a single task requires **many tool calls** (e.g., Manus reports "around 50 tool calls"), and
- in production, **hundreds of turns** of conversation and observations accumulate.
### 2.2 Bigger context can mean worse performance ("context rot")
Citing the `context-rot` material, the LangChain PDF points out that **performance can degrade as context length grows**.
In other words, the intuition that "putting in more makes the model smarter" breaks down.
### 2.3 Context failure modes recur in practice
The LangChain PDF lists four representative failure modes of growing context:
- **Context Poisoning**: incorrect information leaks in and corrupts subsequent decisions and actions
- **Context Distraction**: the model drifts toward repetitive or easy patterns instead of the main goal (e.g., in long contexts it prefers repeating past actions over forming new plans)
- **Context Confusion**: the more numerous and similar the tools, the more wrong or nonexistent tool calls occur
- **Context Clash**: performance drops when consecutive observations or tool results contradict each other
These failure modes are hard to fix with "just write better prompts"; they require **strategies for structuring, maintaining, pruning, and validating context**.
### 2.4 The Manus view: design the context before touching the model
The Manus PDF emphasizes Context Engineering as the most practical boundary between application and model:
- "Context Engineering is the clearest and most practical boundary between application and model." (Manus PDF)
From this standpoint, Manus calls out two traps teams commonly fall into during product development:
- **The First Trap**: the temptation to "just train (fine-tune) our own model"
  → In practice, model iteration speed can throttle product iteration speed (especially pre-PMF), so this choice can slow product development.
- **The Second Trap**: "let's optimize with an action space + reward + rollouts (RL)"
  → But when extension demands such as MCP arrive later, you may have to tear up the design again, so do not needlessly rebuild what foundation-model companies already do well.
In short: before rushing to change the model, first solve **how to structure, reduce, isolate, and retrieve context**.
---
## 3) The five core levers of Context Engineering
The LangChain PDF (and the talk's progression) both present the following five themes as the core levers.
### 3.1 Offload (context offloading)
**Core idea**: not everything needs to live in the message history. Send token-heavy results to the **filesystem or external storage**, and keep only a summary and a reference (pointer) in the conversation.
- LangChain PDF: "Use file system for notes / todo.md / tok-heavy context / long-term memories"
- Talk: "you don't need all context to live in this messages history… offload it… it can be retrieved later"
**Practical patterns**
- Save large tool outputs to files; keep only `storage path + summary + how to reload` in the conversation
- Maintain a "task brief / plan (todo)" as a file so long conversations do not lose direction
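A minimal sketch of this offloading pattern (the directory name and the size threshold are illustrative assumptions, not from the sources):

```python
import hashlib
from pathlib import Path

OFFLOAD_DIR = Path("agent_workspace")   # hypothetical storage location
OFFLOAD_THRESHOLD = 2000                # chars; tune against your context budget

def offload_tool_output(tool_name: str, output: str) -> str:
    """Store a large tool output on disk; return a small pointer message."""
    if len(output) <= OFFLOAD_THRESHOLD:
        return output  # small enough to keep inline
    OFFLOAD_DIR.mkdir(exist_ok=True)
    digest = hashlib.sha256(output.encode()).hexdigest()[:12]
    path = OFFLOAD_DIR / f"{tool_name}-{digest}.txt"
    path.write_text(output, encoding="utf-8")
    summary = output[:200].replace("\n", " ")
    # Only the pointer + summary + reload instructions stay in message history.
    return (
        f"[offloaded] full output saved to {path}\n"
        f"summary: {summary}...\n"
        f"reload with: read_file('{path}')"
    )
```

The conversation keeps a pointer a few lines long instead of the raw output, which can be retrieved later only if the agent actually needs it.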
### 3.2 Reduce (context reduction: prune / compaction / summarization)
**Core idea**: since performance and cost can degrade as context grows, you need the skill of "taking things out," not just "putting things in."
The LangChain PDF mentions "Summarize / prune message history" and "Summarize / prune tool call outputs," while warning about the risk of information loss.
The Manus PDF treats **Compaction vs. Summarization** as a topic of its own:
- **Compaction**: remove unneeded raw text or restructure it so the same meaning fits in fewer tokens
- **Summarization**: have the model generate a natural-language summary to shrink tokens, accepting possible information loss
**Practical patterns**
- Store raw tool outputs externally (Offload) and keep only a summary of the observations in the conversation
- Run summarization/cleanup on a defined trigger (e.g., a context-usage threshold, a turn count, or a fixed interval)
- Structure summaries with a schema such as "decisions / evidence / open questions / next actions" to minimize loss
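The trigger-plus-schema pattern above can be sketched as follows (the token heuristic, budget, and schema fields are assumptions for illustration; a real system would fill the schema with an LLM call):

```python
# Schema fields chosen to minimize information loss during summarization.
SUMMARY_SCHEMA = ("decisions", "evidence", "open_questions", "next_actions")

def estimate_tokens(messages: list[str]) -> int:
    # Rough heuristic: ~4 characters per token for English text.
    return sum(len(m) for m in messages) // 4

def compact_history(messages: list[str], budget: int, keep_recent: int = 4) -> list[str]:
    """Keep recent messages verbatim; fold older ones into one structured note."""
    if estimate_tokens(messages) <= budget:
        return messages  # under budget: no reduction needed
    old, recent = messages[:-keep_recent], messages[-keep_recent:]
    # Placeholder for an LLM-generated summary of the `old` messages.
    summary_lines = [
        f"{field}: <summarize from {len(old)} earlier messages>"
        for field in SUMMARY_SCHEMA
    ]
    return ["[summary]\n" + "\n".join(summary_lines), *recent]
```

Only the oldest messages are folded away; the most recent turns stay verbatim so the agent keeps its immediate working state.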
### 3.3 Retrieve (fetch only when needed)
**Core idea**: do not refill the space freed by offloading and reduction with just anything; bring in **only what the current step needs** via search and retrieval.
The LangChain PDF emphasizes systems that assemble retrieved results into prompts: "Mix of retrieval methods + re-ranking", "Systems to assemble retrievals into prompts", "Retrieve relevant tools based upon tool descriptions".
**Practical patterns**
- Search files/notes/logs with grep/glob/keyword queries (deterministic and debuggable)
- Insert retrieved results together with the rationale ("why this was fetched") so the model understands how to use them
- Tool descriptions are retrieval targets too: "load only the tools you need" to mitigate Context Confusion
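A deterministic keyword retrieval over offloaded note files, attaching the rationale next to each snippet, might look like this (file layout and field names are illustrative):

```python
import re
from pathlib import Path

def retrieve_notes(notes_dir: str, keywords: list[str], max_snippets: int = 3) -> list[dict]:
    """grep-style keyword search over note files; deterministic and debuggable."""
    pattern = re.compile("|".join(map(re.escape, keywords)), re.IGNORECASE)
    hits = []
    for path in sorted(Path(notes_dir).glob("*.md")):
        for line in path.read_text(encoding="utf-8").splitlines():
            if pattern.search(line):
                hits.append({
                    "source": str(path),
                    "snippet": line.strip(),
                    # Carry the rationale so the model knows why this
                    # context was brought back in for the current step.
                    "why": f"matched keyword(s) {keywords} for the current step",
                })
    return hits[:max_snippets]
```

Because it is plain keyword matching over files, every retrieved snippet can be traced back to its source line when debugging.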
### 3.4 Isolate (context isolation)
**Core idea**: cramming every role into one context multiplies poisoning and clashes. Split the work and isolate it into **sub-agents (sub-contexts)**.
The LangChain PDF separates context with multi-agent setups but warns about the risk of conflicting decisions ("Multi-agents make conflicting decisions…").
The Manus PDF borrows wisdom from programming-language concurrency, quoting the Go blog:
- "Do not communicate by sharing memory; instead, share memory by communicating." (Manus PDF)
In other words, do not try to synchronize through **shared memory (one giant common context)**;
separate contexts by role, and coordinate only through **explicit messages and artifacts (summaries, briefs, deliverables)**.
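A minimal sketch of "share memory by communicating" between a coordinator and sub-agents (the brief size and `run_fn` interface are assumptions; `run_fn` stands in for an LLM call):

```python
def run_subagent(task: str, run_fn) -> dict:
    """Run one task in an isolated context; only a small brief crosses back."""
    isolated_context = [f"You are a focused worker. Task: {task}"]  # fresh context
    raw_result = run_fn(isolated_context)  # e.g. an LLM/agent call (assumed)
    # The raw working context never leaves the subagent; only this brief does.
    return {"task": task, "brief": raw_result[:300], "status": "done"}

def coordinator(tasks: list[str], run_fn) -> list[dict]:
    # The coordinator never reads the subagents' contexts, only their briefs,
    # so one subagent's noise cannot poison another's context.
    return [run_subagent(t, run_fn) for t in tasks]
```

The coordinator's context stays small and clean regardless of how much exploration each sub-agent did internally.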
### 3.5 Cache (caching repeated context)
**Core idea**: agents repeat relatively **stable** tokens such as the system prompt, tool descriptions, and policies. Caching them can sharply cut cost and latency.
LangChain PDF:
- "Cache agent instructions, tool descriptions to prefix"
- "Add mutable context / recent observations to suffix"
- mentions cost levers such as "Cached input tokens … 10x cheaper!" (for a particular provider)
**Practical patterns**
- Split the prompt into a **prefix (stable: instructions/tools/policies)** and a **suffix (mutable: recent observations/state)** to maximize cache hits
- To keep the cache stable, never mix frequently changing content into the system/prefix portion
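The prefix/suffix split can be sketched as follows (the section labels are placeholders; provider-side prefix caching is an assumption and only helps if the prefix bytes are byte-identical across calls):

```python
# Stable prefix: instructions, tool descriptions, policies. Built once,
# never touched per turn, so a provider's prefix cache can reuse it.
STABLE_PREFIX = (
    "SYSTEM INSTRUCTIONS\n"
    "TOOL DESCRIPTIONS\n"
    "POLICIES\n"
)

def build_prompt(recent_observations: list[str], state: str) -> str:
    # Volatile content goes strictly at the end (suffix); mixing it into the
    # prefix would invalidate the cache on every call.
    volatile_suffix = "RECENT OBSERVATIONS:\n" + "\n".join(recent_observations)
    volatile_suffix += f"\nSTATE: {state}"
    return STABLE_PREFIX + volatile_suffix

p1 = build_prompt(["obs A"], "step 1")
p2 = build_prompt(["obs B"], "step 2")
# p1 and p2 differ only after the shared, cacheable prefix.
```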
---
## 4) Tools clutter context too: Tool Offloading and a hierarchical action space
The Manus PDF argues that thinking of offloading only as "memory (files)" is not enough:
- "tools themselves also clutter context"
- "Too many tools → confusion, invalid calls"
So it proposes **offloading and layering the tools themselves** ("Offload tools through a hierarchical action space").
The three layers of abstraction the Manus PDF presents:
1. **Function Calling**
   - Pros: schema-safe and standard
   - Cons: changes frequently break the cache; more tools mean more confusion (Context Confusion)
2. **Sandbox Utilities (CLI/shell utilities)**
   - Extend capabilities without changing the model context
   - Easy to have large outputs saved to files (combines with offloading)
3. **Packages & APIs (pre-approved API calls via scripts)**
   - Suited to data-heavy work or chained calls
   - Goal: "Keep model context clean, use memory for reasoning only."
The point: instead of enumerating everything as a function list, provide a **stable higher-level interface** that keeps context small and lets the agent drop to lower levels only when needed.
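A toy sketch of that hierarchy (the tool names and descriptions are invented for illustration): the model sees only a few stable top-level entries, while fine-grained capabilities live in lower layers outside the model context.

```python
# Small, stable top-level action space exposed to the model; the dozens of
# fine-grained capabilities are reached through these entries, not enumerated.
TOP_LEVEL_TOOLS = {
    "run_shell": "Run a sandbox CLI command; large output goes to a file.",
    "read_file": "Read a file previously written by a tool.",
    "run_script": "Execute a pre-approved script that calls packages/APIs.",
}

def dispatch(tool: str, arg: str) -> str:
    """Stable top-level interface; details live below it, out of model context."""
    if tool not in TOP_LEVEL_TOOLS:
        # Keeping the list tiny makes invalid calls easy to catch and correct.
        return f"invalid tool '{tool}'; available: {sorted(TOP_LEVEL_TOOLS)}"
    # Real dispatch would shell out / exec in a sandbox; this stub only
    # demonstrates the routing through a stable interface.
    return f"{tool}({arg}) routed to lower layer"
```

Because the top-level interface rarely changes, it also stays cache-friendly (see §3.5), while new capabilities are added below it without touching the model-visible tool list.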
---
## 5) Operational checklist (for practical use)
The checklist below turns the shared themes of the sources above into operational steps.
1. **Define a context budget**: set thresholds for when to shrink, based on the model window, cost, and latency.
2. **Offload by default**: do not keep raw large tool outputs; send them to files/storage and keep a pointer.
3. **Layer the reduction strategy**: where possible, apply Compaction/Prune before Summarization to control cost and information loss.
4. **Retrieval assembly**: search is the start, not the end. You need assembly logic for "what goes in, why, and in what order."
5. **Set isolation criteria**: move contamination-prone work (exploration, long outputs, multi-step analysis) into sub-agents.
6. **Cache-friendly prompts**: split into a stable prefix and a mutable suffix.
7. **Defend against failure modes**: observe Poisoning/Distraction/Confusion/Clash in logs and attach mitigations (cleanup, validation, tool-loading limits, contradiction checks).
8. **Beware over-engineering**: as the Manus PDF warns, "more context ≠ more intelligence." The biggest gains often come from **removing**, not adding.
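The checklist items on budgets and triggers could be encoded as a small config that is logged every turn (all numbers here are illustrative assumptions, not recommendations from the sources):

```python
# Illustrative operational thresholds for a context budget; log these per
# turn so failure modes become observable rather than anecdotal.
BUDGET = {
    "max_context_tokens": 60_000,    # when to trigger Reduce
    "offload_over_chars": 2_000,     # when to trigger Offload
    "summarize_every_turns": 20,     # periodic compaction interval
    "max_loaded_tools": 12,          # cap to limit Context Confusion
}

def should_reduce(current_tokens: int, turn: int) -> bool:
    """Reduce when over the token budget or at the periodic interval."""
    return (
        current_tokens >= BUDGET["max_context_tokens"]
        or turn % BUDGET["summarize_every_turns"] == 0
    )
```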
---
## 6) Evaluating `Context_Engineering_Research.ipynb`: does it represent this document well?
### Conclusion (summary)
- **It is good enough at explaining the five strategies (Offload/Reduce/Retrieve/Isolate/Cache).**
- The recent update added overviews of "why this is a problem," the failure modes, and Tool Offloading, which improves the exposition, but the **reproducible experiments and operational rules** backing them still need work.
### What it represents well
- It **explicitly structures the five core strategies** (a table plus per-strategy sections).
- For each of Offloading/Reduction/Retrieval/Isolation/Caching it connects:
  - "principle (trigger/effect)"
  - "implementation (DeepAgents middleware/configuration)"
  - "a simple simulation experiment"
  so the concept-to-implementation flow is clear.
### Gaps (relative to this document)
- It **explains** the four failure modes (Poisoning/Distraction/Confusion/Clash) but does not **reflect them directly in the notebook's experiments or design** (reproduce, then compare mitigations).
- The Manus PDF's key point that **"tools clutter context too"** (Tool Offloading) now has an overview, but there is no experiment that reduces the **confusion caused by too many or too-similar tools**.
- The Manus message of **guarding against excess context and over-design ("Removing, not adding")** is not distilled into a checklist or decision guide.
### Suggested additions to make the notebook a complete "Context Engineering explainer"
To lift the notebook from a "five-strategy demo" to a full "Context Engineering explainer," just these four additions would make a tangible difference:
1. (Done) **Strengthen the introduction (problem statement)**: state context growth, context rot, and the four failure modes in the first section.
2. (Added as simulations) **Mini experiments per failure mode**: add reproduce-then-mitigate simulations for Confusion (too many/similar tools), Clash (contradictory tool outputs), Distraction (repetitive behavior in long logs), and Poisoning (unverified facts).
3. (Done) **Tool Offloading / hierarchy section**: introduce patterns such as "load/limit tools via retrieval" or "simplify via higher-level wrapper tools."
4. **Deletion-centric operating rules**: add rules for when to add/remove/isolate (thresholds, intervals, schemas) plus logged metrics (tokens/cost/failure rate).

File diff suppressed because it is too large.

View File

@@ -1,6 +1,5 @@
#!/usr/bin/env python3
"""
Ralph Mode - Autonomous looping for DeepAgents
"""Ralph Mode - DeepAgents를 위한 자율 루프 실행 예제입니다.
Ralph is an autonomous looping pattern created by Geoff Huntley.
Each loop starts with fresh context. The filesystem and git serve as memory.
@@ -10,24 +9,24 @@ Usage:
python ralph_mode.py "Build a Python course. Use git."
python ralph_mode.py "Build a REST API" --iterations 5
"""
import warnings
warnings.filterwarnings("ignore", message="Core Pydantic V1 functionality")
import argparse
import asyncio
import tempfile
import warnings
from pathlib import Path
from deepagents_cli.agent import create_cli_agent
from deepagents_cli.config import console, COLORS, SessionState, create_model
from deepagents_cli.config import COLORS, SessionState, console, create_model
from deepagents_cli.execution import execute_task
from deepagents_cli.ui import TokenTracker
warnings.filterwarnings("ignore", message="Core Pydantic V1 functionality")
async def ralph(task: str, max_iterations: int = 0, model_name: str = None):
"""Run agent in Ralph loop with beautiful CLI output."""
async def ralph(task: str, max_iterations: int = 0, model_name: str | None = None) -> None:
"""Ralph 루프 패턴으로 에이전트를 반복 실행합니다."""
work_dir = tempfile.mkdtemp(prefix="ralph-")
model = create_model(model_name)
agent, backend = create_cli_agent(
model=model,
@@ -37,19 +36,21 @@ async def ralph(task: str, max_iterations: int = 0, model_name: str = None):
)
session_state = SessionState(auto_approve=True)
token_tracker = TokenTracker()
console.print(f"\n[bold {COLORS['primary']}]Ralph Mode[/bold {COLORS['primary']}]")
console.print(f"[dim]Task: {task}[/dim]")
console.print(f"[dim]Iterations: {'unlimited (Ctrl+C to stop)' if max_iterations == 0 else max_iterations}[/dim]")
console.print(
f"[dim]Iterations: {'unlimited (Ctrl+C to stop)' if max_iterations == 0 else max_iterations}[/dim]"
)
console.print(f"[dim]Working directory: {work_dir}[/dim]\n")
iteration = 1
try:
while max_iterations == 0 or iteration <= max_iterations:
console.print(f"\n[bold cyan]{'='*60}[/bold cyan]")
console.print(f"[bold cyan]RALPH ITERATION {iteration}[/bold cyan]")
console.print(f"[bold cyan]{'='*60}[/bold cyan]\n")
iter_display = f"{iteration}/{max_iterations}" if max_iterations > 0 else str(iteration)
prompt = f"""## Iteration {iter_display}
@@ -68,13 +69,13 @@ Make progress. You'll be called again."""
token_tracker,
backend=backend,
)
console.print(f"\n[dim]...continuing to iteration {iteration + 1}[/dim]")
iteration += 1
except KeyboardInterrupt:
console.print(f"\n[bold yellow]Stopped after {iteration} iterations[/bold yellow]")
# Show created files
console.print(f"\n[bold]Files created in {work_dir}:[/bold]")
for f in sorted(Path(work_dir).rglob("*")):
@@ -82,7 +83,7 @@ Make progress. You'll be called again."""
console.print(f" {f.relative_to(work_dir)}", style="dim")
def main():
def main() -> None:
parser = argparse.ArgumentParser(
description="Ralph Mode - Autonomous looping for DeepAgents",
formatter_class=argparse.RawDescriptionHelpFormatter,

View File

@@ -0,0 +1 @@
"""DeepAgents ACP 패키지입니다."""

View File

@@ -1,4 +1,4 @@
"""DeepAgents ACP server implementation."""
"""DeepAgents ACP 서버 구현입니다."""
from __future__ import annotations
@@ -7,38 +7,40 @@ import uuid
from typing import Any, Literal
from acp import (
PROTOCOL_VERSION,
Agent,
AgentSideConnection,
PROTOCOL_VERSION,
stdio_streams,
)
from acp.schema import (
AgentMessageChunk,
AgentPlanUpdate,
AgentThoughtChunk,
AllowedOutcome,
CancelNotification,
ContentToolCallContent,
DeniedOutcome,
Implementation,
InitializeRequest,
InitializeResponse,
LoadSessionRequest,
LoadSessionResponse,
NewSessionRequest,
NewSessionResponse,
PermissionOption,
PlanEntry,
PromptRequest,
PromptResponse,
SessionNotification,
TextContentBlock,
Implementation,
AgentThoughtChunk,
ToolCallProgress,
ContentToolCallContent,
LoadSessionResponse,
SetSessionModeResponse,
SetSessionModelResponse,
CancelNotification,
LoadSessionRequest,
SetSessionModeRequest,
SetSessionModelRequest,
AgentPlanUpdate,
PlanEntry,
PermissionOption,
RequestPermissionRequest,
AllowedOutcome,
DeniedOutcome,
SessionNotification,
SetSessionModelRequest,
SetSessionModelResponse,
SetSessionModeRequest,
SetSessionModeResponse,
TextContentBlock,
ToolCallProgress,
)
from acp.schema import (
ToolCall as ACPToolCall,
)
from deepagents import create_deep_agent
@@ -52,14 +54,14 @@ from langgraph.types import Command, Interrupt
class DeepagentsACP(Agent):
"""ACP Agent implementation wrapping deepagents."""
"""deepagents를 감싼 ACP Agent 구현체입니다."""
def __init__(
self,
connection: AgentSideConnection,
agent_graph: CompiledStateGraph,
) -> None:
"""Initialize the DeepAgents agent.
"""DeepAgents 에이전트를 초기화합니다.
Args:
connection: The ACP connection for communicating with the client
@@ -68,15 +70,15 @@ class DeepagentsACP(Agent):
self._connection = connection
self._agent_graph = agent_graph
self._sessions: dict[str, dict[str, Any]] = {}
# Track tool calls by ID for matching with ToolMessages
# Maps tool_call_id -> ToolCall TypedDict
# ToolMessage와 매칭하기 위해 tool_call_id별 tool call을 추적합니다.
# tool_call_id -> ToolCall TypedDict
self._tool_calls: dict[str, ToolCall] = {}
async def initialize(
self,
params: InitializeRequest,
) -> InitializeResponse:
"""Initialize the agent and return capabilities."""
"""에이전트를 초기화하고 capabilities를 반환합니다."""
return InitializeResponse(
protocolVersion=PROTOCOL_VERSION,
agentInfo=Implementation(
@@ -90,7 +92,7 @@ class DeepagentsACP(Agent):
self,
params: NewSessionRequest,
) -> NewSessionResponse:
"""Create a new session with a deepagents instance."""
"""DeepAgents 인스턴스로 새 세션을 생성합니다."""
session_id = str(uuid.uuid4())
# Store session state with the shared agent graph
self._sessions[session_id] = {
@@ -105,7 +107,7 @@ class DeepagentsACP(Agent):
params: PromptRequest,
message: AIMessageChunk,
) -> None:
"""Handle an AIMessageChunk and send appropriate notifications.
"""AIMessageChunk를 처리하고 적절한 알림(notification)을 전송합니다.
Args:
params: The prompt request parameters
@@ -159,7 +161,7 @@ class DeepagentsACP(Agent):
params: PromptRequest,
message: AIMessage,
) -> None:
"""Handle completed tool calls from an AIMessage and send notifications.
"""AIMessage의 완료된 tool call을 처리하고 알림(notification)을 전송합니다.
Args:
params: The prompt request parameters
@@ -214,7 +216,7 @@ class DeepagentsACP(Agent):
tool_call: ToolCall,
message: ToolMessage,
) -> None:
"""Handle a ToolMessage and send appropriate notifications.
"""ToolMessage를 처리하고 적절한 알림(notification)을 전송합니다.
Args:
params: The prompt request parameters
@@ -264,7 +266,7 @@ class DeepagentsACP(Agent):
params: PromptRequest,
todos: list[dict[str, Any]],
) -> None:
"""Handle todo list updates from the tools node.
"""Tools node에서 발생한 todo list 업데이트를 처리합니다.
Args:
params: The prompt request parameters
@@ -309,7 +311,7 @@ class DeepagentsACP(Agent):
params: PromptRequest,
interrupt: Interrupt,
) -> list[dict[str, Any]]:
"""Handle a LangGraph interrupt and request permission from the client.
"""LangGraph interrupt를 처리하고 클라이언트에 권한을 요청합니다.
Args:
params: The prompt request parameters
@@ -423,7 +425,7 @@ class DeepagentsACP(Agent):
stream_input: dict[str, Any] | Command,
config: dict[str, Any],
) -> list[Interrupt]:
"""Stream agent execution and handle updates, returning any interrupts.
"""에이전트 실행을 스트리밍하고 업데이트를 처리하며, interrupt가 있으면 반환합니다.
Args:
params: The prompt request parameters
@@ -496,7 +498,7 @@ class DeepagentsACP(Agent):
self,
params: PromptRequest,
) -> PromptResponse:
"""Handle a user prompt and stream responses."""
"""사용자 프롬프트를 처리하고 응답을 스트리밍합니다."""
session_id = params.sessionId
session = self._sessions.get(session_id)
@@ -541,50 +543,50 @@ class DeepagentsACP(Agent):
return PromptResponse(stopReason="end_turn")
async def authenticate(self, params: Any) -> Any | None:
"""Authenticate (optional)."""
# Authentication not required for now
"""인증 처리(선택)."""
# 현재는 인증이 필요하지 않습니다.
return None
async def extMethod(self, method: str, params: dict[str, Any]) -> dict[str, Any]:
"""Handle extension methods (optional)."""
"""Extension method 처리(선택)."""
raise NotImplementedError(f"Extension method {method} not supported")
async def extNotification(self, method: str, params: dict[str, Any]) -> None:
"""Handle extension notifications (optional)."""
"""Extension notification 처리(선택)."""
pass
async def cancel(self, params: CancelNotification) -> None:
"""Cancel a running session."""
# TODO: Implement cancellation logic
"""실행 중인 세션을 취소합니다."""
# TODO: 취소 로직 구현
pass
async def loadSession(
self,
params: LoadSessionRequest,
) -> LoadSessionResponse | None:
"""Load an existing session (optional)."""
# Not implemented yet - would need to serialize/deserialize session state
"""기존 세션 로드(선택)."""
# 미구현: 세션 state의 serialize/deserialize가 필요합니다.
return None
async def setSessionMode(
self,
params: SetSessionModeRequest,
) -> SetSessionModeResponse | None:
"""Set session mode (optional)."""
# Could be used to switch between different agent modes
"""세션 모드 설정(선택)."""
# 다른 agent mode로 전환하는 용도로 사용할 수 있습니다.
return None
async def setSessionModel(
self,
params: SetSessionModelRequest,
) -> SetSessionModelResponse | None:
"""Set session model (optional)."""
# Not supported - model is configured at agent graph creation time
"""세션 모델 설정(선택)."""
# 미지원: 모델은 agent graph 생성 시점에 고정됩니다.
return None
async def main() -> None:
"""Main entry point for running the ACP server."""
"""ACP 서버 실행용 메인 엔트리포인트입니다."""
# from deepagents_cli.agent import create_agent_with_config
# from deepagents_cli.config import create_model
# from deepagents_cli.tools import fetch_url, http_request, web_search
@@ -647,7 +649,7 @@ async def main() -> None:
def cli_main() -> None:
"""Synchronous CLI entry point for the ACP server."""
"""ACP 서버 실행용 동기 CLI 엔트리포인트입니다."""
asyncio.run(main())

View File

@@ -0,0 +1 @@
"""deepagents_acp 테스트 패키지입니다."""

View File

@@ -1,10 +1,8 @@
"""Fake chat models for testing purposes."""
"""테스트용 가짜(chat) 모델 구현입니다."""
import re
from collections.abc import Callable, Iterator, Sequence
from typing import Any, Literal, cast
from typing_extensions import override
from typing import Any, cast
from langchain_core.callbacks import CallbackManagerForLLMRun
from langchain_core.language_models import LanguageModelInput
@@ -13,10 +11,11 @@ from langchain_core.messages import AIMessage, AIMessageChunk, BaseMessage
from langchain_core.outputs import ChatGeneration, ChatGenerationChunk, ChatResult
from langchain_core.runnables import Runnable
from langchain_core.tools import BaseTool
from typing_extensions import override
class GenericFakeChatModel(BaseChatModel):
"""Generic fake chat model that can be used to test the chat model interface.
r"""Generic fake chat model that can be used to test the chat model interface.
* Chat model should be usable in both sync and async tests
* Invokes `on_llm_new_token` to allow for testing of callback related code for new
@@ -29,7 +28,7 @@ class GenericFakeChatModel(BaseChatModel):
- None (default): Return content in a single chunk (no streaming)
- A string delimiter (e.g., " "): Split content on this delimiter,
preserving the delimiter as separate chunks
- A regex pattern (e.g., r"(\\s)"): Split using the pattern with a capture
- A regex pattern (e.g., r"(\s)"): Split using the pattern with a capture
group to preserve delimiters
Examples:
@@ -52,7 +51,7 @@ class GenericFakeChatModel(BaseChatModel):
"""
messages: Iterator[AIMessage | str]
"""Get an iterator over messages.
"""메시지 이터레이터를 가져옵니다.
This can be expanded to accept other types like Callables / dicts / strings
to make the interface more generic if needed.
@@ -62,7 +61,7 @@ class GenericFakeChatModel(BaseChatModel):
"""
stream_delimiter: str | None = None
"""Delimiter for chunking content during streaming.
"""스트리밍 시 content를 chunk로 나누기 위한 delimiter입니다.
- None (default): No chunking, returns content in a single chunk
- String: Split content on this exact string, preserving delimiter as chunks
@@ -227,5 +226,5 @@ class GenericFakeChatModel(BaseChatModel):
tool_choice: str | None = None,
**kwargs: Any,
) -> Runnable[LanguageModelInput, AIMessage]:
"""Override bind_tools to return self for testing purposes."""
"""테스트 목적상 `bind_tools`를 오버라이드하여 자기 자신을 반환합니다."""
return self

View File

@@ -1,19 +1,22 @@
"""deepagents_acp 서버 동작을 검증하는 테스트들입니다."""
from contextlib import asynccontextmanager
from typing import Any
from acp.schema import NewSessionRequest, PromptRequest
from acp.schema import (
TextContentBlock,
AllowedOutcome,
NewSessionRequest,
PromptRequest,
RequestPermissionRequest,
RequestPermissionResponse,
AllowedOutcome,
TextContentBlock,
)
from deepagents_acp.server import DeepagentsACP
from dirty_equals import IsUUID
from langchain_core.messages import AIMessage, BaseMessage
from langchain_core.tools import tool
from langgraph.checkpoint.memory import InMemorySaver
from deepagents_acp.server import DeepagentsACP
from tests.chat_model import GenericFakeChatModel
@@ -68,13 +71,13 @@ async def deepagents_acp_test_context(
stream_delimiter: str | None = r"(\s)",
middleware: list[Any] | None = None,
):
"""Context manager for testing DeepagentsACP.
r"""Context manager for testing DeepagentsACP.
Args:
messages: List of messages for the fake model to return
prompt_request: The prompt request to send to the agent
tools: List of tools to provide to the agent (defaults to [])
stream_delimiter: How to chunk content when streaming (default: r"(\\s)" for whitespace)
stream_delimiter: How to chunk content when streaming (default: r"(\s)" for whitespace)
middleware: Optional middleware to add to the agent graph
Yields:
@@ -422,8 +425,8 @@ async def test_fake_chat_model_streaming() -> None:
async def test_human_in_the_loop_approval() -> None:
"""Test that DeepagentsACP handles HITL interrupts and permission requests correctly."""
from langchain.agents.middleware import HumanInTheLoopMiddleware
from deepagents.graph import create_deep_agent
from langchain.agents.middleware import HumanInTheLoopMiddleware
prompt_request = PromptRequest(
sessionId="", # Will be set below

View File

@@ -1,4 +1,4 @@
"""DeepAgents CLI - Interactive AI coding assistant."""
"""DeepAgents CLI - 대화형 AI 코딩 어시스턴트입니다."""
from deepagents_cli.main import cli_main

View File

@@ -1,4 +1,7 @@
"""Allow running the CLI as: python -m deepagents.cli."""
"""`python -m deepagents.cli` 형태로 CLI를 실행할 수 있게 하는 엔트리포인트입니다.
Allow running the CLI as: python -m deepagents.cli.
"""
from deepagents_cli.main import cli_main

View File

@@ -1,3 +1,6 @@
"""Version information for deepagents-cli."""
"""deepagents-cli 버전 정보를 제공합니다.
Version information for deepagents-cli.
"""
__version__ = "0.0.12"

View File

@@ -1,4 +1,6 @@
"""Agent management and creation for the CLI."""
"""deepagents-cli에서 에이전트를 생성하고 관리하는 로직입니다."""
# ruff: noqa: E501
import os
import shutil
@@ -25,9 +27,11 @@ from deepagents_cli.config import COLORS, config, console, get_default_coding_in
from deepagents_cli.integrations.sandbox_factory import get_default_working_dir
from deepagents_cli.shell import ShellMiddleware
DESCRIPTION_PREVIEW_LIMIT = 500
def list_agents() -> None:
"""List all available agents."""
"""사용 가능한 모든 에이전트를 나열합니다."""
agents_dir = settings.user_deepagents_dir
if not agents_dir.exists() or not any(agents_dir.iterdir()):
@@ -58,7 +62,7 @@ def list_agents() -> None:
def reset_agent(agent_name: str, source_agent: str | None = None) -> None:
"""Reset an agent to default or copy from another agent."""
"""에이전트를 기본값으로 리셋하거나 다른 에이전트 설정을 복사합니다."""
agents_dir = settings.user_deepagents_dir
agent_dir = agents_dir / agent_name
@@ -92,7 +96,7 @@ def reset_agent(agent_name: str, source_agent: str | None = None) -> None:
def get_system_prompt(assistant_id: str, sandbox_type: str | None = None) -> str:
"""Get the base system prompt for the agent.
"""에이전트의 기본 system prompt를 생성합니다.
Args:
assistant_id: The agent identifier for path references
@@ -190,7 +194,7 @@ The todo list is a planning tool - use it judiciously to avoid overwhelming the
def _format_write_file_description(
tool_call: ToolCall, _state: AgentState, _runtime: Runtime
) -> str:
"""Format write_file tool call for approval prompt."""
"""승인 프롬프트에 표시할 `write_file` 도구 호출 설명을 포맷팅합니다."""
args = tool_call["args"]
file_path = args.get("file_path", "unknown")
content = args.get("content", "")
@@ -204,7 +208,7 @@ def _format_write_file_description(
def _format_edit_file_description(
tool_call: ToolCall, _state: AgentState, _runtime: Runtime
) -> str:
"""Format edit_file tool call for approval prompt."""
"""승인 프롬프트에 표시할 `edit_file` 도구 호출 설명을 포맷팅합니다."""
args = tool_call["args"]
file_path = args.get("file_path", "unknown")
replace_all = bool(args.get("replace_all", False))
@@ -218,7 +222,7 @@ def _format_edit_file_description(
def _format_web_search_description(
tool_call: ToolCall, _state: AgentState, _runtime: Runtime
) -> str:
"""Format web_search tool call for approval prompt."""
"""승인 프롬프트에 표시할 `web_search` 도구 호출 설명을 포맷팅합니다."""
args = tool_call["args"]
query = args.get("query", "unknown")
max_results = args.get("max_results", 5)
@@ -229,7 +233,7 @@ def _format_web_search_description(
def _format_fetch_url_description(
tool_call: ToolCall, _state: AgentState, _runtime: Runtime
) -> str:
"""Format fetch_url tool call for approval prompt."""
"""승인 프롬프트에 표시할 `fetch_url` 도구 호출 설명을 포맷팅합니다."""
args = tool_call["args"]
url = args.get("url", "unknown")
timeout = args.get("timeout", 30)
@@ -238,7 +242,7 @@ def _format_fetch_url_description(
def _format_task_description(tool_call: ToolCall, _state: AgentState, _runtime: Runtime) -> str:
"""Format task (subagent) tool call for approval prompt.
"""승인 프롬프트에 표시할 `task`(서브에이전트) 도구 호출 설명을 포맷팅합니다.
The task tool signature is: task(description: str, subagent_type: str)
The description contains all instructions that will be sent to the subagent.
@@ -249,8 +253,8 @@ def _format_task_description(tool_call: ToolCall, _state: AgentState, _runtime:
# Truncate description if too long for display
description_preview = description
if len(description) > 500:
description_preview = description[:500] + "..."
if len(description) > DESCRIPTION_PREVIEW_LIMIT:
description_preview = description[:DESCRIPTION_PREVIEW_LIMIT] + "..."
return (
f"Subagent Type: {subagent_type}\n\n"
@@ -263,21 +267,21 @@ def _format_task_description(tool_call: ToolCall, _state: AgentState, _runtime:
def _format_shell_description(tool_call: ToolCall, _state: AgentState, _runtime: Runtime) -> str:
"""Format shell tool call for approval prompt."""
"""승인 프롬프트에 표시할 `shell` 도구 호출 설명을 포맷팅합니다."""
args = tool_call["args"]
command = args.get("command", "N/A")
return f"Shell Command: {command}\nWorking Directory: {Path.cwd()}"
def _format_execute_description(tool_call: ToolCall, _state: AgentState, _runtime: Runtime) -> str:
"""Format execute tool call for approval prompt."""
"""승인 프롬프트에 표시할 `execute` 도구 호출 설명을 포맷팅합니다."""
args = tool_call["args"]
command = args.get("command", "N/A")
return f"Execute Command: {command}\nLocation: Remote Sandbox"
def _add_interrupt_on() -> dict[str, InterruptOnConfig]:
"""Configure human-in-the-loop interrupt_on settings for destructive tools."""
"""파괴적인 도구에 대한 HITL(Human-In-The-Loop) interrupt_on 설정을 구성합니다."""
shell_interrupt_config: InterruptOnConfig = {
"allowed_decisions": ["approve", "reject"],
"description": _format_shell_description,
@@ -337,7 +341,7 @@ def create_cli_agent(
enable_shell: bool = True,
checkpointer: BaseCheckpointSaver | None = None,
) -> tuple[Pregel, CompositeBackend]:
"""Create a CLI-configured agent with flexible options.
"""옵션을 유연하게 조합할 수 있는 CLI용 에이전트를 생성합니다.
This is the main entry point for creating a deepagents CLI agent, usable both
internally and from external code (e.g., benchmarking frameworks, Harbor).
@@ -442,12 +446,7 @@ def create_cli_agent(
system_prompt = get_system_prompt(assistant_id=assistant_id, sandbox_type=sandbox_type)
# Configure interrupt_on based on auto_approve setting
if auto_approve:
# No interrupts - all tools run automatically
interrupt_on = {}
else:
# Full HITL for destructive operations
interrupt_on = _add_interrupt_on()
interrupt_on = {} if auto_approve else _add_interrupt_on()
composite_backend = CompositeBackend(
default=backend,

View File

@@ -1,9 +1,13 @@
"""Textual UI application for deepagents-cli."""
"""deepagents-cli의 Textual UI 애플리케이션입니다.
Textual UI application for deepagents-cli.
"""
from __future__ import annotations
import asyncio
import contextlib
import logging
import subprocess
import uuid
from pathlib import Path
@@ -31,6 +35,10 @@ from deepagents_cli.widgets.messages import (
from deepagents_cli.widgets.status import StatusBar
from deepagents_cli.widgets.welcome import WelcomeBanner
logger = logging.getLogger(__name__)
TOKENS_K_THRESHOLD = 1000
if TYPE_CHECKING:
from langgraph.pregel import Pregel
from textual.app import ComposeResult
@@ -382,41 +390,49 @@ class DeepAgentsApp(App):
"""
cmd = command.lower().strip()
if cmd in ("/quit", "/exit", "/q"):
if cmd in {"/quit", "/exit", "/q"}:
self.exit()
elif cmd == "/help":
await self._mount_message(UserMessage(command))
return
await self._mount_message(UserMessage(command))
if cmd == "/help":
await self._mount_message(
SystemMessage("Commands: /quit, /clear, /tokens, /threads, /help")
)
elif cmd == "/clear":
return
if cmd == "/clear":
await self._clear_messages()
# Reset thread to start fresh conversation
if self._session_state:
new_thread_id = self._session_state.reset_thread()
await self._mount_message(SystemMessage(f"Started new session: {new_thread_id}"))
elif cmd == "/threads":
await self._mount_message(UserMessage(command))
return
if cmd == "/threads":
if self._session_state:
await self._mount_message(
SystemMessage(f"Current session: {self._session_state.thread_id}")
)
else:
await self._mount_message(SystemMessage("No active session"))
elif cmd == "/tokens":
await self._mount_message(UserMessage(command))
return
if cmd == "/tokens":
if self._token_tracker and self._token_tracker.current_context > 0:
count = self._token_tracker.current_context
if count >= 1000:
formatted = f"{count / 1000:.1f}K"
else:
formatted = str(count)
formatted = (
f"{count / TOKENS_K_THRESHOLD:.1f}K"
if count >= TOKENS_K_THRESHOLD
else str(count)
)
await self._mount_message(SystemMessage(f"Current context: {formatted} tokens"))
else:
await self._mount_message(SystemMessage("No token usage yet"))
else:
await self._mount_message(UserMessage(command))
await self._mount_message(SystemMessage(f"Unknown command: {cmd}"))
return
await self._mount_message(SystemMessage(f"Unknown command: {cmd}"))
async def _handle_user_message(self, message: str) -> None:
"""Handle a user message to send to the agent.
@@ -572,8 +588,8 @@ class DeepAgentsApp(App):
if tool_msg.has_output:
tool_msg.toggle_output()
return
except Exception:
pass
except Exception as err: # noqa: BLE001
logger.debug("Failed to toggle tool output.", exc_info=err)
# Approval menu action handlers (delegated from App-level bindings)
# NOTE: These only activate when approval widget is pending AND input is not focused

View File

@@ -1,14 +1,21 @@
"""Clipboard utilities for deepagents-cli."""
"""deepagents-cli에서 클립보드 연동을 위한 유틸리티입니다.
Clipboard utilities for deepagents-cli.
"""
from __future__ import annotations
import base64
import logging
import os
from pathlib import Path
from typing import TYPE_CHECKING
if TYPE_CHECKING:
from textual.app import App
logger = logging.getLogger(__name__)
_PREVIEW_MAX_LENGTH = 40
@@ -19,7 +26,7 @@ def _copy_osc52(text: str) -> None:
if os.environ.get("TMUX"):
osc52_seq = f"\033Ptmux;\033{osc52_seq}\033\\"
with open("/dev/tty", "w") as tty:
with Path("/dev/tty").open("w") as tty:
tty.write(osc52_seq)
tty.flush()
@@ -48,7 +55,8 @@ def copy_selection_to_clipboard(app: App) -> None:
try:
result = widget.get_selection(selection)
except Exception:
except (AttributeError, ValueError, TypeError) as err:
logger.debug("Failed to read selection from widget.", exc_info=err)
continue
if not result:
@@ -77,14 +85,16 @@ def copy_selection_to_clipboard(app: App) -> None:
for copy_fn in copy_methods:
try:
copy_fn(combined_text)
except (OSError, RuntimeError, ValueError) as err:
logger.debug("Clipboard copy method failed.", exc_info=err)
continue
else:
app.notify(
f'"{_shorten_preview(selected_texts)}" copied',
severity="information",
timeout=2,
)
return
except Exception:
continue
# If all methods fail, still notify but warn
app.notify(
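위 diff의 OSC 52 복사 로직(tmux 패스스루 포함)을 단독으로 실행해 볼 수 있게 옮겨 본 예시 스케치입니다. 함수 이름 `build_osc52_sequence`와 시그니처는 설명을 위한 가정입니다.

```python
import base64


def build_osc52_sequence(text: str, *, tmux: bool = False) -> str:
    """주어진 텍스트를 클립보드에 복사하는 OSC 52 시퀀스를 만든다(예시)."""
    payload = base64.b64encode(text.encode("utf-8")).decode("ascii")
    seq = f"\033]52;c;{payload}\a"
    if tmux:
        # tmux 내부에서는 DCS 패스스루로 한 번 감싸야 바깥 터미널까지 전달된다.
        seq = f"\033Ptmux;\033{seq}\033\\"
    return seq
```

원본 코드처럼 `TMUX` 환경 변수 유무로 분기해 tmux 안에서만 DCS로 감싸는 이유는, 감싸지 않으면 바깥 터미널 에뮬레이터가 OSC 52를 받지 못하기 때문입니다.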


@@ -1,4 +1,7 @@
"""Configuration, constants, and model creation for the CLI."""
"""CLI 설정/상수/모델 생성 관련 로직입니다.
Configuration, constants, and model creation for the CLI.
"""
import os
import re
@@ -24,7 +27,7 @@ if _deepagents_project:
os.environ["LANGSMITH_PROJECT"] = _deepagents_project
# Now safe to import LangChain modules
from langchain_core.language_models import BaseChatModel
from langchain_core.language_models import BaseChatModel # noqa: E402
# Color scheme
COLORS = {
@@ -115,12 +118,12 @@ def _find_project_agent_md(project_root: Path) -> list[Path]:
"""
paths = []
# Check .deepagents/AGENTS.md (preferred)
# Prefer the `.deepagents/AGENTS.md` location when present.
deepagents_md = project_root / ".deepagents" / "AGENTS.md"
if deepagents_md.exists():
paths.append(deepagents_md)
# Check root AGENTS.md (fallback, but also include if both exist)
# Also look for a repository-root `AGENTS.md` (and include both if both exist).
root_md = project_root / "AGENTS.md"
if root_md.exists():
paths.append(root_md)
@@ -377,7 +380,13 @@ settings = Settings.from_environment()
class SessionState:
"""Holds mutable session state (auto-approve mode, etc)."""
def __init__(self, auto_approve: bool = False, no_splash: bool = False) -> None:
def __init__(self, *, auto_approve: bool = False, no_splash: bool = False) -> None:
"""세션 상태를 초기화합니다.
Args:
auto_approve: 도구 호출을 자동 승인할지 여부
no_splash: 시작 시 스플래시 화면을 숨길지 여부
"""
self.auto_approve = auto_approve
self.no_splash = no_splash
self.exit_hint_until: float | None = None
@@ -439,7 +448,8 @@ def create_model(model_name_override: str | None = None) -> BaseChatModel:
provider = _detect_provider(model_name_override)
if not provider:
console.print(
f"[bold red]Error:[/bold red] Could not detect provider from model name: {model_name_override}"
"[bold red]Error:[/bold red] Could not detect provider from model name: "
f"{model_name_override}"
)
console.print("\nSupported model name patterns:")
console.print(" - OpenAI: gpt-*, o1-*, o3-*")
@@ -450,17 +460,20 @@ def create_model(model_name_override: str | None = None) -> BaseChatModel:
# Check if API key for detected provider is available
if provider == "openai" and not settings.has_openai:
console.print(
f"[bold red]Error:[/bold red] Model '{model_name_override}' requires OPENAI_API_KEY"
"[bold red]Error:[/bold red] Model "
f"'{model_name_override}' requires OPENAI_API_KEY"
)
sys.exit(1)
elif provider == "anthropic" and not settings.has_anthropic:
console.print(
f"[bold red]Error:[/bold red] Model '{model_name_override}' requires ANTHROPIC_API_KEY"
"[bold red]Error:[/bold red] Model "
f"'{model_name_override}' requires ANTHROPIC_API_KEY"
)
sys.exit(1)
elif provider == "google" and not settings.has_google:
console.print(
f"[bold red]Error:[/bold red] Model '{model_name_override}' requires GOOGLE_API_KEY"
"[bold red]Error:[/bold red] Model "
f"'{model_name_override}' requires GOOGLE_API_KEY"
)
sys.exit(1)
@@ -510,3 +523,6 @@ def create_model(model_name_override: str | None = None) -> BaseChatModel:
temperature=0,
max_tokens=None,
)
msg = f"Unsupported model provider: {provider}"
raise RuntimeError(msg)


@@ -1,8 +1,12 @@
"""Helpers for tracking file operations and computing diffs for CLI display."""
"""CLI 표시를 위해 파일 작업을 추적하고 diff를 계산하는 유틸리티입니다.
Helpers for tracking file operations and computing diffs for CLI display.
"""
from __future__ import annotations
import difflib
import logging
from dataclasses import dataclass, field
from pathlib import Path
from typing import TYPE_CHECKING, Any, Literal
@@ -13,9 +17,12 @@ from deepagents_cli.config import settings
if TYPE_CHECKING:
from deepagents.backends.protocol import BACKEND_TYPES
from langchain_core.messages import ToolMessage
FileOpStatus = Literal["pending", "success", "error"]
logger = logging.getLogger(__name__)
@dataclass
class ApprovalPreview:
@@ -249,6 +256,7 @@ class FileOpTracker:
def start_operation(
self, tool_name: str, args: dict[str, Any], tool_call_id: str | None
) -> None:
"""파일 도구 호출을 추적하기 위한 operation을 시작합니다."""
if tool_name not in {"read_file", "write_file", "edit_file"}:
return
path_str = str(args.get("file_path") or args.get("path") or "")
@@ -272,7 +280,8 @@ class FileOpTracker:
record.before_content = responses[0].content.decode("utf-8")
else:
record.before_content = ""
except Exception:
except Exception as err: # noqa: BLE001
logger.debug("Failed to download file content before operation.", exc_info=err)
record.before_content = ""
elif record.physical_path:
record.before_content = _safe_read(record.physical_path) or ""
@@ -303,12 +312,19 @@ class FileOpTracker:
record.before_content = responses[0].content.decode("utf-8")
else:
record.before_content = ""
except Exception:
except Exception as err: # noqa: BLE001
logger.debug(
"Failed to download file content before operation.",
exc_info=err,
)
record.before_content = ""
elif record.physical_path:
record.before_content = _safe_read(record.physical_path) or ""
def complete_with_message(self, tool_message: Any) -> FileOperationRecord | None:
def complete_with_message( # noqa: PLR0912, PLR0915
self, tool_message: ToolMessage
) -> FileOperationRecord | None:
"""도구 실행 결과(ToolMessage)를 사용해 operation을 완료 처리합니다."""
tool_call_id = getattr(tool_message, "tool_call_id", None)
record = self.active.get(tool_call_id)
if record is None:
@@ -430,7 +446,8 @@ class FileOpTracker:
record.after_content = None
else:
record.after_content = None
except Exception:
except Exception as err: # noqa: BLE001
logger.debug("Failed to download file content after operation.", exc_info=err)
record.after_content = None
else:
# Fallback: direct filesystem read when no backend provided
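이 파일이 `difflib`를 import하는 데서 짐작할 수 있듯, before/after 내용으로 CLI 표시용 diff를 만드는 핵심은 다음처럼 스케치할 수 있습니다(함수 이름과 시그니처는 가정).

```python
import difflib


def compute_unified_diff(before: str, after: str, path: str) -> str:
    """변경 전/후 텍스트로 unified diff 문자열을 만든다(예시)."""
    return "".join(
        difflib.unified_diff(
            before.splitlines(keepends=True),
            after.splitlines(keepends=True),
            fromfile=f"a/{path}",
            tofile=f"b/{path}",
        )
    )
```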


@@ -1,14 +1,19 @@
"""Utilities for handling image paste from clipboard."""
"""클립보드에서 이미지 붙여넣기(paste)를 처리하는 유틸리티입니다.
Utilities for handling image paste from clipboard.
"""
import base64
import contextlib
import io
import os
import shutil
import subprocess
import sys
import tempfile
from dataclasses import dataclass
from pathlib import Path
from PIL import Image
from PIL import Image, UnidentifiedImageError
@dataclass
@@ -54,33 +59,36 @@ def _get_macos_clipboard_image() -> ImageData | None:
ImageData if an image is found, None otherwise
"""
# Try pngpaste first (fast if installed)
try:
result = subprocess.run(
["pngpaste", "-"],
capture_output=True,
check=False,
timeout=2,
)
if result.returncode == 0 and result.stdout:
# Successfully got PNG data
try:
Image.open(io.BytesIO(result.stdout)) # Validate it's a real image
base64_data = base64.b64encode(result.stdout).decode("utf-8")
return ImageData(
base64_data=base64_data,
format="png", # 'pngpaste -' always outputs PNG
placeholder="[image]",
)
except Exception:
pass # Invalid image data
except (FileNotFoundError, subprocess.TimeoutExpired):
pass # pngpaste not installed or timed out
pngpaste_path = shutil.which("pngpaste")
if pngpaste_path:
try:
result = subprocess.run( # noqa: S603
[pngpaste_path, "-"],
capture_output=True,
check=False,
timeout=2,
)
if result.returncode == 0 and result.stdout:
# Successfully got PNG data
try:
Image.open(io.BytesIO(result.stdout)) # Validate it's a real image
except (UnidentifiedImageError, OSError):
pass # Invalid image data
else:
base64_data = base64.b64encode(result.stdout).decode("utf-8")
return ImageData(
base64_data=base64_data,
format="png", # 'pngpaste -' always outputs PNG
placeholder="[image]",
)
except subprocess.TimeoutExpired:
pass # pngpaste timed out
# Fallback to osascript with temp file (built-in but slower)
return _get_clipboard_via_osascript()
def _get_clipboard_via_osascript() -> ImageData | None:
def _get_clipboard_via_osascript() -> ImageData | None: # noqa: PLR0911
"""Get clipboard image via osascript using a temp file.
osascript outputs data in a special format that can't be captured as raw binary,
@@ -89,14 +97,18 @@ def _get_clipboard_via_osascript() -> ImageData | None:
Returns:
ImageData if an image is found, None otherwise
"""
osascript_path = shutil.which("osascript")
if not osascript_path:
return None
# Create a temp file for the image
fd, temp_path = tempfile.mkstemp(suffix=".png")
os.close(fd)
with tempfile.NamedTemporaryFile(suffix=".png", delete=False) as tmp:
temp_file = Path(tmp.name)
try:
# First check if clipboard has PNG data
check_result = subprocess.run(
["osascript", "-e", "clipboard info"],
check_result = subprocess.run( # noqa: S603
[osascript_path, "-e", "clipboard info"],
capture_output=True,
check=False,
timeout=2,
@@ -115,7 +127,7 @@ def _get_clipboard_via_osascript() -> ImageData | None:
if "pngf" in clipboard_info:
get_script = f"""
set pngData to the clipboard as «class PNGf»
set theFile to open for access POSIX file "{temp_path}" with write permission
set theFile to open for access POSIX file "{temp_file.as_posix()}" with write permission
write pngData to theFile
close access theFile
return "success"
@@ -123,14 +135,14 @@ def _get_clipboard_via_osascript() -> ImageData | None:
else:
get_script = f"""
set tiffData to the clipboard as TIFF picture
set theFile to open for access POSIX file "{temp_path}" with write permission
set theFile to open for access POSIX file "{temp_file.as_posix()}" with write permission
write tiffData to theFile
close access theFile
return "success"
"""
result = subprocess.run(
["osascript", "-e", get_script],
result = subprocess.run( # noqa: S603
[osascript_path, "-e", get_script],
capture_output=True,
check=False,
timeout=3,
@@ -141,12 +153,11 @@ def _get_clipboard_via_osascript() -> ImageData | None:
return None
# Check if file was created and has content
if not os.path.exists(temp_path) or os.path.getsize(temp_path) == 0:
if not temp_file.exists() or temp_file.stat().st_size == 0:
return None
# Read and validate the image
with open(temp_path, "rb") as f:
image_data = f.read()
image_data = temp_file.read_bytes()
try:
image = Image.open(io.BytesIO(image_data))
@@ -161,17 +172,15 @@ def _get_clipboard_via_osascript() -> ImageData | None:
format="png",
placeholder="[image]",
)
except Exception:
except (UnidentifiedImageError, OSError):
return None
except (subprocess.TimeoutExpired, OSError):
return None
finally:
# Clean up temp file
try:
os.unlink(temp_path)
except OSError:
pass
with contextlib.suppress(OSError):
temp_file.unlink()
def encode_image_to_base64(image_bytes: bytes) -> str:
@@ -203,7 +212,6 @@ def create_multimodal_content(text: str, images: list[ImageData]) -> list[dict]:
content_blocks.append({"type": "text", "text": text})
# Add image blocks
for image in images:
content_blocks.append(image.to_message_content())
content_blocks.extend([image.to_message_content() for image in images])
return content_blocks
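`create_multimodal_content`가 조립하는 content block의 형태를 단순화해 본 예시입니다. `to_message_content`가 돌려주는 dict 구조는 원본과 다를 수 있으며, 여기서는 OpenAI 스타일 `image_url` 블록을 가정했습니다.

```python
import base64
from dataclasses import dataclass


@dataclass
class ImageData:
    base64_data: str
    format: str = "png"
    placeholder: str = "[image]"

    def to_message_content(self) -> dict:
        # 가정: OpenAI 스타일 image_url 블록. 실제 구조는 다를 수 있다.
        url = f"data:image/{self.format};base64,{self.base64_data}"
        return {"type": "image_url", "image_url": {"url": url}}


def create_multimodal_content(text: str, images: list[ImageData]) -> list[dict]:
    """텍스트 블록 뒤에 이미지 블록들을 이어 붙인다."""
    content_blocks: list[dict] = [{"type": "text", "text": text}]
    content_blocks.extend(image.to_message_content() for image in images)
    return content_blocks
```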


@@ -1,11 +1,16 @@
"""Input handling, completers, and prompt session for the CLI."""
"""CLI 입력 처리(완성/프롬프트 세션 포함)를 담당합니다.
Input handling, completers, and prompt session for the CLI.
"""
from __future__ import annotations
import asyncio
import os
import re
import time
from collections.abc import Callable
from pathlib import Path
from typing import TYPE_CHECKING
from prompt_toolkit import PromptSession
from prompt_toolkit.completion import (
@@ -22,6 +27,12 @@ from prompt_toolkit.key_binding import KeyBindings
from .config import COLORS, COMMANDS, SessionState, console
from .image_utils import ImageData, get_clipboard_image
if TYPE_CHECKING:
from collections.abc import Callable, Iterator
from prompt_toolkit.completion.base import CompleteEvent
from prompt_toolkit.key_binding.key_processor import KeyPressEvent
# Regex patterns for context-aware completion
AT_MENTION_RE = re.compile(r"@(?P<path>(?:[^\s@]|(?<=\\)\s)*)$")
SLASH_COMMAND_RE = re.compile(r"^/(?P<command>[a-z]*)$")
@@ -33,6 +44,7 @@ class ImageTracker:
"""Track pasted images in the current conversation."""
def __init__(self) -> None:
"""이미지 트래커를 초기화합니다."""
self.images: list[ImageData] = []
self.next_id = 1
@@ -65,13 +77,16 @@ class FilePathCompleter(Completer):
"""Activate filesystem completion only when cursor is after '@'."""
def __init__(self) -> None:
"""파일 경로 자동완성 컴플리터를 초기화합니다."""
self.path_completer = PathCompleter(
expanduser=True,
min_input_len=0,
only_directories=False,
)
def get_completions(self, document, complete_event):
def get_completions(
self, document: Document, complete_event: CompleteEvent
) -> Iterator[Completion]:
"""Get file path completions when @ is detected."""
text = document.text_before_cursor
@@ -112,7 +127,9 @@ class FilePathCompleter(Completer):
class CommandCompleter(Completer):
"""Activate command completion only when line starts with '/'."""
def get_completions(self, document, _complete_event):
def get_completions(
self, document: Document, _complete_event: CompleteEvent
) -> Iterator[Completion]:
"""Get command completions when / is at the start."""
text = document.text_before_cursor
@@ -155,8 +172,8 @@ def parse_file_mentions(text: str) -> tuple[str, list[Path]]:
files.append(path)
else:
console.print(f"[yellow]Warning: File not found: {match}[/yellow]")
except Exception as e:
console.print(f"[yellow]Warning: Invalid path {match}: {e}[/yellow]")
except OSError as err:
console.print(f"[yellow]Warning: Invalid path {match}: {err}[/yellow]")
return text, files
@@ -221,7 +238,7 @@ def get_bottom_toolbar(
return toolbar
def create_prompt_session(
def create_prompt_session( # noqa: PLR0915
_assistant_id: str, session_state: SessionState, image_tracker: ImageTracker | None = None
) -> PromptSession:
"""Create a configured PromptSession with all features."""
@@ -233,7 +250,7 @@ def create_prompt_session(
kb = KeyBindings()
@kb.add("c-c")
def _(event) -> None:
def _(event: KeyPressEvent) -> None:
"""Require double Ctrl+C within a short window to exit."""
app = event.app
now = time.monotonic()
@@ -272,7 +289,7 @@ def create_prompt_session(
# Bind Ctrl+T to toggle auto-approve
@kb.add("c-t")
def _(event) -> None:
def _(event: KeyPressEvent) -> None:
"""Toggle auto-approve mode."""
session_state.toggle_auto_approve()
# Force UI refresh to update toolbar
@@ -282,7 +299,9 @@ def create_prompt_session(
if image_tracker:
from prompt_toolkit.keys import Keys
def _handle_paste_with_image_check(event, pasted_text: str = "") -> None:
def _handle_paste_with_image_check(
event: KeyPressEvent, pasted_text: str = ""
) -> None:
"""Check clipboard for image, otherwise insert pasted text."""
# Try to get an image from clipboard
clipboard_image = get_clipboard_image()
@@ -302,20 +321,20 @@ def create_prompt_session(
event.current_buffer.insert_text(clipboard_data.text)
@kb.add(Keys.BracketedPaste)
def _(event) -> None:
def _(event: KeyPressEvent) -> None:
"""Handle bracketed paste (Cmd+V on macOS) - check for images first."""
# Bracketed paste provides the pasted text in event.data
pasted_text = event.data if hasattr(event, "data") else ""
_handle_paste_with_image_check(event, pasted_text)
@kb.add("c-v")
def _(event) -> None:
def _(event: KeyPressEvent) -> None:
"""Handle Ctrl+V paste - check for images first."""
_handle_paste_with_image_check(event)
# Bind regular Enter to submit (intuitive behavior)
@kb.add("enter")
def _(event) -> None:
def _(event: KeyPressEvent) -> None:
"""Enter submits the input, unless completion menu is active."""
buffer = event.current_buffer
@@ -344,19 +363,19 @@ def create_prompt_session(
# Alt+Enter for newlines (press ESC then Enter, or Option+Enter on Mac)
@kb.add("escape", "enter")
def _(event) -> None:
def _(event: KeyPressEvent) -> None:
"""Alt+Enter inserts a newline for multi-line input."""
event.current_buffer.insert_text("\n")
# Ctrl+E to open in external editor
@kb.add("c-e")
def _(event) -> None:
def _(event: KeyPressEvent) -> None:
"""Open the current input in an external editor (nano by default)."""
event.current_buffer.open_in_editor()
# Backspace handler to retrigger completions and delete image tags as units
@kb.add("backspace")
def _(event) -> None:
def _(event: KeyPressEvent) -> None:
"""Handle backspace: delete image tags as single unit, retrigger completion."""
buffer = event.current_buffer
text_before = buffer.document.text_before_cursor
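위 diff의 `c-c` 바인딩("짧은 시간 안에 Ctrl+C를 두 번 눌러야 종료")을 `time.monotonic` 기반으로 분리해 본 예시입니다. 클래스 이름과 기본 window 값은 설명을 위한 가정입니다.

```python
import time
from collections.abc import Callable


class DoublePressGuard:
    """짧은 window 안에 같은 키가 두 번 눌렸는지 판정한다(예시)."""

    def __init__(self, window: float = 2.0, clock: Callable[[], float] = time.monotonic) -> None:
        self._window = window  # 두 번째 입력을 기다리는 시간(초) - 가정값
        self._clock = clock
        self._last: float | None = None

    def press(self) -> bool:
        """True면 '두 번째 입력'(실제 종료), False면 힌트만 표시."""
        now = self._clock()
        if self._last is not None and now - self._last <= self._window:
            self._last = None
            return True
        self._last = now
        return False
```

`clock`을 주입받도록 한 것은 테스트에서 가짜 시계를 쓰기 위한 선택으로, 원본 코드는 `time.monotonic()`을 직접 호출합니다.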


@@ -1 +1,4 @@
"""Sandbox integrations for DeepAgents CLI."""
"""DeepAgents CLI의 샌드박스(integrations) 모듈입니다.
Sandbox integrations for DeepAgents CLI.
"""


@@ -1,4 +1,7 @@
"""Daytona sandbox backend implementation."""
"""Daytona 샌드박스 백엔드 구현입니다.
Daytona sandbox backend implementation.
"""
from __future__ import annotations
@@ -70,8 +73,9 @@ class DaytonaBackend(BaseSandbox):
List of FileDownloadResponse objects, one per input path.
Response order matches input order.
TODO: Map Daytona API error strings to standardized FileOperationError codes.
Currently only implements happy path.
Note: Daytona API 에러 문자열을 표준화된 FileOperationError 코드로 매핑하는 작업은
추후 보완합니다.
현재는 정상(happy path) 위주로만 구현되어 있습니다.
"""
from daytona import FileDownloadRequest
@@ -80,12 +84,12 @@ class DaytonaBackend(BaseSandbox):
daytona_responses = self._sandbox.fs.download_files(download_requests)
# Convert Daytona results to our response format
# TODO: Map resp.error to standardized error codes when available
# NOTE: resp.error를 표준화된 error code로 매핑하는 작업은 추후 보완합니다.
return [
FileDownloadResponse(
path=resp.source,
content=resp.result,
error=None, # TODO: map resp.error to FileOperationError
error=None, # NOTE: resp.error -> FileOperationError 매핑은 추후 보완
)
for resp in daytona_responses
]
@@ -104,14 +108,18 @@ class DaytonaBackend(BaseSandbox):
List of FileUploadResponse objects, one per input file.
Response order matches input order.
TODO: Map Daytona API error strings to standardized FileOperationError codes.
Currently only implements happy path.
Note: Daytona API 에러 문자열을 표준화된 FileOperationError 코드로 매핑하는 작업은
추후 보완합니다.
현재는 정상(happy path) 위주로만 구현되어 있습니다.
"""
from daytona import FileUpload
# Create batch upload request using Daytona's native batch API
upload_requests = [FileUpload(source=content, destination=path) for path, content in files]
upload_requests = [
FileUpload(source=content, destination=path) for path, content in files
]
self._sandbox.fs.upload_files(upload_requests)
# TODO: Check if Daytona returns error info and map to FileOperationError codes
# NOTE: Daytona가 error 정보를 제공하는 경우, FileOperationError 코드로 매핑하는 작업은
# 추후 보완합니다.
return [FileUploadResponse(path=path, error=None) for path, _ in files]


@@ -1,4 +1,7 @@
"""Modal sandbox backend implementation."""
"""Modal 샌드박스 백엔드 구현입니다.
Modal sandbox backend implementation.
"""
from __future__ import annotations


@@ -1,19 +1,21 @@
"""BackendProtocol implementation for Runloop."""
"""Runloop용 `BackendProtocol` 구현입니다.
BackendProtocol implementation for Runloop.
"""
try:
import runloop_api_client
from runloop_api_client import Runloop
except ImportError:
msg = (
"runloop_api_client package is required for RunloopBackend. "
"Install with `pip install runloop_api_client`."
)
raise ImportError(msg)
raise ImportError(msg) from None
import os
from deepagents.backends.protocol import ExecuteResponse, FileDownloadResponse, FileUploadResponse
from deepagents.backends.sandbox import BaseSandbox
from runloop_api_client import Runloop
class RunloopBackend(BaseSandbox):


@@ -1,5 +1,9 @@
"""Sandbox lifecycle management with context managers."""
"""컨텍스트 매니저 기반 샌드박스 라이프사이클 관리 유틸리티입니다.
Sandbox lifecycle management with context managers.
"""
import logging
import os
import shlex
import string
@@ -12,6 +16,8 @@ from deepagents.backends.protocol import SandboxBackendProtocol
from deepagents_cli.config import console
logger = logging.getLogger(__name__)
def _run_sandbox_setup(backend: SandboxBackendProtocol, setup_script_path: str) -> None:
"""Run user's setup script in sandbox with env var expansion.
@@ -93,8 +99,8 @@ def create_modal_sandbox(
process.wait()
if process.returncode == 0:
break
except Exception:
pass
except Exception as err: # noqa: BLE001
logger.debug("Modal sandbox not ready yet.", exc_info=err)
time.sleep(2)
else:
# Timeout - cleanup and fail
@@ -116,8 +122,8 @@ def create_modal_sandbox(
console.print(f"[dim]Terminating Modal sandbox {sandbox_id}...[/dim]")
sandbox.terminate()
console.print(f"[dim]✓ Modal sandbox {sandbox_id} terminated[/dim]")
except Exception as e:
console.print(f"[yellow]⚠ Cleanup failed: {e}[/yellow]")
except Exception as err: # noqa: BLE001
console.print(f"[yellow]⚠ Cleanup failed: {err}[/yellow]")
@contextmanager
@@ -188,8 +194,8 @@ def create_runloop_sandbox(
console.print(f"[dim]Shutting down Runloop devbox {sandbox_id}...[/dim]")
client.devboxes.shutdown(id=devbox.id)
console.print(f"[dim]✓ Runloop devbox {sandbox_id} terminated[/dim]")
except Exception as e:
console.print(f"[yellow]⚠ Cleanup failed: {e}[/yellow]")
except Exception as err: # noqa: BLE001
console.print(f"[yellow]⚠ Cleanup failed: {err}[/yellow]")
@contextmanager
@@ -238,8 +244,8 @@ def create_daytona_sandbox(
result = sandbox.process.exec("echo ready", timeout=5)
if result.exit_code == 0:
break
except Exception:
pass
except Exception as err: # noqa: BLE001
logger.debug("Daytona sandbox not ready yet.", exc_info=err)
time.sleep(2)
else:
try:
@@ -262,8 +268,8 @@ def create_daytona_sandbox(
try:
sandbox.delete()
console.print(f"[dim]✓ Daytona sandbox {sandbox_id} terminated[/dim]")
except Exception as e:
console.print(f"[yellow]⚠ Cleanup failed: {e}[/yellow]")
except Exception as err: # noqa: BLE001
console.print(f"[yellow]⚠ Cleanup failed: {err}[/yellow]")
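Modal/Daytona 샌드박스 준비 대기에 쓰인 for/else 폴링 패턴만 분리해 본 스케치입니다. 재시도 횟수와 간격 값은 가정이며, `sleep`을 주입받게 한 것도 테스트 편의를 위한 변형입니다.

```python
import time
from collections.abc import Callable


def wait_until_ready(
    check: Callable[[], bool],
    *,
    retries: int = 30,
    interval: float = 2.0,
    sleep: Callable[[float], None] = time.sleep,
) -> bool:
    """check()가 True를 반환할 때까지 폴링한다. 끝까지 실패하면 False."""
    for _ in range(retries):
        try:
            if check():
                break
        except Exception:  # 샌드박스가 아직 준비되지 않은 경우
            pass
        sleep(interval)
    else:
        return False  # 타임아웃: 호출 측에서 cleanup 후 실패 처리
    return True
```

원본처럼 for 루프의 `else` 절을 타임아웃 분기로 쓰면, "끝까지 break되지 않았다 = 준비 실패"를 플래그 변수 없이 표현할 수 있습니다.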
_PROVIDER_TO_WORKING_DIR = {


@@ -1,4 +1,7 @@
"""Main entry point and CLI loop for deepagents."""
"""deepagents CLI의 메인 엔트리포인트 및 루프입니다.
Main entry point and CLI loop for deepagents.
"""
# ruff: noqa: T201
import argparse
@@ -227,8 +230,8 @@ async def run_textual_cli_async(
cwd=Path.cwd(),
thread_id=thread_id,
)
except Exception as e:
console.print(f"[red]❌ Failed to create agent: {e}[/red]")
except Exception as err: # noqa: BLE001
console.print(f"[red]❌ Failed to create agent: {err}[/red]")
sys.exit(1)
finally:
# Clean up sandbox if we created one
@@ -237,7 +240,7 @@ async def run_textual_cli_async(
sandbox_cm.__exit__(None, None, None)
def cli_main() -> None:
def cli_main() -> None: # noqa: PLR0912, PLR0915
"""Entry point for console script."""
# Fix for gRPC fork issue on macOS
# https://github.com/grpc/grpc/issues/37642


@@ -1,10 +1,10 @@
"""Utilities for project root detection and project-specific configuration."""
"""프로젝트 루트 탐지 및 프로젝트별 설정을 위한 유틸리티입니다."""
from pathlib import Path
def find_project_root(start_path: Path | None = None) -> Path | None:
"""Find the project root by looking for .git directory.
"""`.git` 디렉토리를 기준으로 프로젝트 루트를 찾습니다.
Walks up the directory tree from start_path (or cwd) looking for a .git
directory, which indicates the project root.
@@ -17,7 +17,7 @@ def find_project_root(start_path: Path | None = None) -> Path | None:
"""
current = Path(start_path or Path.cwd()).resolve()
# Walk up the directory tree
# 디렉토리 트리를 위로 올라가며 탐색
for parent in [current, *list(current.parents)]:
git_dir = parent / ".git"
if git_dir.exists():
@@ -27,7 +27,7 @@ def find_project_root(start_path: Path | None = None) -> Path | None:
def find_project_agent_md(project_root: Path) -> list[Path]:
"""Find project-specific agent.md file(s).
"""프로젝트 전용 `agent.md` 파일을 찾습니다(복수 가능).
Checks two locations and returns ALL that exist:
1. project_root/.deepagents/agent.md
@@ -43,12 +43,12 @@ def find_project_agent_md(project_root: Path) -> list[Path]:
"""
paths = []
# Check .deepagents/agent.md (preferred)
# .deepagents/agent.md 확인(우선)
deepagents_md = project_root / ".deepagents" / "agent.md"
if deepagents_md.exists():
paths.append(deepagents_md)
# Check root agent.md (fallback, but also include if both exist)
# 루트 agent.md 확인(폴백이지만 둘 다 있으면 함께 포함)
root_md = project_root / "agent.md"
if root_md.exists():
paths.append(root_md)


@@ -1,4 +1,7 @@
"""Thread management using LangGraph's built-in checkpoint persistence."""
"""LangGraph 체크포인트 저장 기능을 사용한 스레드/세션 관리입니다.
Thread management using LangGraph's built-in checkpoint persistence.
"""
import uuid
from collections.abc import AsyncIterator


@@ -1,4 +1,7 @@
"""Simplified middleware that exposes a basic shell tool to agents."""
"""에이전트에 기본 셸 도구를 노출하는 간단한 미들웨어입니다.
Simplified middleware that exposes a basic shell tool to agents.
"""
from __future__ import annotations
@@ -89,7 +92,7 @@ class ShellMiddleware(AgentMiddleware[AgentState, Any]):
raise ToolException(msg)
try:
result = subprocess.run(
result = subprocess.run( # noqa: S602
command,
check=False,
shell=True,
@@ -106,8 +109,7 @@ class ShellMiddleware(AgentMiddleware[AgentState, Any]):
output_parts.append(result.stdout)
if result.stderr:
stderr_lines = result.stderr.strip().split("\n")
for line in stderr_lines:
output_parts.append(f"[stderr] {line}")
output_parts.extend([f"[stderr] {line}" for line in stderr_lines])
output = "\n".join(output_parts) if output_parts else "<no output>"


@@ -1,4 +1,4 @@
"""Skills module for deepagents CLI.
"""deepagents CLI에서 스킬(skills) 관리를 위한 모듈입니다.
Public API:
- execute_skills_command: Execute skills subcommands (list/create/info)


@@ -1,4 +1,4 @@
"""CLI commands for skill management.
"""CLI에서 스킬(skills)을 관리하기 위한 커맨드들입니다.
These commands are registered with the CLI via cli.py:
- deepagents skills list --agent <agent> [--project]
@@ -9,7 +9,6 @@ These commands are registered with the CLI via cli.py:
import argparse
import re
from pathlib import Path
from typing import Any
from deepagents_cli.config import COLORS, Settings, console
from deepagents_cli.skills.load import list_skills
@@ -71,23 +70,15 @@ def _validate_skill_path(skill_dir: Path, base_dir: Path) -> tuple[bool, str]:
# Resolve both paths to their canonical form
resolved_skill = skill_dir.resolve()
resolved_base = base_dir.resolve()
# Check if skill_dir is within base_dir
# Use is_relative_to if available (Python 3.9+), otherwise use string comparison
if hasattr(resolved_skill, "is_relative_to"):
if not resolved_skill.is_relative_to(resolved_base):
return False, f"Skill directory must be within {base_dir}"
else:
# Fallback for older Python versions
try:
resolved_skill.relative_to(resolved_base)
except ValueError:
return False, f"Skill directory must be within {base_dir}"
return True, ""
except (OSError, RuntimeError) as e:
return False, f"Invalid path: {e}"
# Check if skill_dir is within base_dir (Python 3.9+)
if not resolved_skill.is_relative_to(resolved_base):
return False, f"Skill directory must be within {base_dir}"
return True, ""
def _list(agent: str, *, project: bool = False) -> None:
"""List all available skills for the specified agent.
@@ -114,11 +105,17 @@ def _list(agent: str, *, project: bool = False) -> None:
if not project_skills_dir.exists() or not any(project_skills_dir.iterdir()):
console.print("[yellow]No project skills found.[/yellow]")
console.print(
f"[dim]Project skills will be created in {project_skills_dir}/ when you add them.[/dim]",
(
f"[dim]Project skills will be created in {project_skills_dir}/ "
"when you add them.[/dim]"
),
style=COLORS["dim"],
)
console.print(
"\n[dim]Create a project skill:\n deepagents skills create my-skill --project[/dim]",
(
"\n[dim]Create a project skill:\n"
" deepagents skills create my-skill --project[/dim]"
),
style=COLORS["dim"],
)
return
@@ -127,12 +124,18 @@ def _list(agent: str, *, project: bool = False) -> None:
console.print("\n[bold]Project Skills:[/bold]\n", style=COLORS["primary"])
else:
# Load both user and project skills
skills = list_skills(user_skills_dir=user_skills_dir, project_skills_dir=project_skills_dir)
skills = list_skills(
user_skills_dir=user_skills_dir,
project_skills_dir=project_skills_dir,
)
if not skills:
console.print("[yellow]No skills found.[/yellow]")
console.print(
"[dim]Skills will be created in ~/.deepagents/agent/skills/ when you add them.[/dim]",
(
"[dim]Skills will be created in ~/.deepagents/agent/skills/ "
"when you add them.[/dim]"
),
style=COLORS["dim"],
)
console.print(
@@ -170,7 +173,7 @@ def _list(agent: str, *, project: bool = False) -> None:
console.print()
def _create(skill_name: str, agent: str, project: bool = False) -> None:
def _create(skill_name: str, agent: str, *, project: bool = False) -> None:
"""Create a new skill with a template SKILL.md file.
Args:
@@ -325,7 +328,8 @@ def _info(skill_name: str, *, agent: str = "agent", project: bool = False) -> No
Args:
skill_name: Name of the skill to show info for.
agent: Agent identifier for skills (default: agent).
project: If True, only search in project skills. If False, search in both user and project skills.
project: If True, only search in project skills.
If False, search in both user and project skills.
"""
settings = Settings.from_environment()
user_skills_dir = settings.get_user_skills_dir(agent)
@@ -359,7 +363,10 @@ def _info(skill_name: str, *, agent: str = "agent", project: bool = False) -> No
source_color = "green" if skill["source"] == "project" else "cyan"
console.print(
f"\n[bold]Skill: {skill['name']}[/bold] [bold {source_color}]({source_label})[/bold {source_color}]\n",
(
f"\n[bold]Skill: {skill['name']}[/bold] "
f"[bold {source_color}]({source_label})[/bold {source_color}]\n"
),
style=COLORS["primary"],
)
console.print(f"[bold]Description:[/bold] {skill['description']}\n", style=COLORS["dim"])
@@ -382,7 +389,7 @@ def _info(skill_name: str, *, agent: str = "agent", project: bool = False) -> No
def setup_skills_parser(
subparsers: Any,
subparsers: argparse._SubParsersAction[argparse.ArgumentParser],
) -> argparse.ArgumentParser:
"""Setup the skills subcommand parser with all its subcommands."""
skills_parser = subparsers.add_parser(
@@ -394,7 +401,9 @@ def setup_skills_parser(
# Skills list
list_parser = skills_subparsers.add_parser(
"list", help="List all available skills", description="List all available skills"
"list",
help="List all available skills",
description="List all available skills",
)
list_parser.add_argument(
"--agent",
@@ -457,7 +466,10 @@ def execute_skills_command(args: argparse.Namespace) -> None:
if not is_valid:
console.print(f"[bold red]Error:[/bold red] Invalid agent name: {error_msg}")
console.print(
"[dim]Agent names must only contain letters, numbers, hyphens, and underscores.[/dim]",
(
"[dim]Agent names must only contain letters, numbers, hyphens, "
"and underscores.[/dim]"
),
style=COLORS["dim"],
)
return
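`_validate_skill_path`가 Python 3.9+의 `is_relative_to`로 단순화된 부분(경로 탈출 방지)을 독립 함수로 옮겨 본 스케치입니다(함수 이름은 가정).

```python
from pathlib import Path


def is_within(base_dir: Path, candidate: Path) -> bool:
    """candidate가 base_dir 내부 경로인지 확인한다(../ 탈출 방지 예시)."""
    try:
        resolved_candidate = candidate.resolve()
        resolved_base = base_dir.resolve()
    except (OSError, RuntimeError):
        return False
    return resolved_candidate.is_relative_to(resolved_base)
```

두 경로를 모두 `resolve()`로 정규화한 뒤 비교해야 `..`이나 심볼릭 링크를 통한 탈출을 잡아낼 수 있습니다.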


@@ -1,4 +1,4 @@
"""Skill loader for CLI commands.
"""CLI 커맨드용 스킬 로더(파일시스템 기반)입니다.
This module provides filesystem-based skill loading for CLI operations (list, create, info).
It wraps the prebuilt middleware functionality from deepagents.middleware.skills and adapts
@@ -9,12 +9,15 @@ For middleware usage within agents, use deepagents.middleware.skills.SkillsMiddl
from __future__ import annotations
from pathlib import Path
from typing import TYPE_CHECKING
from deepagents.backends.filesystem import FilesystemBackend
from deepagents.middleware.skills import SkillMetadata
from deepagents.middleware.skills import _list_skills as list_skills_from_backend
if TYPE_CHECKING:
from pathlib import Path
class ExtendedSkillMetadata(SkillMetadata):
"""Extended skill metadata for CLI display, adds source tracking."""


@@ -1,4 +1,7 @@
"""Textual UI adapter for agent execution."""
"""에이전트 실행을 Textual UI에 연결하는 어댑터입니다.
Textual UI adapter for agent execution.
"""
# ruff: noqa: PLR0912, PLR0915, ANN401, PLR2004, BLE001, TRY203
# This module has complex streaming logic ported from execution.py
@@ -498,7 +501,8 @@ async def execute_task_textual(
elif isinstance(decision, dict) and decision.get("type") == "reject":
if tool_msg:
tool_msg.set_rejected()
# Only remove from tracking on reject (approved tools need output update)
# Only remove from tracking on reject
# (approved tools need output update).
if tool_msg_key and tool_msg_key in adapter._current_tool_messages:
del adapter._current_tool_messages[tool_msg_key]

View File

@@ -1,4 +1,10 @@
"""Custom tools for the CLI agent."""
"""CLI 에이전트용 커스텀 도구 모음입니다.
Custom tools for the CLI agent.
"""
# NOTE(KR): 이 파일의 `http_request` / `web_search` / `fetch_url` 함수 docstring은
# LangChain tool description으로 사용될 수 있으므로 번역/수정하지 마세요(영어 유지).
from typing import Any, Literal
@@ -8,6 +14,8 @@ from tavily import TavilyClient
from deepagents_cli.config import settings
_HTTP_ERROR_STATUS_CODE_MIN = 400
# Initialize Tavily client if API key is available
tavily_client = TavilyClient(api_key=settings.tavily_api_key) if settings.has_tavily else None
@@ -34,27 +42,31 @@ def http_request(
Dictionary with response data including status, headers, and content
"""
try:
kwargs = {"url": url, "method": method.upper(), "timeout": timeout}
if headers:
kwargs["headers"] = headers
if params:
kwargs["params"] = params
if data:
json_data: dict[str, Any] | None = None
body_data: str | None = None
if data is not None:
if isinstance(data, dict):
kwargs["json"] = data
json_data = data
else:
kwargs["data"] = data
body_data = data
response = requests.request(**kwargs)
response = requests.request(
method=method.upper(),
url=url,
headers=headers,
params=params,
data=body_data,
json=json_data,
timeout=timeout,
)
try:
content = response.json()
except:
except ValueError:
content = response.text
return {
"success": response.status_code < 400,
"success": response.status_code < _HTTP_ERROR_STATUS_CODE_MIN,
"status_code": response.status_code,
"headers": dict(response.headers),
"content": content,
@@ -77,7 +89,7 @@ def http_request(
"content": f"Request error: {e!s}",
"url": url,
}
except Exception as e:
except Exception as e: # noqa: BLE001
return {
"success": False,
"status_code": 0,
@@ -89,10 +101,11 @@ def http_request(
def web_search(
query: str,
*,
max_results: int = 5,
topic: Literal["general", "news", "finance"] = "general",
include_raw_content: bool = False,
):
) -> dict[str, Any]:
"""Search the web using Tavily for current information and documentation.
This tool searches the web and returns relevant results. After receiving results,
@@ -122,7 +135,9 @@ def web_search(
"""
if tavily_client is None:
return {
"error": "Tavily API key not configured. Please set TAVILY_API_KEY environment variable.",
"error": (
"Tavily API key not configured. Please set TAVILY_API_KEY environment variable."
),
"query": query,
}
@@ -133,7 +148,7 @@ def web_search(
include_raw_content=include_raw_content,
topic=topic,
)
except Exception as e:
except Exception as e: # noqa: BLE001
return {"error": f"Web search error: {e!s}", "query": query}
@@ -174,10 +189,19 @@ def fetch_url(url: str, timeout: int = 30) -> dict[str, Any]:
markdown_content = markdownify(response.text)
return {
"success": True,
"url": str(response.url),
"markdown_content": markdown_content,
"status_code": response.status_code,
"content_length": len(markdown_content),
}
except Exception as e:
return {"error": f"Fetch URL error: {e!s}", "url": url}
except requests.exceptions.Timeout:
return {
"success": False,
"error": f"Fetch URL timed out after {timeout} seconds",
"url": url,
}
except requests.exceptions.RequestException as e:
return {"success": False, "error": f"Fetch URL request error: {e!s}", "url": url}
except Exception as e: # noqa: BLE001
return {"success": False, "error": f"Fetch URL error: {e!s}", "url": url}

View File
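The `http_request`/`fetch_url` hunks above narrow a bare `except:` to `except ValueError`, split the error cases per exception type, and replace the magic `400` with a named threshold. A minimal sketch of the resulting result shape, using a hypothetical `summarize_response` helper (not part of the diff; it works on a raw body string instead of a live `requests` response):

```python
import json

_HTTP_ERROR_STATUS_CODE_MIN = 400  # same threshold constant the diff introduces


def summarize_response(status_code: int, body: str) -> dict:
    """Hypothetical helper mirroring http_request's new result shape."""
    try:
        # requests' response.json() raises ValueError on malformed JSON,
        # which is exactly what the narrowed `except ValueError` catches.
        content = json.loads(body)
    except ValueError:
        content = body  # fall back to the raw text
    return {
        "success": status_code < _HTTP_ERROR_STATUS_CODE_MIN,
        "status_code": status_code,
        "content": content,
    }
```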

@@ -1,8 +1,11 @@
"""UI rendering and display utilities for the CLI."""
"""CLI UI 렌더링/표시 관련 유틸리티입니다.
UI rendering and display utilities for the CLI.
"""
import json
from contextlib import suppress
from pathlib import Path
from typing import Any
from .config import COLORS, DEEP_AGENTS_ASCII, MAX_ARG_LENGTH, console
@@ -14,7 +17,7 @@ def truncate_value(value: str, max_length: int = MAX_ARG_LENGTH) -> str:
return value
def format_tool_display(tool_name: str, tool_args: dict) -> str:
def format_tool_display(tool_name: str, tool_args: dict) -> str: # noqa: PLR0911, PLR0912, PLR0915
"""Format tool calls for display with tool-specific smart formatting.
Shows the most relevant information for each tool type rather than all arguments.
@@ -34,32 +37,32 @@ def format_tool_display(tool_name: str, tool_args: dict) -> str:
def abbreviate_path(path_str: str, max_length: int = 60) -> str:
"""Abbreviate a file path intelligently - show basename or relative path."""
path = Path(path_str)
# If it's just a filename (no directory parts), return as-is
if len(path.parts) == 1:
return path_str
# Try to get relative path from current working directory
try:
path = Path(path_str)
cwd = Path.cwd()
except OSError:
cwd = None
# If it's just a filename (no directory parts), return as-is
if len(path.parts) == 1:
return path_str
# Try to get relative path from current working directory
try:
rel_path = path.relative_to(Path.cwd())
if cwd is not None:
with suppress(ValueError):
rel_path = path.relative_to(cwd)
rel_str = str(rel_path)
# Use relative if it's shorter and not too long
if len(rel_str) < len(path_str) and len(rel_str) <= max_length:
return rel_str
except (ValueError, Exception):
pass
# If absolute path is reasonable length, use it
if len(path_str) <= max_length:
return path_str
# If absolute path is reasonable length, use it
if len(path_str) <= max_length:
return path_str
# Otherwise, just show basename (filename only)
return path.name
except Exception:
# Fallback to original string if any error
return truncate_value(path_str, max_length)
# Otherwise, just show basename (filename only)
return path.name
# Tool-specific formatting - show the most important argument(s)
if tool_name in ("read_file", "write_file", "edit_file"):
@@ -144,7 +147,7 @@ def format_tool_display(tool_name: str, tool_args: dict) -> str:
return f"{tool_name}({args_str})"
def format_tool_message_content(content: Any) -> str:
def format_tool_message_content(content: object) -> str:
"""Convert ToolMessage content into a printable string."""
if content is None:
return ""
@@ -156,7 +159,7 @@ def format_tool_message_content(content: Any) -> str:
else:
try:
parts.append(json.dumps(item))
except Exception:
except (TypeError, ValueError):
parts.append(str(item))
return "\n".join(parts)
return str(content)

View File
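The `abbreviate_path` rewrite above swaps a broad `try/except` for `contextlib.suppress(ValueError)` around `Path.relative_to`. A small sketch of that pattern with a hypothetical `relativize` helper (POSIX-style path strings assumed):

```python
from contextlib import suppress
from pathlib import Path


def relativize(path_str: str, base: Path) -> str:
    """Illustrative: prefer a relative path when it is shorter."""
    path = Path(path_str)
    with suppress(ValueError):
        # relative_to raises ValueError when path is not under base;
        # suppress() skips the block instead of catching broadly.
        rel = path.relative_to(base)
        if len(str(rel)) < len(path_str):
            return str(rel)
    return path_str
```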

@@ -1,4 +1,7 @@
"""Textual widgets for deepagents-cli."""
"""deepagents-cli에서 사용하는 Textual 위젯 모음입니다.
Textual widgets for deepagents-cli.
"""
from __future__ import annotations

View File

@@ -1,12 +1,12 @@
"""Approval widget for HITL - using standard Textual patterns."""
"""HITL(승인)용 Approval 위젯입니다(Textual 표준 패턴 기반).
Approval widget for HITL - using standard Textual patterns.
"""
from __future__ import annotations
import asyncio
from typing import Any, ClassVar
from typing import TYPE_CHECKING, Any, ClassVar
from textual import events
from textual.app import ComposeResult
from textual.binding import Binding, BindingType
from textual.containers import Container, Vertical, VerticalScroll
from textual.message import Message
@@ -14,6 +14,12 @@ from textual.widgets import Static
from deepagents_cli.widgets.tool_renderers import get_renderer
if TYPE_CHECKING:
import asyncio
from textual import events
from textual.app import ComposeResult
class ApprovalMenu(Container):
"""Approval menu using standard Textual patterns.
@@ -50,6 +56,7 @@ class ApprovalMenu(Container):
"""Message sent when user makes a decision."""
def __init__(self, decision: dict[str, str]) -> None:
"""Create the message with the selected decision payload."""
super().__init__()
self.decision = decision
@@ -60,6 +67,7 @@ class ApprovalMenu(Container):
id: str | None = None, # noqa: A002
**kwargs: Any,
) -> None:
"""Create the approval menu widget for a single action request."""
super().__init__(id=id or "approval-menu", classes="approval-menu", **kwargs)
self._action_request = action_request
self._assistant_id = assistant_id
@@ -90,7 +98,7 @@ class ApprovalMenu(Container):
# Options container FIRST - always visible at top
with Container(classes="approval-options-container"):
# Options - create 3 Static widgets
for i in range(3):
for _ in range(3):
widget = Static("", classes="approval-option")
self._option_widgets.append(widget)
yield widget
@@ -138,7 +146,7 @@ class ApprovalMenu(Container):
]
for i, (text, widget) in enumerate(zip(options, self._option_widgets, strict=True)):
cursor = "❯ " if i == self._selected else "  "
cursor = "> " if i == self._selected else " "
widget.update(f"{cursor}{text}")
# Update classes
@@ -194,6 +202,6 @@ class ApprovalMenu(Container):
# Post message
self.post_message(self.Decided(decision))
def on_blur(self, event: events.Blur) -> None:
def on_blur(self, _event: events.Blur) -> None:
"""Re-focus on blur to keep focus trapped."""
self.call_after_refresh(self.focus)

View File
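The approval-widget hunk moves `asyncio`/`events`/`ComposeResult` imports under `TYPE_CHECKING`, a pattern this commit applies repeatedly. A minimal sketch of why it works: with `from __future__ import annotations`, annotations are not evaluated at runtime, so type-only imports can be skipped entirely (`show` is a hypothetical function):

```python
from __future__ import annotations

from typing import TYPE_CHECKING

if TYPE_CHECKING:
    # Imported for type checkers only; skipped at runtime, which
    # avoids import cost and potential import cycles.
    from pathlib import Path


def show(p: Path) -> str:
    # The annotation above is a plain string at runtime, so the
    # missing runtime import of Path is harmless.
    return str(p)
```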

@@ -1,4 +1,4 @@
"""Autocomplete system for @ mentions and / commands.
"""@ 멘션 및 / 커맨드용 자동완성(Autocomplete) 시스템입니다.
This is a custom implementation that handles trigger-based completion
for slash commands (/) and file mentions (@).

View File

@@ -1,4 +1,7 @@
"""Chat input widget for deepagents-cli with autocomplete and history support."""
"""자동완성/히스토리를 지원하는 deepagents-cli 채팅 입력 위젯입니다.
Chat input widget for deepagents-cli with autocomplete and history support.
"""
from __future__ import annotations

View File

@@ -1,4 +1,7 @@
"""Enhanced diff widget for displaying unified diffs."""
"""unified diff를 표시하기 위한 확장(diff) 위젯입니다.
Enhanced diff widget for displaying unified diffs.
"""
from __future__ import annotations
@@ -25,7 +28,7 @@ def _escape_markup(text: str) -> str:
return text.replace("[", r"\[").replace("]", r"\]")
def format_diff_textual(diff: str, max_lines: int | None = 100) -> str:
def format_diff_textual(diff: str, max_lines: int | None = 100) -> str: # noqa: PLR0912
"""Format a unified diff with line numbers and colors.
Args:

View File

@@ -1,4 +1,4 @@
"""Command history manager for input persistence."""
"""입력 히스토리를 파일로 지속(persist)하기 위한 커맨드 히스토리 관리자입니다."""
from __future__ import annotations
@@ -7,14 +7,14 @@ from pathlib import Path # noqa: TC003 - used at runtime in type hints
class HistoryManager:
"""Manages command history with file persistence.
"""파일 지속을 포함한 커맨드 히스토리를 관리합니다.
Uses append-only writes for concurrent safety. Multiple agents can
safely write to the same history file without corruption.
"""
def __init__(self, history_file: Path, max_entries: int = 100) -> None:
"""Initialize the history manager.
"""히스토리 관리자를 초기화합니다.
Args:
history_file: Path to the JSON-lines history file
@@ -28,7 +28,7 @@ class HistoryManager:
self._load_history()
def _load_history(self) -> None:
"""Load history from file."""
"""파일에서 히스토리를 로드합니다."""
if not self.history_file.exists():
return
@@ -49,7 +49,7 @@ class HistoryManager:
self._entries = []
def _append_to_file(self, text: str) -> None:
"""Append a single entry to history file (concurrent-safe)."""
"""히스토리 파일에 항목 하나를 append 합니다(concurrent-safe)."""
try:
self.history_file.parent.mkdir(parents=True, exist_ok=True)
with self.history_file.open("a", encoding="utf-8") as f:
@@ -58,7 +58,7 @@ class HistoryManager:
pass
def _compact_history(self) -> None:
"""Rewrite history file to remove old entries.
"""오래된 항목을 제거하기 위해 히스토리 파일을 재작성합니다.
Only called when entries exceed 2x max_entries to minimize rewrites.
"""
@@ -71,26 +71,26 @@ class HistoryManager:
pass
def add(self, text: str) -> None:
"""Add a command to history.
"""커맨드를 히스토리에 추가합니다.
Args:
text: The command text to add
"""
text = text.strip()
# Skip empty or slash commands
# 빈 문자열 또는 slash 커맨드는 스킵
if not text or text.startswith("/"):
return
# Skip duplicates of the last entry
# 직전 항목과 중복이면 스킵
if self._entries and self._entries[-1] == text:
return
self._entries.append(text)
# Append to file (fast, concurrent-safe)
# 파일에 append(빠르고 concurrent-safe)
self._append_to_file(text)
# Compact only when we have 2x max entries (rare operation)
# 엔트리가 2배를 초과할 때만 compact(드문 작업)
if len(self._entries) > self.max_entries * 2:
self._entries = self._entries[-self.max_entries :]
self._compact_history()
@@ -98,7 +98,7 @@ class HistoryManager:
self.reset_navigation()
def get_previous(self, current_input: str, prefix: str = "") -> str | None:
"""Get the previous history entry.
"""이전 히스토리 항목을 가져옵니다.
Args:
current_input: Current input text (saved on first navigation)
@@ -110,12 +110,12 @@ class HistoryManager:
if not self._entries:
return None
# Save current input on first navigation
# 첫 네비게이션 시 현재 입력을 저장
if self._current_index == -1:
self._temp_input = current_input
self._current_index = len(self._entries)
# Search backwards for matching entry
# 뒤로 탐색하며 prefix에 매칭되는 항목을 찾음
for i in range(self._current_index - 1, -1, -1):
if self._entries[i].startswith(prefix):
self._current_index = i
@@ -124,7 +124,7 @@ class HistoryManager:
return None
def get_next(self, prefix: str = "") -> str | None:
"""Get the next history entry.
"""다음 히스토리 항목을 가져옵니다.
Args:
prefix: Optional prefix to filter entries
@@ -135,18 +135,18 @@ class HistoryManager:
if self._current_index == -1:
return None
# Search forwards for matching entry
# 앞으로 탐색하며 prefix에 매칭되는 항목을 찾음
for i in range(self._current_index + 1, len(self._entries)):
if self._entries[i].startswith(prefix):
self._current_index = i
return self._entries[i]
# Return to original input at the end
# 끝까지 가면 원래 입력으로 복귀
result = self._temp_input
self.reset_navigation()
return result
def reset_navigation(self) -> None:
"""Reset navigation state."""
"""네비게이션 상태를 초기화합니다."""
self._current_index = -1
self._temp_input = ""

View File
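The `HistoryManager` hunks above describe append-only writes for concurrent safety plus a rare compaction pass. A condensed sketch of that strategy (the `TinyHistory` class is illustrative, not the real implementation):

```python
from pathlib import Path


class TinyHistory:
    """Illustrative, condensed version of the append-then-compact idea."""

    def __init__(self, path: Path, max_entries: int = 3) -> None:
        self.path = path
        self.max_entries = max_entries
        self.entries: list[str] = []

    def add(self, text: str) -> None:
        text = text.strip()
        if not text or text.startswith("/"):
            return  # skip empty input and slash commands
        if self.entries and self.entries[-1] == text:
            return  # skip duplicates of the last entry
        self.entries.append(text)
        with self.path.open("a", encoding="utf-8") as f:
            f.write(text + "\n")  # fast, append-only, concurrent-safe
        # Compact rarely: only once entries exceed 2x the cap.
        if len(self.entries) > self.max_entries * 2:
            self.entries = self.entries[-self.max_entries :]
            self.path.write_text("\n".join(self.entries) + "\n", encoding="utf-8")
```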

@@ -1,4 +1,7 @@
"""Loading widget with animated spinner for agent activity."""
"""에이전트 동작 중 표시하는 로딩(스피너) 위젯입니다.
Loading widget with animated spinner for agent activity.
"""
from __future__ import annotations

View File

@@ -1,4 +1,7 @@
"""Message widgets for deepagents-cli."""
"""deepagents-cli에서 메시지(대화/툴 호출 등)를 표시하는 위젯 모음입니다.
Message widgets for deepagents-cli.
"""
from __future__ import annotations
@@ -7,13 +10,13 @@ from typing import TYPE_CHECKING, Any
from textual.containers import Vertical
from textual.css.query import NoMatches
from textual.widgets import Markdown, Static
from textual.widgets._markdown import MarkdownStream
from deepagents_cli.ui import format_tool_display
from deepagents_cli.widgets.diff import format_diff_textual
if TYPE_CHECKING:
from textual.app import ComposeResult
from textual.widgets._markdown import MarkdownStream
# Maximum number of tool arguments to display inline
_MAX_INLINE_ARGS = 3

View File

@@ -1,4 +1,4 @@
"""Status bar widget for deepagents-cli."""
"""deepagents-cli용 상태 표시줄(Status bar) 위젯입니다."""
from __future__ import annotations
@@ -13,9 +13,11 @@ from textual.widgets import Static
if TYPE_CHECKING:
from textual.app import ComposeResult
TOKENS_K_THRESHOLD = 1000
class StatusBar(Horizontal):
"""Status bar showing mode, auto-approve status, and working directory."""
"""모드/자동 승인/작업 디렉토리 등을 표시하는 상태 표시줄입니다."""
DEFAULT_CSS = """
StatusBar {
@@ -90,18 +92,18 @@ class StatusBar(Horizontal):
tokens: reactive[int] = reactive(0, init=False)
def __init__(self, cwd: str | Path | None = None, **kwargs: Any) -> None:
"""Initialize the status bar.
"""상태 표시줄을 초기화합니다.
Args:
cwd: Current working directory to display
**kwargs: Additional arguments passed to parent
"""
super().__init__(**kwargs)
# Store initial cwd - will be used in compose()
# 초기 cwd를 저장(compose()에서 사용)
self._initial_cwd = str(cwd) if cwd else str(Path.cwd())
def compose(self) -> ComposeResult:
"""Compose the status bar layout."""
"""상태 표시줄 레이아웃을 구성합니다."""
yield Static("", classes="status-mode normal", id="mode-indicator")
yield Static(
"manual | shift+tab to cycle",
@@ -113,11 +115,11 @@ class StatusBar(Horizontal):
# CWD shown in welcome banner, not pinned in status bar
def on_mount(self) -> None:
"""Set reactive values after mount to trigger watchers safely."""
"""마운트(on_mount) 이후 reactive 값을 설정해 watcher가 안전하게 동작하도록 합니다."""
self.cwd = self._initial_cwd
def watch_mode(self, mode: str) -> None:
"""Update mode indicator when mode changes."""
"""모드(mode) 변경 시 표시를 갱신합니다."""
try:
indicator = self.query_one("#mode-indicator", Static)
except NoMatches:
@@ -135,7 +137,7 @@ class StatusBar(Horizontal):
indicator.add_class("normal")
def watch_auto_approve(self, new_value: bool) -> None: # noqa: FBT001
"""Update auto-approve indicator when state changes."""
"""auto-approve 상태 변경 시 표시를 갱신합니다."""
try:
indicator = self.query_one("#auto-approve-indicator", Static)
except NoMatches:
@@ -150,7 +152,7 @@ class StatusBar(Horizontal):
indicator.add_class("off")
def watch_cwd(self, new_value: str) -> None:
"""Update cwd display when it changes."""
"""작업 디렉토리(cwd) 변경 시 표시를 갱신합니다."""
try:
display = self.query_one("#cwd-display", Static)
except NoMatches:
@@ -158,7 +160,7 @@ class StatusBar(Horizontal):
display.update(self._format_cwd(new_value))
def watch_status_message(self, new_value: str) -> None:
"""Update status message display."""
"""상태 메시지(status message) 변경 시 표시를 갱신합니다."""
try:
msg_widget = self.query_one("#status-message", Static)
except NoMatches:
@@ -173,10 +175,10 @@ class StatusBar(Horizontal):
msg_widget.update("")
def _format_cwd(self, cwd_path: str = "") -> str:
"""Format the current working directory for display."""
"""표시용으로 현재 작업 디렉토리를 포맷팅합니다."""
path = Path(cwd_path or self.cwd or self._initial_cwd)
try:
# Try to use ~ for home directory
# 홈 디렉토리는 ~로 표시 시도
home = Path.home()
if path.is_relative_to(home):
return "~/" + str(path.relative_to(home))
@@ -185,7 +187,7 @@ class StatusBar(Horizontal):
return str(path)
def set_mode(self, mode: str) -> None:
"""Set the current input mode.
"""현재 입력 모드를 설정합니다.
Args:
mode: One of "normal", "bash", or "command"
@@ -193,7 +195,7 @@ class StatusBar(Horizontal):
self.mode = mode
def set_auto_approve(self, *, enabled: bool) -> None:
"""Set the auto-approve state.
"""auto-approve 상태를 설정합니다.
Args:
enabled: Whether auto-approve is enabled
@@ -201,7 +203,7 @@ class StatusBar(Horizontal):
self.auto_approve = enabled
def set_status_message(self, message: str) -> None:
"""Set the status message.
"""상태 메시지를 설정합니다.
Args:
message: Status message to display (empty string to clear)
@@ -209,23 +211,23 @@ class StatusBar(Horizontal):
self.status_message = message
def watch_tokens(self, new_value: int) -> None:
"""Update token display when count changes."""
"""토큰 수 변경 시 표시를 갱신합니다."""
try:
display = self.query_one("#tokens-display", Static)
except NoMatches:
return
if new_value > 0:
# Format with K suffix for thousands
if new_value >= 1000:
display.update(f"{new_value / 1000:.1f}K tokens")
# 천 단위는 K suffix로 표시
if new_value >= TOKENS_K_THRESHOLD:
display.update(f"{new_value / TOKENS_K_THRESHOLD:.1f}K tokens")
else:
display.update(f"{new_value} tokens")
else:
display.update("")
def set_tokens(self, count: int) -> None:
"""Set the token count.
"""토큰 수를 설정합니다.
Args:
count: Current context token count

View File
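The status-bar hunk replaces the literal `1000` with `TOKENS_K_THRESHOLD`. The display logic, extracted into a hypothetical standalone `format_tokens` function for illustration:

```python
TOKENS_K_THRESHOLD = 1000  # same named constant the diff introduces


def format_tokens(count: int) -> str:
    """Mirror the status-bar rule: K suffix at or past the threshold."""
    if count <= 0:
        return ""  # the widget clears the display for non-positive counts
    if count >= TOKENS_K_THRESHOLD:
        return f"{count / TOKENS_K_THRESHOLD:.1f}K tokens"
    return f"{count} tokens"
```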

@@ -1,4 +1,4 @@
"""Tool renderers for approval widgets - registry pattern."""
"""승인(approval) 위젯용 tool renderer들(레지스트리 패턴)입니다."""
from __future__ import annotations
@@ -15,14 +15,16 @@ from deepagents_cli.widgets.tool_widgets import (
if TYPE_CHECKING:
from deepagents_cli.widgets.tool_widgets import ToolApprovalWidget
DIFF_HEADER_LINES = 2
class ToolRenderer:
"""Base renderer for tool approval widgets."""
"""tool 승인 위젯 렌더러의 베이스 클래스입니다."""
def get_approval_widget(
self, tool_args: dict[str, Any]
) -> tuple[type[ToolApprovalWidget], dict[str, Any]]:
"""Get the approval widget class and data for this tool.
"""이 tool에 대한 승인 위젯 클래스와 데이터를 반환합니다.
Args:
tool_args: The tool arguments from action_request
@@ -34,16 +36,17 @@ class ToolRenderer:
class WriteFileRenderer(ToolRenderer):
"""Renderer for write_file tool - shows full file content."""
"""`write_file` tool 렌더러(전체 파일 내용을 표시)."""
def get_approval_widget(
self, tool_args: dict[str, Any]
) -> tuple[type[ToolApprovalWidget], dict[str, Any]]:
# Extract file extension for syntax highlighting
"""`write_file` 요청을 표시할 승인 위젯과 데이터를 구성합니다."""
# 문법 하이라이팅을 위해 확장자를 추출
file_path = tool_args.get("file_path", "")
content = tool_args.get("content", "")
# Get file extension
# 파일 확장자
file_extension = "text"
if "." in file_path:
file_extension = file_path.rsplit(".", 1)[-1]
@@ -57,16 +60,17 @@ class WriteFileRenderer(ToolRenderer):
class EditFileRenderer(ToolRenderer):
"""Renderer for edit_file tool - shows unified diff."""
"""`edit_file` tool 렌더러(unified diff 표시)."""
def get_approval_widget(
self, tool_args: dict[str, Any]
) -> tuple[type[ToolApprovalWidget], dict[str, Any]]:
"""`edit_file` 요청을 unified diff 형태로 표시할 승인 위젯/데이터를 구성합니다."""
file_path = tool_args.get("file_path", "")
old_string = tool_args.get("old_string", "")
new_string = tool_args.get("new_string", "")
# Generate unified diff
# unified diff 생성
diff_lines = self._generate_diff(old_string, new_string)
data = {
@@ -78,14 +82,14 @@ class EditFileRenderer(ToolRenderer):
return EditFileApprovalWidget, data
def _generate_diff(self, old_string: str, new_string: str) -> list[str]:
"""Generate unified diff lines from old and new strings."""
"""old/new 문자열로부터 unified diff 라인을 생성합니다."""
if not old_string and not new_string:
return []
old_lines = old_string.split("\n") if old_string else []
new_lines = new_string.split("\n") if new_string else []
# Generate unified diff
# unified diff 생성
diff = difflib.unified_diff(
old_lines,
new_lines,
@@ -95,17 +99,18 @@ class EditFileRenderer(ToolRenderer):
n=3, # Context lines
)
# Skip the first two header lines (--- and +++)
# 헤더 라인(---, +++)은 제외
diff_list = list(diff)
return diff_list[2:] if len(diff_list) > 2 else diff_list
return diff_list[DIFF_HEADER_LINES:] if len(diff_list) > DIFF_HEADER_LINES else diff_list
class BashRenderer(ToolRenderer):
"""Renderer for bash/shell tool - shows command."""
"""`bash`/`shell` tool 렌더러(커맨드 표시)."""
def get_approval_widget(
self, tool_args: dict[str, Any]
) -> tuple[type[ToolApprovalWidget], dict[str, Any]]:
"""`bash`/`shell` 요청을 표시할 승인 위젯/데이터를 구성합니다."""
data = {
"command": tool_args.get("command", ""),
"description": tool_args.get("description", ""),
@@ -113,7 +118,7 @@ class BashRenderer(ToolRenderer):
return BashApprovalWidget, data
# Registry mapping tool names to renderers
# tool 이름 → renderer 매핑 레지스트리
_RENDERER_REGISTRY: dict[str, type[ToolRenderer]] = {
"write_file": WriteFileRenderer,
"edit_file": EditFileRenderer,
@@ -123,7 +128,7 @@ _RENDERER_REGISTRY: dict[str, type[ToolRenderer]] = {
def get_renderer(tool_name: str) -> ToolRenderer:
"""Get the renderer for a tool by name.
"""도구 이름에 맞는 renderer를 반환합니다.
Args:
tool_name: The name of the tool

View File
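The `_generate_diff` change above names the two `---`/`+++` header lines via `DIFF_HEADER_LINES`. A self-contained sketch of the same `difflib.unified_diff` usage (the `body_diff` function is illustrative):

```python
import difflib

DIFF_HEADER_LINES = 2  # the "---"/"+++" header pair emitted by unified_diff


def body_diff(old: str, new: str) -> list[str]:
    """Return unified-diff lines without the two file-header lines."""
    lines = list(
        difflib.unified_diff(
            old.split("\n"),
            new.split("\n"),
            fromfile="before",
            tofile="after",
            lineterm="",
            n=3,  # context lines, as in the diff above
        )
    )
    return lines[DIFF_HEADER_LINES:] if len(lines) > DIFF_HEADER_LINES else lines
```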

@@ -1,4 +1,7 @@
"""Tool-specific approval widgets for HITL display."""
"""HITL(승인) 화면에서 도구별(tool-specific) 표시를 담당하는 위젯들입니다.
Tool-specific approval widgets for HITL display.
"""
from __future__ import annotations

View File

@@ -1,4 +1,7 @@
"""Welcome banner widget for deepagents-cli."""
"""deepagents-cli 시작 시 보여주는 환영 배너 위젯입니다.
Welcome banner widget for deepagents-cli.
"""
from __future__ import annotations

View File

@@ -6,6 +6,10 @@ Searches the arXiv preprint repository for research papers.
import argparse
from rich.console import Console
console = Console()
def query_arxiv(query: str, max_papers: int = 10) -> str:
"""Query arXiv for papers based on the provided search query.
@@ -33,12 +37,16 @@ def query_arxiv(query: str, max_papers: int = 10) -> str:
results = "\n\n".join(
[f"Title: {paper.title}\nSummary: {paper.summary}" for paper in client.results(search)]
)
return results if results else "No papers found on arXiv."
except Exception as e:
except Exception as e: # noqa: BLE001
return f"Error querying arXiv: {e}"
else:
if results:
return results
return "No papers found on arXiv."
def main() -> None:
"""Run the arXiv search CLI."""
parser = argparse.ArgumentParser(description="Search arXiv for research papers")
parser.add_argument("query", type=str, help="Search query string")
parser.add_argument(
@@ -50,7 +58,7 @@ def main() -> None:
args = parser.parse_args()
query_arxiv(args.query, max_papers=args.max_papers)
console.print(query_arxiv(args.query, max_papers=args.max_papers))
if __name__ == "__main__":

View File
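The `query_arxiv` hunk moves the success path out of `try` into an `else` block, so the `except` guards only the call that can actually fail. The same shape on a trivial, hypothetical function:

```python
def safe_divide(a: float, b: float) -> str:
    """Illustrative try/except/else shape matching the query_arxiv refactor."""
    try:
        result = a / b  # only this line can raise
    except ZeroDivisionError as e:
        return f"Error: {e}"
    else:
        # Success path lives in `else`, outside the guarded region.
        if result:
            return str(result)
        return "zero"
```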

@@ -1,6 +1,5 @@
#!/usr/bin/env python3
"""
Skill Initializer - Creates a new skill from template
"""Skill Initializer - Creates a new skill from template.
Usage:
init_skill.py <skill-name> --path <path>
@@ -14,9 +13,16 @@ For deepagents CLI:
init_skill.py my-skill --path ~/.deepagents/agent/skills
"""
# ruff: noqa: E501
import sys
from pathlib import Path
from rich.console import Console
console = Console()
MIN_ARGS = 4
SKILL_TEMPLATE = """---
name: {skill_name}
@@ -189,14 +195,13 @@ Note: This is a text placeholder. Actual assets can be any file type.
"""
def title_case_skill_name(skill_name):
def title_case_skill_name(skill_name: str) -> str:
"""Convert hyphenated skill name to Title Case for display."""
return ' '.join(word.capitalize() for word in skill_name.split('-'))
return " ".join(word.capitalize() for word in skill_name.split("-"))
def init_skill(skill_name, path):
"""
Initialize a new skill directory with template SKILL.md.
def init_skill(skill_name: str, path: str | Path) -> Path | None:
"""Initialize a new skill directory with template SKILL.md.
Args:
skill_name: Name of the skill
@@ -210,15 +215,15 @@ def init_skill(skill_name, path):
# Check if directory already exists
if skill_dir.exists():
print(f"❌ Error: Skill directory already exists: {skill_dir}")
console.print(f"❌ Error: Skill directory already exists: {skill_dir}")
return None
# Create skill directory
try:
skill_dir.mkdir(parents=True, exist_ok=False)
print(f"✅ Created skill directory: {skill_dir}")
except Exception as e:
print(f"❌ Error creating directory: {e}")
console.print(f"✅ Created skill directory: {skill_dir}")
except OSError as e:
console.print(f"❌ Error creating directory: {e}")
return None
# Create SKILL.md from template
@@ -228,73 +233,76 @@ def init_skill(skill_name, path):
skill_title=skill_title
)
skill_md_path = skill_dir / 'SKILL.md'
skill_md_path = skill_dir / "SKILL.md"
try:
skill_md_path.write_text(skill_content)
print("✅ Created SKILL.md")
except Exception as e:
print(f"❌ Error creating SKILL.md: {e}")
console.print("✅ Created SKILL.md")
except OSError as e:
console.print(f"❌ Error creating SKILL.md: {e}")
return None
# Create resource directories with example files
try:
# Create scripts/ directory with example script
scripts_dir = skill_dir / 'scripts'
scripts_dir = skill_dir / "scripts"
scripts_dir.mkdir(exist_ok=True)
example_script = scripts_dir / 'example.py'
example_script = scripts_dir / "example.py"
example_script.write_text(EXAMPLE_SCRIPT.format(skill_name=skill_name))
example_script.chmod(0o755)
print("✅ Created scripts/example.py")
console.print("✅ Created scripts/example.py")
# Create references/ directory with example reference doc
references_dir = skill_dir / 'references'
references_dir = skill_dir / "references"
references_dir.mkdir(exist_ok=True)
example_reference = references_dir / 'api_reference.md'
example_reference = references_dir / "api_reference.md"
example_reference.write_text(EXAMPLE_REFERENCE.format(skill_title=skill_title))
print("✅ Created references/api_reference.md")
console.print("✅ Created references/api_reference.md")
# Create assets/ directory with example asset placeholder
assets_dir = skill_dir / 'assets'
assets_dir = skill_dir / "assets"
assets_dir.mkdir(exist_ok=True)
example_asset = assets_dir / 'example_asset.txt'
example_asset = assets_dir / "example_asset.txt"
example_asset.write_text(EXAMPLE_ASSET)
print("✅ Created assets/example_asset.txt")
except Exception as e:
print(f"❌ Error creating resource directories: {e}")
console.print("✅ Created assets/example_asset.txt")
except OSError as e:
console.print(f"❌ Error creating resource directories: {e}")
return None
# Print next steps
print(f"\n✅ Skill '{skill_name}' initialized successfully at {skill_dir}")
print("\nNext steps:")
print("1. Edit SKILL.md to complete the TODO items and update the description")
print("2. Customize or delete the example files in scripts/, references/, and assets/")
print("3. Run the validator when ready to check the skill structure")
console.print(f"\n✅ Skill '{skill_name}' initialized successfully at {skill_dir}")
console.print("\nNext steps:")
console.print("1. Edit SKILL.md to complete the TODO items and update the description")
console.print(
"2. Customize or delete the example files in scripts/, references/, and assets/"
)
console.print("3. Run the validator when ready to check the skill structure")
return skill_dir
def main():
if len(sys.argv) < 4 or sys.argv[2] != '--path':
print("Usage: init_skill.py <skill-name> --path <path>")
print("\nSkill name requirements:")
print(" - Hyphen-case identifier (e.g., 'data-analyzer')")
print(" - Lowercase letters, digits, and hyphens only")
print(" - Max 40 characters")
print(" - Must match directory name exactly")
print("\nExamples:")
print(" init_skill.py my-new-skill --path skills/public")
print(" init_skill.py my-api-helper --path skills/private")
print(" init_skill.py custom-skill --path /custom/location")
print("\nFor deepagents CLI:")
print(" init_skill.py my-skill --path ~/.deepagents/agent/skills")
def main() -> None:
"""Run the skill initializer CLI."""
if len(sys.argv) < MIN_ARGS or sys.argv[2] != "--path":
console.print("Usage: init_skill.py <skill-name> --path <path>")
console.print("\nSkill name requirements:")
console.print(" - Hyphen-case identifier (e.g., 'data-analyzer')")
console.print(" - Lowercase letters, digits, and hyphens only")
console.print(" - Max 40 characters")
console.print(" - Must match directory name exactly")
console.print("\nExamples:")
console.print(" init_skill.py my-new-skill --path skills/public")
console.print(" init_skill.py my-api-helper --path skills/private")
console.print(" init_skill.py custom-skill --path /custom/location")
console.print("\nFor deepagents CLI:")
console.print(" init_skill.py my-skill --path ~/.deepagents/agent/skills")
sys.exit(1)
skill_name = sys.argv[1]
path = sys.argv[3]
print(f"🚀 Initializing skill: {skill_name}")
print(f" Location: {path}")
print()
console.print(f"🚀 Initializing skill: {skill_name}")
console.print(f" Location: {path}")
console.print()
result = init_skill(skill_name, path)

View File

@@ -1,6 +1,5 @@
#!/usr/bin/env python3
"""
Quick validation script for skills - minimal version
"""Quick validation script for skills - minimal version.
For deepagents CLI, skills are located at:
~/.deepagents/<agent>/skills/<skill-name>/
@@ -9,28 +8,36 @@ Example:
python quick_validate.py ~/.deepagents/agent/skills/my-skill
"""
import sys
import os
import re
import yaml
import sys
from pathlib import Path
def validate_skill(skill_path):
"""Basic validation of a skill"""
import yaml
from rich.console import Console
console = Console()
MAX_NAME_LENGTH = 64
MAX_DESCRIPTION_LENGTH = 1024
EXPECTED_ARG_COUNT = 2
def validate_skill(skill_path: str | Path) -> tuple[bool, str]: # noqa: PLR0911, PLR0912
"""Basic validation of a skill."""
skill_path = Path(skill_path)
# Check SKILL.md exists
skill_md = skill_path / 'SKILL.md'
skill_md = skill_path / "SKILL.md"
if not skill_md.exists():
return False, "SKILL.md not found"
# Read and validate frontmatter
content = skill_md.read_text()
if not content.startswith('---'):
if not content.startswith("---"):
return False, "No YAML frontmatter found"
# Extract frontmatter
match = re.match(r'^---\n(.*?)\n---', content, re.DOTALL)
match = re.match(r"^---\n(.*?)\n---", content, re.DOTALL)
if not match:
return False, "Invalid frontmatter format"
@@ -45,57 +52,67 @@ def validate_skill(skill_path):
return False, f"Invalid YAML in frontmatter: {e}"
# Define allowed properties
ALLOWED_PROPERTIES = {'name', 'description', 'license', 'allowed-tools', 'metadata'}
allowed_properties = {"name", "description", "license", "allowed-tools", "metadata"}
# Check for unexpected properties (excluding nested keys under metadata)
unexpected_keys = set(frontmatter.keys()) - ALLOWED_PROPERTIES
unexpected_keys = set(frontmatter.keys()) - allowed_properties
if unexpected_keys:
return False, (
f"Unexpected key(s) in SKILL.md frontmatter: {', '.join(sorted(unexpected_keys))}. "
f"Allowed properties are: {', '.join(sorted(ALLOWED_PROPERTIES))}"
f"Allowed properties are: {', '.join(sorted(allowed_properties))}"
)
# Check required fields
if 'name' not in frontmatter:
if "name" not in frontmatter:
return False, "Missing 'name' in frontmatter"
if 'description' not in frontmatter:
if "description" not in frontmatter:
return False, "Missing 'description' in frontmatter"
# Extract name for validation
name = frontmatter.get('name', '')
name = frontmatter.get("name", "")
if not isinstance(name, str):
return False, f"Name must be a string, got {type(name).__name__}"
name = name.strip()
if name:
# Check naming convention (hyphen-case: lowercase with hyphens)
if not re.match(r'^[a-z0-9-]+$', name):
return False, f"Name '{name}' should be hyphen-case (lowercase letters, digits, and hyphens only)"
if name.startswith('-') or name.endswith('-') or '--' in name:
return False, f"Name '{name}' cannot start/end with hyphen or contain consecutive hyphens"
if not re.match(r"^[a-z0-9-]+$", name):
return False, (
f"Name '{name}' should be hyphen-case (lowercase letters, digits, and hyphens only)"
)
if name.startswith("-") or name.endswith("-") or "--" in name:
return False, (
f"Name '{name}' cannot start/end with hyphen or contain consecutive hyphens"
)
# Check name length (max 64 characters per spec)
if len(name) > 64:
return False, f"Name is too long ({len(name)} characters). Maximum is 64 characters."
if len(name) > MAX_NAME_LENGTH:
return False, (
f"Name is too long ({len(name)} characters). "
f"Maximum is {MAX_NAME_LENGTH} characters."
)
# Extract and validate description
description = frontmatter.get('description', '')
description = frontmatter.get("description", "")
if not isinstance(description, str):
return False, f"Description must be a string, got {type(description).__name__}"
description = description.strip()
if description:
# Check for angle brackets
if '<' in description or '>' in description:
if "<" in description or ">" in description:
return False, "Description cannot contain angle brackets (< or >)"
# Check description length (max 1024 characters per spec)
if len(description) > 1024:
return False, f"Description is too long ({len(description)} characters). Maximum is 1024 characters."
if len(description) > MAX_DESCRIPTION_LENGTH:
return False, (
"Description is too long "
f"({len(description)} characters). Maximum is {MAX_DESCRIPTION_LENGTH} characters."
)
return True, "Skill is valid!"
if __name__ == "__main__":
if len(sys.argv) != 2:
print("Usage: python quick_validate.py <skill_directory>")
if len(sys.argv) != EXPECTED_ARG_COUNT:
console.print("Usage: python quick_validate.py <skill_directory>")
sys.exit(1)
valid, message = validate_skill(sys.argv[1])
print(message)
console.print(message)
sys.exit(0 if valid else 1)
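The name rules enforced above can be exercised in isolation. The sketch below mirrors the hyphen-case regex and the `MAX_NAME_LENGTH` constant from `quick_validate.py`; the return messages are illustrative, not the script's exact strings.

```python
import re

MAX_NAME_LENGTH = 64  # same limit the script enforces


def check_name(name: str) -> tuple[bool, str]:
    """Mirror of the hyphen-case rules; messages are illustrative."""
    if not re.match(r"^[a-z0-9-]+$", name):
        return False, "not hyphen-case"
    if name.startswith("-") or name.endswith("-") or "--" in name:
        return False, "bad hyphen placement"
    if len(name) > MAX_NAME_LENGTH:
        return False, "too long"
    return True, "ok"


valid, message = check_name("my-skill")
```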

View File

@@ -16,6 +16,8 @@ from deepagents.backends.sandbox import BaseSandbox
from deepagents_cli.integrations.sandbox_factory import create_sandbox
_SANDBOX_TMP_DIR = "/tmp" # noqa: S108
class BaseSandboxIntegrationTest(ABC):
"""Base class for sandbox integration tests.
@@ -38,7 +40,7 @@ class BaseSandboxIntegrationTest(ABC):
def test_upload_single_file(self, sandbox: SandboxBackendProtocol) -> None:
"""Test uploading a single file."""
test_path = "/tmp/test_upload_single.txt"
test_path = f"{_SANDBOX_TMP_DIR}/test_upload_single.txt"
test_content = b"Hello, Sandbox!"
upload_responses = sandbox.upload_files([(test_path, test_content)])
@@ -52,7 +54,7 @@ class BaseSandboxIntegrationTest(ABC):
def test_download_single_file(self, sandbox: SandboxBackendProtocol) -> None:
"""Test downloading a single file."""
test_path = "/tmp/test_download_single.txt"
test_path = f"{_SANDBOX_TMP_DIR}/test_download_single.txt"
test_content = b"Download test content"
# Create file first
sandbox.upload_files([(test_path, test_content)])
@@ -67,7 +69,7 @@ class BaseSandboxIntegrationTest(ABC):
def test_upload_download_roundtrip(self, sandbox: SandboxBackendProtocol) -> None:
"""Test upload followed by download for data integrity."""
test_path = "/tmp/test_roundtrip.txt"
test_path = f"{_SANDBOX_TMP_DIR}/test_roundtrip.txt"
test_content = b"Roundtrip test: special chars \n\t\r\x00"
# Upload
@@ -82,9 +84,9 @@ class BaseSandboxIntegrationTest(ABC):
def test_upload_multiple_files(self, sandbox: SandboxBackendProtocol) -> None:
"""Test uploading multiple files in a single batch."""
files = [
("/tmp/test_multi_1.txt", b"Content 1"),
("/tmp/test_multi_2.txt", b"Content 2"),
("/tmp/test_multi_3.txt", b"Content 3"),
(f"{_SANDBOX_TMP_DIR}/test_multi_1.txt", b"Content 1"),
(f"{_SANDBOX_TMP_DIR}/test_multi_2.txt", b"Content 2"),
(f"{_SANDBOX_TMP_DIR}/test_multi_3.txt", b"Content 3"),
]
upload_responses = sandbox.upload_files(files)
@@ -97,9 +99,9 @@ class BaseSandboxIntegrationTest(ABC):
def test_download_multiple_files(self, sandbox: SandboxBackendProtocol) -> None:
"""Test downloading multiple files in a single batch."""
files = [
("/tmp/test_batch_1.txt", b"Batch 1"),
("/tmp/test_batch_2.txt", b"Batch 2"),
("/tmp/test_batch_3.txt", b"Batch 3"),
(f"{_SANDBOX_TMP_DIR}/test_batch_1.txt", b"Batch 1"),
(f"{_SANDBOX_TMP_DIR}/test_batch_2.txt", b"Batch 2"),
(f"{_SANDBOX_TMP_DIR}/test_batch_3.txt", b"Batch 3"),
]
# Upload files first
@@ -118,7 +120,7 @@ class BaseSandboxIntegrationTest(ABC):
@pytest.mark.skip(reason="Error handling not implemented yet.")
def test_download_nonexistent_file(self, sandbox: SandboxBackendProtocol) -> None:
"""Test that downloading a non-existent file returns an error."""
nonexistent_path = "/tmp/does_not_exist.txt"
nonexistent_path = f"{_SANDBOX_TMP_DIR}/does_not_exist.txt"
download_responses = sandbox.download_files([nonexistent_path])
@@ -129,7 +131,7 @@ class BaseSandboxIntegrationTest(ABC):
def test_upload_binary_content(self, sandbox: SandboxBackendProtocol) -> None:
"""Test uploading binary content (not valid UTF-8)."""
test_path = "/tmp/binary_file.bin"
test_path = f"{_SANDBOX_TMP_DIR}/binary_file.bin"
# Create binary content with all byte values
test_content = bytes(range(256))
@@ -145,8 +147,8 @@ class BaseSandboxIntegrationTest(ABC):
def test_partial_success_upload(self, sandbox: SandboxBackendProtocol) -> None:
"""Test that batch upload supports partial success."""
files = [
("/tmp/valid_upload.txt", b"Valid content"),
("/tmp/another_valid.txt", b"Another valid"),
(f"{_SANDBOX_TMP_DIR}/valid_upload.txt", b"Valid content"),
(f"{_SANDBOX_TMP_DIR}/another_valid.txt", b"Another valid"),
]
upload_responses = sandbox.upload_files(files)
@@ -161,12 +163,12 @@ class BaseSandboxIntegrationTest(ABC):
def test_partial_success_download(self, sandbox: SandboxBackendProtocol) -> None:
"""Test that batch download supports partial success."""
# Create one valid file
valid_path = "/tmp/valid_file.txt"
valid_path = f"{_SANDBOX_TMP_DIR}/valid_file.txt"
valid_content = b"Valid"
sandbox.upload_files([(valid_path, valid_content)])
# Request both valid and invalid files
paths = [valid_path, "/tmp/does_not_exist.txt"]
paths = [valid_path, f"{_SANDBOX_TMP_DIR}/does_not_exist.txt"]
download_responses = sandbox.download_files(paths)
assert len(download_responses) == 2
@@ -175,7 +177,7 @@ class BaseSandboxIntegrationTest(ABC):
assert download_responses[0].content == valid_content
assert download_responses[0].error is None
# Second should fail
assert download_responses[1].path == "/tmp/does_not_exist.txt"
assert download_responses[1].path == f"{_SANDBOX_TMP_DIR}/does_not_exist.txt"
assert download_responses[1].content is None
assert download_responses[1].error is not None
@@ -188,10 +190,10 @@ class BaseSandboxIntegrationTest(ABC):
Expected behavior: download_files should return FileDownloadResponse with
error='file_not_found' when the requested file doesn't exist.
"""
responses = sandbox.download_files(["/tmp/nonexistent_test_file.txt"])
responses = sandbox.download_files([f"{_SANDBOX_TMP_DIR}/nonexistent_test_file.txt"])
assert len(responses) == 1
assert responses[0].path == "/tmp/nonexistent_test_file.txt"
assert responses[0].path == f"{_SANDBOX_TMP_DIR}/nonexistent_test_file.txt"
assert responses[0].content is None
assert responses[0].error == "file_not_found"
@@ -205,12 +207,12 @@ class BaseSandboxIntegrationTest(ABC):
error='is_directory' when trying to download a directory as a file.
"""
# Create a directory
sandbox.execute("mkdir -p /tmp/test_directory")
sandbox.execute(f"mkdir -p {_SANDBOX_TMP_DIR}/test_directory")
responses = sandbox.download_files(["/tmp/test_directory"])
responses = sandbox.download_files([f"{_SANDBOX_TMP_DIR}/test_directory"])
assert len(responses) == 1
assert responses[0].path == "/tmp/test_directory"
assert responses[0].path == f"{_SANDBOX_TMP_DIR}/test_directory"
assert responses[0].content is None
assert responses[0].error == "is_directory"
@@ -229,13 +231,15 @@ class BaseSandboxIntegrationTest(ABC):
"""
# Try to upload to a path where the parent is a file, not a directory
# First create a file
sandbox.upload_files([("/tmp/parent_is_file.txt", b"I am a file")])
sandbox.upload_files([(f"{_SANDBOX_TMP_DIR}/parent_is_file.txt", b"I am a file")])
# Now try to upload as if parent_is_file.txt were a directory
responses = sandbox.upload_files([("/tmp/parent_is_file.txt/child.txt", b"child")])
responses = sandbox.upload_files(
[(f"{_SANDBOX_TMP_DIR}/parent_is_file.txt/child.txt", b"child")]
)
assert len(responses) == 1
assert responses[0].path == "/tmp/parent_is_file.txt/child.txt"
assert responses[0].path == f"{_SANDBOX_TMP_DIR}/parent_is_file.txt/child.txt"
# Could be parent_not_found or invalid_path depending on implementation
assert responses[0].error in ("parent_not_found", "invalid_path")
@@ -249,10 +253,10 @@ class BaseSandboxIntegrationTest(ABC):
error='invalid_path' for malformed paths (null bytes, invalid chars, etc).
"""
# Test with null byte (invalid in most filesystems)
responses = sandbox.upload_files([("/tmp/file\x00name.txt", b"content")])
responses = sandbox.upload_files([(f"{_SANDBOX_TMP_DIR}/file\x00name.txt", b"content")])
assert len(responses) == 1
assert responses[0].path == "/tmp/file\x00name.txt"
assert responses[0].path == f"{_SANDBOX_TMP_DIR}/file\x00name.txt"
assert responses[0].error == "invalid_path"
@pytest.mark.skip(
@@ -265,10 +269,10 @@ class BaseSandboxIntegrationTest(ABC):
error='invalid_path' for malformed paths (null bytes, invalid chars, etc).
"""
# Test with null byte (invalid in most filesystems)
responses = sandbox.download_files(["/tmp/file\x00name.txt"])
responses = sandbox.download_files([f"{_SANDBOX_TMP_DIR}/file\x00name.txt"])
assert len(responses) == 1
assert responses[0].path == "/tmp/file\x00name.txt"
assert responses[0].path == f"{_SANDBOX_TMP_DIR}/file\x00name.txt"
assert responses[0].content is None
assert responses[0].error == "invalid_path"
@@ -282,13 +286,13 @@ class BaseSandboxIntegrationTest(ABC):
an appropriate error. The exact behavior depends on the sandbox provider.
"""
# Create a directory
sandbox.execute("mkdir -p /tmp/test_dir_upload")
sandbox.execute(f"mkdir -p {_SANDBOX_TMP_DIR}/test_dir_upload")
# Try to upload a file with the same name as the directory
responses = sandbox.upload_files([("/tmp/test_dir_upload", b"file content")])
responses = sandbox.upload_files([(f"{_SANDBOX_TMP_DIR}/test_dir_upload", b"file content")])
assert len(responses) == 1
assert responses[0].path == "/tmp/test_dir_upload"
assert responses[0].path == f"{_SANDBOX_TMP_DIR}/test_dir_upload"
# Behavior depends on implementation - just verify we get a response

View File

@@ -12,6 +12,8 @@ All tests run on a single sandbox instance (class-scoped fixture)
to avoid the overhead of spinning up multiple containers.
"""
# ruff: noqa: S108
from collections.abc import Iterator
import pytest
@@ -79,7 +81,11 @@ class TestSandboxOperations:
def test_write_special_characters(self, sandbox: SandboxBackendProtocol) -> None:
"""Test writing content with special characters and escape sequences."""
test_path = "/tmp/test_sandbox_ops/special.txt"
content = "Special chars: $VAR, `command`, $(subshell), 'quotes', \"quotes\"\nTab\there\nBackslash: \\"
content = (
"Special chars: $VAR, `command`, $(subshell), 'quotes', \"quotes\"\n"
"Tab\there\n"
"Backslash: \\"
)
result = sandbox.write(test_path, content)
@@ -238,7 +244,11 @@ class TestSandboxOperations:
def test_read_unicode_content(self, sandbox: SandboxBackendProtocol) -> None:
"""Test reading a file with unicode content."""
test_path = "/tmp/test_sandbox_ops/unicode_read.txt"
content = "Hello 👋 世界\nПривет мир\nمرحبا العالم"
content = (
"Hello 👋 世界\n"
"\u041f\u0440\u0438\u0432\u0435\u0442 \u043c\u0438\u0440\n"
"\u0645\u0631\u062d\u0628\u0627 \u0627\u0644\u0639\u0627\u0644\u0645"
)
sandbox.write(test_path, content)
result = sandbox.read(test_path)
@@ -734,7 +744,10 @@ class TestSandboxOperations:
"""Test grep with unicode pattern and content."""
base_dir = "/tmp/test_sandbox_ops/grep_unicode"
sandbox.execute(f"mkdir -p {base_dir}")
sandbox.write(f"{base_dir}/unicode.txt", "Hello 世界\nПривет мир\n测试 pattern")
sandbox.write(
f"{base_dir}/unicode.txt",
"Hello 世界\n\u041f\u0440\u0438\u0432\u0435\u0442 \u043c\u0438\u0440\n测试 pattern",
)
result = sandbox.grep_raw("世界", path=base_dir)

View File

@@ -70,7 +70,7 @@ class TestValidateSkillName:
"/etc/passwd",
"/home/user/.ssh",
"\\Windows\\System32",
"/tmp/exploit",
"/tmp/exploit", # noqa: S108
]
for name in malicious_names:
is_valid, error = _validate_name(name)

View File

@@ -1,5 +1,6 @@
"""Tests for autocomplete fuzzy search functionality."""
from pathlib import Path
from unittest.mock import MagicMock
import pytest
@@ -16,6 +17,8 @@ from deepagents_cli.widgets.autocomplete import (
_path_depth,
)
SampleFiles = list[str]
class TestFuzzyScore:
"""Tests for the _fuzzy_score function."""
@@ -65,7 +68,7 @@ class TestFuzzySearch:
"""Tests for the _fuzzy_search function."""
@pytest.fixture
def sample_files(self):
def sample_files(self) -> SampleFiles:
"""Sample file list for testing."""
return [
"README.md",
@@ -80,39 +83,39 @@ class TestFuzzySearch:
"docs/api.md",
]
def test_empty_query_returns_root_files_first(self, sample_files):
def test_empty_query_returns_root_files_first(self, sample_files: SampleFiles) -> None:
"""Empty query returns files sorted by depth, then name."""
results = _fuzzy_search("", sample_files, limit=5)
# Root level files should come first
assert results[0] in ["README.md", "setup.py"]
assert all("/" not in r for r in results[:2]) # First items are root level
def test_exact_match_ranked_first(self, sample_files):
def test_exact_match_ranked_first(self, sample_files: SampleFiles) -> None:
"""Exact filename matches are ranked first."""
results = _fuzzy_search("main", sample_files, limit=5)
assert "src/main.py" in results[:2]
def test_filters_dotfiles_by_default(self, sample_files):
def test_filters_dotfiles_by_default(self, sample_files: SampleFiles) -> None:
"""Dotfiles are filtered out by default."""
results = _fuzzy_search("git", sample_files, limit=10)
assert not any(".git" in r for r in results)
def test_includes_dotfiles_when_query_starts_with_dot(self, sample_files):
def test_includes_dotfiles_when_query_starts_with_dot(self, sample_files: SampleFiles) -> None:
"""Dotfiles included when query starts with '.'."""
results = _fuzzy_search(".git", sample_files, limit=10, include_dotfiles=True)
assert any(".git" in r for r in results)
def test_respects_limit(self, sample_files):
def test_respects_limit(self, sample_files: SampleFiles) -> None:
"""Results respect the limit parameter."""
results = _fuzzy_search("", sample_files, limit=3)
assert len(results) <= 3
def test_filters_low_score_matches(self, sample_files):
def test_filters_low_score_matches(self, sample_files: SampleFiles) -> None:
"""Low score matches are filtered out."""
results = _fuzzy_search("xyznonexistent", sample_files, limit=10)
assert len(results) == 0
def test_utils_matches_multiple_files(self, sample_files):
def test_utils_matches_multiple_files(self, sample_files: SampleFiles) -> None:
"""Query matching multiple files returns all matches."""
results = _fuzzy_search("utils", sample_files, limit=10)
assert len(results) >= 2
@@ -145,7 +148,7 @@ class TestHelperFunctions:
class TestFindProjectRoot:
"""Tests for _find_project_root function."""
def test_finds_git_root(self, tmp_path):
def test_finds_git_root(self, tmp_path: Path) -> None:
"""Finds .git directory and returns its parent."""
# Create nested structure with .git at root
git_dir = tmp_path / ".git"
@@ -156,7 +159,7 @@ class TestFindProjectRoot:
result = _find_project_root(nested)
assert result == tmp_path
def test_returns_start_path_when_no_git(self, tmp_path):
def test_returns_start_path_when_no_git(self, tmp_path: Path) -> None:
"""Returns start path when no .git found."""
nested = tmp_path / "some" / "path"
nested.mkdir(parents=True)
@@ -165,7 +168,7 @@ class TestFindProjectRoot:
# Should return the path itself (or a parent) since no .git exists
assert result == nested or nested.is_relative_to(result)
def test_handles_root_level_git(self, tmp_path):
def test_handles_root_level_git(self, tmp_path: Path) -> None:
"""Handles .git at the start path itself."""
git_dir = tmp_path / ".git"
git_dir.mkdir()
@@ -178,29 +181,32 @@ class TestSlashCommandController:
"""Tests for SlashCommandController."""
@pytest.fixture
def mock_view(self):
def mock_view(self) -> MagicMock:
"""Create a mock CompletionView."""
view = MagicMock()
return view
return MagicMock()
@pytest.fixture
def controller(self, mock_view):
def controller(self, mock_view: MagicMock) -> SlashCommandController:
"""Create a SlashCommandController with mock view."""
return SlashCommandController(SLASH_COMMANDS, mock_view)
def test_can_handle_slash_prefix(self, controller):
def test_can_handle_slash_prefix(self, controller: SlashCommandController) -> None:
"""Handles text starting with /."""
assert controller.can_handle("/", 1) is True
assert controller.can_handle("/hel", 4) is True
assert controller.can_handle("/help", 5) is True
def test_cannot_handle_non_slash(self, controller):
def test_cannot_handle_non_slash(self, controller: SlashCommandController) -> None:
"""Does not handle text not starting with /."""
assert controller.can_handle("hello", 5) is False
assert controller.can_handle("", 0) is False
assert controller.can_handle("test /cmd", 9) is False
def test_filters_commands_by_prefix(self, controller, mock_view):
def test_filters_commands_by_prefix(
self,
controller: SlashCommandController,
mock_view: MagicMock,
) -> None:
"""Filters commands based on typed prefix."""
controller.on_text_changed("/hel", 4)
@@ -209,7 +215,11 @@ class TestSlashCommandController:
suggestions = mock_view.render_completion_suggestions.call_args[0][0]
assert any("/help" in s[0] for s in suggestions)
def test_shows_all_commands_on_slash_only(self, controller, mock_view):
def test_shows_all_commands_on_slash_only(
self,
controller: SlashCommandController,
mock_view: MagicMock,
) -> None:
"""Shows all commands when just / is typed."""
controller.on_text_changed("/", 1)
@@ -217,7 +227,11 @@ class TestSlashCommandController:
suggestions = mock_view.render_completion_suggestions.call_args[0][0]
assert len(suggestions) == len(SLASH_COMMANDS)
def test_clears_on_no_match(self, controller, mock_view):
def test_clears_on_no_match(
self,
controller: SlashCommandController,
mock_view: MagicMock,
) -> None:
"""Clears suggestions when no commands match after having suggestions."""
# First get some suggestions
controller.on_text_changed("/h", 2)
@@ -227,7 +241,11 @@ class TestSlashCommandController:
controller.on_text_changed("/xyz", 4)
mock_view.clear_completion_suggestions.assert_called()
def test_reset_clears_state(self, controller, mock_view):
def test_reset_clears_state(
self,
controller: SlashCommandController,
mock_view: MagicMock,
) -> None:
"""Reset clears suggestions and state."""
controller.on_text_changed("/h", 2)
controller.reset()
@@ -239,41 +257,41 @@ class TestFuzzyFileControllerCanHandle:
"""Tests for FuzzyFileController.can_handle method."""
@pytest.fixture
def mock_view(self):
def mock_view(self) -> MagicMock:
"""Create a mock CompletionView."""
return MagicMock()
@pytest.fixture
def controller(self, mock_view, tmp_path):
def controller(self, mock_view: MagicMock, tmp_path: Path) -> FuzzyFileController:
"""Create a FuzzyFileController."""
return FuzzyFileController(mock_view, cwd=tmp_path)
def test_handles_at_symbol(self, controller):
def test_handles_at_symbol(self, controller: FuzzyFileController) -> None:
"""Handles text with @ symbol."""
assert controller.can_handle("@", 1) is True
assert controller.can_handle("@file", 5) is True
assert controller.can_handle("look at @src/main.py", 20) is True
def test_handles_at_mid_text(self, controller):
def test_handles_at_mid_text(self, controller: FuzzyFileController) -> None:
"""Handles @ in middle of text."""
assert controller.can_handle("check @file", 11) is True
assert controller.can_handle("see @", 5) is True
def test_no_handle_without_at(self, controller):
def test_no_handle_without_at(self, controller: FuzzyFileController) -> None:
"""Does not handle text without @."""
assert controller.can_handle("hello", 5) is False
assert controller.can_handle("", 0) is False
def test_no_handle_at_after_cursor(self, controller):
def test_no_handle_at_after_cursor(self, controller: FuzzyFileController) -> None:
"""Does not handle @ that's after cursor position."""
assert controller.can_handle("hello @file", 5) is False
def test_no_handle_space_after_at(self, controller):
def test_no_handle_space_after_at(self, controller: FuzzyFileController) -> None:
"""Does not handle @ followed by space before cursor."""
assert controller.can_handle("@ file", 6) is False
assert controller.can_handle("@file name", 10) is False
def test_invalid_cursor_positions(self, controller):
def test_invalid_cursor_positions(self, controller: FuzzyFileController) -> None:
"""Handles invalid cursor positions gracefully."""
assert controller.can_handle("@file", 0) is False
assert controller.can_handle("@file", -1) is False
@@ -284,35 +302,35 @@ class TestMultiCompletionManager:
"""Tests for MultiCompletionManager."""
@pytest.fixture
def mock_view(self):
def mock_view(self) -> MagicMock:
"""Create a mock CompletionView."""
return MagicMock()
@pytest.fixture
def manager(self, mock_view, tmp_path):
def manager(self, mock_view: MagicMock, tmp_path: Path) -> MultiCompletionManager:
"""Create a MultiCompletionManager with both controllers."""
slash_ctrl = SlashCommandController(SLASH_COMMANDS, mock_view)
file_ctrl = FuzzyFileController(mock_view, cwd=tmp_path)
return MultiCompletionManager([slash_ctrl, file_ctrl])
def test_activates_slash_controller_for_slash(self, manager):
def test_activates_slash_controller_for_slash(self, manager: MultiCompletionManager) -> None:
"""Activates slash controller for / prefix."""
manager.on_text_changed("/help", 5)
assert manager._active is not None
assert isinstance(manager._active, SlashCommandController)
def test_activates_file_controller_for_at(self, manager):
def test_activates_file_controller_for_at(self, manager: MultiCompletionManager) -> None:
"""Activates file controller for @ prefix."""
manager.on_text_changed("@file", 5)
assert manager._active is not None
assert isinstance(manager._active, FuzzyFileController)
def test_no_active_for_plain_text(self, manager):
def test_no_active_for_plain_text(self, manager: MultiCompletionManager) -> None:
"""No controller active for plain text."""
manager.on_text_changed("hello world", 11)
assert manager._active is None
def test_switches_controllers(self, manager):
def test_switches_controllers(self, manager: MultiCompletionManager) -> None:
"""Switches between controllers as input changes."""
manager.on_text_changed("/cmd", 4)
assert isinstance(manager._active, SlashCommandController)
@@ -320,7 +338,7 @@ class TestMultiCompletionManager:
manager.on_text_changed("@file", 5)
assert isinstance(manager._active, FuzzyFileController)
def test_reset_clears_active(self, manager):
def test_reset_clears_active(self, manager: MultiCompletionManager) -> None:
"""Reset clears active controller."""
manager.on_text_changed("/cmd", 4)
manager.reset()
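The behaviors these tests pin down — root-level files first on an empty query, dotfile filtering unless the query starts with `.`, and a result limit — can be sketched with a deliberately crude subsequence matcher; this is not the real `_fuzzy_score`/`_fuzzy_search` implementation.

```python
def fuzzy_search(
    query: str, files: list[str], limit: int = 10, include_dotfiles: bool = False
) -> list[str]:
    """Crude sketch of the tested behaviors, not the real implementation."""
    if not (include_dotfiles or query.startswith(".")):
        # Filter dotfiles by default, as the tests expect.
        files = [f for f in files if not any(p.startswith(".") for p in f.split("/"))]
    if not query:
        # Empty query: shallower paths first, then alphabetical.
        return sorted(files, key=lambda f: (f.count("/"), f))[:limit]
    q = query.lower()

    def is_subsequence(path: str) -> bool:
        # Consume the path's characters left to right, matching the query.
        it = iter(path.lower())
        return all(ch in it for ch in q)

    return [f for f in files if is_subsequence(f)][:limit]


sample = ["README.md", "src/main.py", "src/utils.py", ".git/config"]
```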

View File

@@ -29,10 +29,10 @@ class FixedGenericFakeChatModel(GenericFakeChatModel):
def bind_tools(
self,
tools: Sequence[dict[str, Any] | type | Callable | BaseTool],
_tools: Sequence[dict[str, Any] | type | Callable | BaseTool],
*,
tool_choice: str | None = None,
**kwargs: Any,
_tool_choice: str | None = None,
**_kwargs: Any,
) -> Runnable[LanguageModelInput, AIMessage]:
"""Override bind_tools to return self."""
return self
@@ -65,8 +65,9 @@ def mock_settings(tmp_path: Path, assistant_id: str = "test-agent") -> Generator
mock_settings_obj.ensure_user_skills_dir.return_value = skills_dir
mock_settings_obj.get_project_skills_dir.return_value = None
# Mock methods that get called during agent execution to return real Path objects
# This prevents MagicMock objects from being stored in state (which would fail serialization)
# Mock methods that get called during agent execution to return real Path objects.
# This prevents MagicMock objects from being stored in state
# (which would fail serialization).
def get_user_agent_md_path(agent_id: str) -> Path:
return tmp_path / "agents" / agent_id / "agent.md"
@@ -114,7 +115,7 @@ class TestDeepAgentsCLIEndToEnd:
)
# Create a CLI agent with the fake model
agent, backend = create_cli_agent(
agent, _ = create_cli_agent(
model=model,
assistant_id="test-agent",
tools=[],
@@ -168,7 +169,7 @@ class TestDeepAgentsCLIEndToEnd:
)
# Create a CLI agent with the fake model and sample_tool
agent, backend = create_cli_agent(
agent, _ = create_cli_agent(
model=model,
assistant_id="test-agent",
tools=[sample_tool],
@@ -224,7 +225,7 @@ class TestDeepAgentsCLIEndToEnd:
)
# Create a CLI agent with the fake model
agent, backend = create_cli_agent(
agent, _ = create_cli_agent(
model=model,
assistant_id="test-agent",
tools=[],
@@ -284,7 +285,7 @@ class TestDeepAgentsCLIEndToEnd:
)
# Create a CLI agent with the fake model and sample_tool
agent, backend = create_cli_agent(
agent, _ = create_cli_agent(
model=model,
assistant_id="test-agent",
tools=[sample_tool],
@@ -325,7 +326,7 @@ class TestDeepAgentsCLIEndToEnd:
)
# Create a CLI agent
agent, backend = create_cli_agent(
_, backend = create_cli_agent(
model=model,
assistant_id="test-agent",
tools=[],

View File

@@ -3,6 +3,7 @@
import asyncio
import json
import sqlite3
from pathlib import Path
from unittest.mock import patch
import pytest
@@ -34,7 +35,7 @@ class TestThreadFunctions:
"""Tests for thread query functions."""
@pytest.fixture
def temp_db(self, tmp_path):
def temp_db(self, tmp_path: Path) -> Path:
"""Create a temporary database with test data."""
db_path = tmp_path / "test_sessions.db"
@@ -81,7 +82,8 @@ class TestThreadFunctions:
for tid, agent, updated in threads:
metadata = json.dumps({"agent_name": agent, "updated_at": updated})
conn.execute(
"INSERT INTO checkpoints (thread_id, checkpoint_ns, checkpoint_id, metadata) VALUES (?, '', ?, ?)",
"INSERT INTO checkpoints (thread_id, checkpoint_ns, checkpoint_id, metadata) "
"VALUES (?, '', ?, ?)",
(tid, f"cp_{tid}", metadata),
)
@@ -90,7 +92,7 @@ class TestThreadFunctions:
return db_path
def test_list_threads_empty(self, tmp_path):
def test_list_threads_empty(self, tmp_path: Path) -> None:
"""List returns empty when no threads exist."""
db_path = tmp_path / "empty.db"
# Create empty db with table structure
@@ -110,38 +112,38 @@ class TestThreadFunctions:
threads = asyncio.run(sessions.list_threads())
assert threads == []
def test_list_threads(self, temp_db):
def test_list_threads(self, temp_db: Path) -> None:
"""List returns all threads."""
with patch.object(sessions, "get_db_path", return_value=temp_db):
threads = asyncio.run(sessions.list_threads())
assert len(threads) == 3
def test_list_threads_filter_by_agent(self, temp_db):
def test_list_threads_filter_by_agent(self, temp_db: Path) -> None:
"""List filters by agent name."""
with patch.object(sessions, "get_db_path", return_value=temp_db):
threads = asyncio.run(sessions.list_threads(agent_name="agent1"))
assert len(threads) == 2
assert all(t["agent_name"] == "agent1" for t in threads)
def test_list_threads_limit(self, temp_db):
def test_list_threads_limit(self, temp_db: Path) -> None:
"""List respects limit."""
with patch.object(sessions, "get_db_path", return_value=temp_db):
threads = asyncio.run(sessions.list_threads(limit=2))
assert len(threads) == 2
def test_get_most_recent(self, temp_db):
def test_get_most_recent(self, temp_db: Path) -> None:
"""Get most recent returns latest thread."""
with patch.object(sessions, "get_db_path", return_value=temp_db):
tid = asyncio.run(sessions.get_most_recent())
assert tid is not None
def test_get_most_recent_filter(self, temp_db):
def test_get_most_recent_filter(self, temp_db: Path) -> None:
"""Get most recent filters by agent."""
with patch.object(sessions, "get_db_path", return_value=temp_db):
tid = asyncio.run(sessions.get_most_recent(agent_name="agent2"))
assert tid == "thread2"
def test_get_most_recent_empty(self, tmp_path):
def test_get_most_recent_empty(self, tmp_path: Path) -> None:
"""Get most recent returns None when empty."""
db_path = tmp_path / "empty.db"
# Create empty db with table structure
@@ -161,36 +163,36 @@ class TestThreadFunctions:
tid = asyncio.run(sessions.get_most_recent())
assert tid is None
def test_thread_exists(self, temp_db):
def test_thread_exists(self, temp_db: Path) -> None:
"""Thread exists returns True for existing thread."""
with patch.object(sessions, "get_db_path", return_value=temp_db):
assert asyncio.run(sessions.thread_exists("thread1")) is True
def test_thread_not_exists(self, temp_db):
def test_thread_not_exists(self, temp_db: Path) -> None:
"""Thread exists returns False for non-existing thread."""
with patch.object(sessions, "get_db_path", return_value=temp_db):
assert asyncio.run(sessions.thread_exists("nonexistent")) is False
def test_get_thread_agent(self, temp_db):
def test_get_thread_agent(self, temp_db: Path) -> None:
"""Get thread agent returns correct agent name."""
with patch.object(sessions, "get_db_path", return_value=temp_db):
agent = asyncio.run(sessions.get_thread_agent("thread1"))
assert agent == "agent1"
def test_get_thread_agent_not_found(self, temp_db):
def test_get_thread_agent_not_found(self, temp_db: Path) -> None:
"""Get thread agent returns None for non-existing thread."""
with patch.object(sessions, "get_db_path", return_value=temp_db):
agent = asyncio.run(sessions.get_thread_agent("nonexistent"))
assert agent is None
def test_delete_thread(self, temp_db):
def test_delete_thread(self, temp_db: Path) -> None:
"""Delete thread removes thread."""
with patch.object(sessions, "get_db_path", return_value=temp_db):
result = asyncio.run(sessions.delete_thread("thread1"))
assert result is True
assert asyncio.run(sessions.thread_exists("thread1")) is False
def test_delete_thread_not_found(self, temp_db):
def test_delete_thread_not_found(self, temp_db: Path) -> None:
"""Delete thread returns False for non-existing thread."""
with patch.object(sessions, "get_db_path", return_value=temp_db):
result = asyncio.run(sessions.delete_thread("nonexistent"))
@@ -200,10 +202,10 @@ class TestThreadFunctions:
class TestGetCheckpointer:
"""Tests for get_checkpointer async context manager."""
def test_returns_async_sqlite_saver(self, tmp_path):
def test_returns_async_sqlite_saver(self, tmp_path: Path) -> None:
"""Get checkpointer returns AsyncSqliteSaver."""
async def _test():
async def _test() -> None:
db_path = tmp_path / "test.db"
with patch.object(sessions, "get_db_path", return_value=db_path):
async with sessions.get_checkpointer() as cp:
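The fixture above builds a minimal `checkpoints` table whose `metadata` column holds JSON with `agent_name` and `updated_at`. How `get_most_recent` actually ranks threads is not shown, so ranking by the JSON `updated_at` field below is an assumption, and the timestamp values are illustrative.

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE checkpoints ("
    "thread_id TEXT, checkpoint_ns TEXT, checkpoint_id TEXT, metadata TEXT)"
)
threads = [
    ("thread1", "agent1", "2024-01-01T00:00:00"),
    ("thread2", "agent2", "2024-01-03T00:00:00"),
    ("thread3", "agent1", "2024-01-02T00:00:00"),
]
for tid, agent, updated in threads:
    metadata = json.dumps({"agent_name": agent, "updated_at": updated})
    conn.execute(
        "INSERT INTO checkpoints (thread_id, checkpoint_ns, checkpoint_id, metadata) "
        "VALUES (?, '', ?, ?)",
        (tid, f"cp_{tid}", metadata),
    )

# Assumed ranking: most recent thread = largest JSON updated_at.
rows = conn.execute("SELECT thread_id, metadata FROM checkpoints").fetchall()
most_recent = max(rows, key=lambda r: json.loads(r[1])["updated_at"])[0]
```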

View File

@@ -1,4 +1,4 @@
"""DeepAgents package."""
"""DeepAgents 패키지."""
from deepagents.graph import create_deep_agent
from deepagents.middleware.filesystem import FilesystemMiddleware

View File

@@ -1,4 +1,4 @@
"""Memory backends for pluggable file storage."""
"""플러그인 가능한 파일 저장을 위한 메모리 백엔드입니다."""
from deepagents.backends.composite import CompositeBackend
from deepagents.backends.filesystem import FilesystemBackend

View File

@@ -1,17 +1,20 @@
"""Composite backend that routes file operations by path prefix.
"""경로 prefix에 따라 파일 작업을 라우팅하는 합성(Composite) 백엔드입니다.
Routes operations to different backends based on path prefixes. Use this when you
need different storage strategies for different paths (e.g., state for temp files,
persistent store for memories).
경로(prefix) 규칙에 따라 서로 다른 백엔드로 파일 작업을 위임합니다. 예를 들어,
임시 파일은 `StateBackend`에, 장기 메모리는 `StoreBackend`에 저장하는 식으로
경로별 저장 전략을 분리하고 싶을 때 사용합니다.
Examples:
예시:
```python
from deepagents.backends.composite import CompositeBackend
from deepagents.backends.state import StateBackend
from deepagents.backends.store import StoreBackend
runtime = make_runtime()
composite = CompositeBackend(default=StateBackend(runtime), routes={"/memories/": StoreBackend(runtime)})
composite = CompositeBackend(
default=StateBackend(runtime),
routes={"/memories/": StoreBackend(runtime)},
)
composite.write("/temp.txt", "ephemeral")
composite.write("/memories/note.md", "persistent")
@@ -35,19 +38,22 @@ from deepagents.backends.state import StateBackend
class CompositeBackend(BackendProtocol):
"""Routes file operations to different backends by path prefix.
"""경로 prefix에 따라 파일 작업을 서로 다른 백엔드로 위임합니다.
Matches paths against route prefixes (longest first) and delegates to the
corresponding backend. Unmatched paths use the default backend.
경로를 route prefix(길이가 긴 것부터)와 매칭한 후 해당 백엔드로 위임합니다.
어떤 prefix에도 매칭되지 않으면 `default` 백엔드를 사용합니다.
Attributes:
default: Backend for paths that don't match any route.
routes: Map of path prefixes to backends (e.g., {"/memories/": store_backend}).
sorted_routes: Routes sorted by length (longest first) for correct matching.
default: 어떤 route에도 매칭되지 않을 때 사용할 백엔드.
routes: 경로 prefix → 백엔드 매핑(예: `{"/memories/": store_backend}`).
sorted_routes: 올바른 매칭을 위해 길이 기준(내림차순)으로 정렬한 routes.
Examples:
예시:
```python
composite = CompositeBackend(default=StateBackend(runtime), routes={"/memories/": StoreBackend(runtime), "/cache/": StoreBackend(runtime)})
composite = CompositeBackend(
default=StateBackend(runtime),
routes={"/memories/": StoreBackend(runtime), "/cache/": StoreBackend(runtime)},
)
composite.write("/temp.txt", "data")
composite.write("/memories/note.txt", "data")
@@ -59,37 +65,37 @@ class CompositeBackend(BackendProtocol):
default: BackendProtocol | StateBackend,
routes: dict[str, BackendProtocol],
) -> None:
"""Initialize composite backend.
"""합성 백엔드를 초기화합니다.
Args:
default: Backend for paths that don't match any route.
routes: Map of path prefixes to backends. Prefixes must start with "/"
and should end with "/" (e.g., "/memories/").
default: 어떤 route에도 매칭되지 않을 때 사용할 백엔드.
routes: 경로 prefix → 백엔드 매핑. prefix는 반드시 `/`로 시작해야 하며,
일반적으로 `/`로 끝나도록 지정하는 것을 권장합니다(예: `/memories/`).
"""
# Default backend
# 기본(default) 백엔드
self.default = default
# Virtual routes
# 가상(virtual) route 설정
self.routes = routes
# Sort routes by length (longest first) for correct prefix matching
# prefix 매칭이 올바르게 동작하도록 길이 기준(내림차순)으로 정렬
self.sorted_routes = sorted(routes.items(), key=lambda x: len(x[0]), reverse=True)
def _get_backend_and_key(self, key: str) -> tuple[BackendProtocol, str]:
"""Get backend for path and strip route prefix.
"""경로에 맞는 백엔드를 찾고 route prefix를 제거한 경로를 반환합니다.
Args:
key: File path to route.
key: 라우팅할 파일 경로.
Returns:
Tuple of (backend, stripped_path). The stripped path has the route
prefix removed but keeps the leading slash.
`(backend, stripped_path)` 튜플. `stripped_path`는 route prefix를 제거하되
선행 `/`는 유지합니다.
"""
# Check routes in order of length (longest first)
# route prefix 길이가 긴 것부터 확인(가장 구체적인 route 우선)
for prefix, backend in self.sorted_routes:
if key.startswith(prefix):
# Strip full prefix and ensure a leading slash remains
# e.g., "/memories/notes.txt" → "/notes.txt"; "/memories/" → "/"
# prefix를 제거하되, 선행 슬래시를 유지
# 예: "/memories/notes.txt" → "/notes.txt", "/memories/" → "/"
suffix = key[len(prefix) :]
stripped_key = f"/{suffix}" if suffix else "/"
return backend, stripped_key
@@ -97,28 +103,29 @@ class CompositeBackend(BackendProtocol):
return self.default, key
def ls_info(self, path: str) -> list[FileInfo]:
"""List directory contents (non-recursive).
"""디렉토리 내용을 나열합니다(비재귀).
If path matches a route, lists only that backend. If path is "/", aggregates
default backend plus virtual route directories. Otherwise lists default backend.
- `path`가 특정 route에 매칭되면 해당 백엔드만 나열합니다.
- `path == "/"`이면 default 백엔드 + 가상 route 디렉토리를 함께 반환합니다.
- 그 외에는 default 백엔드만 나열합니다.
Args:
path: Absolute directory path starting with "/".
path: `/`로 시작하는 절대 디렉토리 경로.
Returns:
List of FileInfo dicts. Directories have trailing "/" and is_dir=True.
Route prefixes are restored in returned paths.
FileInfo 딕셔너리 리스트. 디렉토리는 `path`가 `/`로 끝나며 `is_dir=True`입니다.
route를 통해 조회된 경우 반환 경로에는 원래의 route prefix가 복원됩니다.
Examples:
예시:
```python
infos = composite.ls_info("/")
infos = composite.ls_info("/memories/")
```
"""
# Check if path matches a specific route
# path가 특정 route에 매칭되는지 확인
for route_prefix, backend in self.sorted_routes:
if path.startswith(route_prefix.rstrip("/")):
# Query only the matching routed backend
# 매칭된 routed backend만 조회
suffix = path[len(route_prefix) :]
search_path = f"/{suffix}" if suffix else "/"
infos = backend.ls_info(search_path)
@@ -129,12 +136,12 @@ class CompositeBackend(BackendProtocol):
prefixed.append(fi)
return prefixed
# At root, aggregate default and all routed backends
# 루트에서는 default + 모든 route 디렉토리를 합산
if path == "/":
results: list[FileInfo] = []
results.extend(self.default.ls_info(path))
for route_prefix, backend in self.sorted_routes:
# Add the route itself as a directory (e.g., /memories/)
# route 자체를 디렉토리로 추가(예: /memories/)
results.append(
{
"path": route_prefix,
@@ -147,15 +154,15 @@ class CompositeBackend(BackendProtocol):
results.sort(key=lambda x: x.get("path", ""))
return results
# Path doesn't match a route: query only default backend
# 어떤 route에도 매칭되지 않으면 default backend만 조회
return self.default.ls_info(path)
async def als_info(self, path: str) -> list[FileInfo]:
"""Async version of ls_info."""
# Check if path matches a specific route
"""`ls_info`의 async 버전입니다."""
# path가 특정 route에 매칭되는지 확인
for route_prefix, backend in self.sorted_routes:
if path.startswith(route_prefix.rstrip("/")):
# Query only the matching routed backend
# 매칭된 routed backend만 조회
suffix = path[len(route_prefix) :]
search_path = f"/{suffix}" if suffix else "/"
infos = await backend.als_info(search_path)
@@ -166,12 +173,12 @@ class CompositeBackend(BackendProtocol):
prefixed.append(fi)
return prefixed
# At root, aggregate default and all routed backends
# 루트에서는 default + 모든 route 디렉토리를 합산
if path == "/":
results: list[FileInfo] = []
results.extend(await self.default.als_info(path))
for route_prefix, backend in self.sorted_routes:
# Add the route itself as a directory (e.g., /memories/)
# route 자체를 디렉토리로 추가(예: /memories/)
results.append(
{
"path": route_prefix,
@@ -184,7 +191,7 @@ class CompositeBackend(BackendProtocol):
results.sort(key=lambda x: x.get("path", ""))
return results
# Path doesn't match a route: query only default backend
# 어떤 route에도 매칭되지 않으면 default backend만 조회
return await self.default.als_info(path)
def read(
@@ -193,15 +200,15 @@ class CompositeBackend(BackendProtocol):
offset: int = 0,
limit: int = 2000,
) -> str:
"""Read file content, routing to appropriate backend.
"""파일 내용을 읽습니다(경로에 맞는 백엔드로 라우팅).
Args:
file_path: Absolute file path.
offset: Line offset to start reading from (0-indexed).
limit: Maximum number of lines to read.
file_path: 절대 파일 경로.
offset: 읽기 시작 라인 오프셋(0-index).
limit: 최대 읽기 라인 수.
Returns:
Formatted file content with line numbers, or error message.
라인 번호가 포함된 포맷 문자열 또는 오류 메시지.
"""
backend, stripped_key = self._get_backend_and_key(file_path)
return backend.read(stripped_key, offset=offset, limit=limit)
@@ -212,7 +219,7 @@ class CompositeBackend(BackendProtocol):
offset: int = 0,
limit: int = 2000,
) -> str:
"""Async version of read."""
"""`read`의 async 버전입니다."""
backend, stripped_key = self._get_backend_and_key(file_path)
return await backend.aread(stripped_key, offset=offset, limit=limit)
@@ -222,10 +229,12 @@ class CompositeBackend(BackendProtocol):
path: str | None = None,
glob: str | None = None,
) -> list[GrepMatch] | str:
"""Search files for regex pattern.
"""파일에서 정규식 패턴을 검색합니다.
Routes to backends based on path: specific route searches one backend,
"/" or None searches all backends, otherwise searches default backend.
`path`에 따라 검색 대상 백엔드를 라우팅합니다.
- 특정 route에 매칭되면 해당 백엔드만 검색
- `"/"` 또는 `None`이면 default + 모든 route 백엔드를 검색하여 병합
- 그 외는 default 백엔드만 검색
Args:
pattern: Regex pattern to search for.
@@ -244,7 +253,7 @@ class CompositeBackend(BackendProtocol):
matches = composite.grep_raw("import", path="/", glob="*.py")
```
"""
# If path targets a specific route, search only that backend
# path가 특정 route를 가리키면 해당 백엔드만 검색
for route_prefix, backend in self.sorted_routes:
if path is not None and path.startswith(route_prefix.rstrip("/")):
search_path = path[len(route_prefix) - 1 :]
@@ -253,20 +262,20 @@ class CompositeBackend(BackendProtocol):
return raw
return [{**m, "path": f"{route_prefix[:-1]}{m['path']}"} for m in raw]
# If path is None or "/", search default and all routed backends and merge
# Otherwise, search only the default backend
# path None 또는 "/"이면 default + 모든 route 백엔드를 검색하여 병합
# 그 외에는 default 백엔드만 검색
if path is None or path == "/":
all_matches: list[GrepMatch] = []
raw_default = self.default.grep_raw(pattern, path, glob) # type: ignore[attr-defined]
if isinstance(raw_default, str):
# This happens if error occurs
# 에러가 발생하면 문자열 오류 메시지가 반환됩니다.
return raw_default
all_matches.extend(raw_default)
for route_prefix, backend in self.routes.items():
raw = backend.grep_raw(pattern, "/", glob)
if isinstance(raw, str):
# This happens if error occurs
# 에러가 발생하면 문자열 오류 메시지가 반환됩니다.
return raw
all_matches.extend({**m, "path": f"{route_prefix[:-1]}{m['path']}"} for m in raw)
@@ -280,11 +289,11 @@ class CompositeBackend(BackendProtocol):
path: str | None = None,
glob: str | None = None,
) -> list[GrepMatch] | str:
"""Async version of grep_raw.
"""`grep_raw`의 async 버전입니다.
See grep_raw() for detailed documentation on routing behavior and parameters.
라우팅 동작과 파라미터에 대한 자세한 설명은 `grep_raw()`를 참고하세요.
"""
# If path targets a specific route, search only that backend
# path가 특정 route를 가리키면 해당 백엔드만 검색
for route_prefix, backend in self.sorted_routes:
if path is not None and path.startswith(route_prefix.rstrip("/")):
search_path = path[len(route_prefix) - 1 :]
@@ -293,20 +302,20 @@ class CompositeBackend(BackendProtocol):
return raw
return [{**m, "path": f"{route_prefix[:-1]}{m['path']}"} for m in raw]
# If path is None or "/", search default and all routed backends and merge
# Otherwise, search only the default backend
# path None 또는 "/"이면 default + 모든 route 백엔드를 검색하여 병합
# 그 외에는 default 백엔드만 검색
if path is None or path == "/":
all_matches: list[GrepMatch] = []
raw_default = await self.default.agrep_raw(pattern, path, glob) # type: ignore[attr-defined]
if isinstance(raw_default, str):
# This happens if error occurs
# 에러가 발생하면 문자열 오류 메시지가 반환됩니다.
return raw_default
all_matches.extend(raw_default)
for route_prefix, backend in self.routes.items():
raw = await backend.agrep_raw(pattern, "/", glob)
if isinstance(raw, str):
# This happens if error occurs
# 에러가 발생하면 문자열 오류 메시지가 반환됩니다.
return raw
all_matches.extend({**m, "path": f"{route_prefix[:-1]}{m['path']}"} for m in raw)
@@ -336,24 +345,24 @@ class CompositeBackend(BackendProtocol):
return results
async def aglob_info(self, pattern: str, path: str = "/") -> list[FileInfo]:
"""Async version of glob_info."""
"""`glob_info`의 async 버전입니다."""
results: list[FileInfo] = []
# Route based on path, not pattern
# pattern이 아니라 path 기준으로 라우팅
for route_prefix, backend in self.sorted_routes:
if path.startswith(route_prefix.rstrip("/")):
search_path = path[len(route_prefix) - 1 :]
infos = await backend.aglob_info(pattern, search_path if search_path else "/")
return [{**fi, "path": f"{route_prefix[:-1]}{fi['path']}"} for fi in infos]
# Path doesn't match any specific route - search default backend AND all routed backends
# 어떤 route에도 매칭되지 않으면 default + 모든 route 백엔드를 검색
results.extend(await self.default.aglob_info(pattern, path))
for route_prefix, backend in self.routes.items():
infos = await backend.aglob_info(pattern, "/")
results.extend({**fi, "path": f"{route_prefix[:-1]}{fi['path']}"} for fi in infos)
# Deterministic ordering
# 결정적 순서(deterministic ordering) 보장
results.sort(key=lambda x: x.get("path", ""))
return results
@@ -362,7 +371,7 @@ class CompositeBackend(BackendProtocol):
file_path: str,
content: str,
) -> WriteResult:
"""Create a new file, routing to appropriate backend.
"""새 파일을 생성합니다(경로에 맞는 백엔드로 라우팅).
Args:
file_path: Absolute file path.
@@ -373,7 +382,8 @@ class CompositeBackend(BackendProtocol):
"""
backend, stripped_key = self._get_backend_and_key(file_path)
res = backend.write(stripped_key, content)
# If this is a state-backed update and default has state, merge so listings reflect changes
# state-backed 업데이트(그리고 default state를 갖는 경우)면,
# listing이 변경을 반영하도록 default state에도 병합합니다.
if res.files_update:
try:
runtime = getattr(self.default, "runtime", None)
@@ -391,10 +401,11 @@ class CompositeBackend(BackendProtocol):
file_path: str,
content: str,
) -> WriteResult:
"""Async version of write."""
"""`write`의 async 버전입니다."""
backend, stripped_key = self._get_backend_and_key(file_path)
res = await backend.awrite(stripped_key, content)
# If this is a state-backed update and default has state, merge so listings reflect changes
# state-backed 업데이트(그리고 default state를 갖는 경우)면,
# listing이 변경을 반영하도록 default state에도 병합합니다.
if res.files_update:
try:
runtime = getattr(self.default, "runtime", None)
@@ -414,7 +425,7 @@ class CompositeBackend(BackendProtocol):
new_string: str,
replace_all: bool = False,
) -> EditResult:
"""Edit a file, routing to appropriate backend.
"""파일을 편집합니다(경로에 맞는 백엔드로 라우팅).
Args:
file_path: Absolute file path.
@@ -446,7 +457,7 @@ class CompositeBackend(BackendProtocol):
new_string: str,
replace_all: bool = False,
) -> EditResult:
"""Async version of edit."""
"""`edit`의 async 버전입니다."""
backend, stripped_key = self._get_backend_and_key(file_path)
res = await backend.aedit(stripped_key, old_string, new_string, replace_all=replace_all)
if res.files_update:
@@ -465,7 +476,7 @@ class CompositeBackend(BackendProtocol):
self,
command: str,
) -> ExecuteResponse:
"""Execute shell command via default backend.
"""Default 백엔드를 통해 셸 커맨드를 실행합니다.
Args:
command: Shell command to execute.
@@ -486,8 +497,8 @@ class CompositeBackend(BackendProtocol):
if isinstance(self.default, SandboxBackendProtocol):
return self.default.execute(command)
# This shouldn't be reached if the runtime check in the execute tool works correctly,
# but we include it as a safety fallback.
# execute 도구의 런타임 체크가 제대로 동작한다면 여기에 도달하지 않아야 하지만,
# 안전장치(fallback)로 예외를 둡니다.
raise NotImplementedError(
"Default backend doesn't support command execution (SandboxBackendProtocol). "
"To enable execution, provide a default backend that implements SandboxBackendProtocol."
@@ -497,19 +508,19 @@ class CompositeBackend(BackendProtocol):
self,
command: str,
) -> ExecuteResponse:
"""Async version of execute."""
"""`execute`의 async 버전입니다."""
if isinstance(self.default, SandboxBackendProtocol):
return await self.default.aexecute(command)
# This shouldn't be reached if the runtime check in the execute tool works correctly,
# but we include it as a safety fallback.
# execute 도구의 런타임 체크가 제대로 동작한다면 여기에 도달하지 않아야 하지만,
# 안전장치(fallback)로 예외를 둡니다.
raise NotImplementedError(
"Default backend doesn't support command execution (SandboxBackendProtocol). "
"To enable execution, provide a default backend that implements SandboxBackendProtocol."
)
def upload_files(self, files: list[tuple[str, bytes]]) -> list[FileUploadResponse]:
"""Upload multiple files, batching by backend for efficiency.
"""여러 파일을 업로드합니다(백엔드별로 배치 처리하여 효율화).
Groups files by their target backend, calls each backend's upload_files
once with all files for that backend, then merges results in original order.
@@ -521,10 +532,10 @@ class CompositeBackend(BackendProtocol):
List of FileUploadResponse objects, one per input file.
Response order matches input order.
"""
# Pre-allocate result list
# 결과 리스트를 미리 할당
results: list[FileUploadResponse | None] = [None] * len(files)
# Group files by backend, tracking original indices
# 원래 인덱스를 유지하면서 백엔드별로 파일을 그룹화
from collections import defaultdict
backend_batches: dict[BackendProtocol, list[tuple[int, str, bytes]]] = defaultdict(list)
@@ -533,16 +544,16 @@ class CompositeBackend(BackendProtocol):
backend, stripped_path = self._get_backend_and_key(path)
backend_batches[backend].append((idx, stripped_path, content))
# Process each backend's batch
# 백엔드별 배치를 처리
for backend, batch in backend_batches.items():
# Extract data for backend call
# 백엔드 호출에 필요한 데이터로 분해
indices, stripped_paths, contents = zip(*batch, strict=False)
batch_files = list(zip(stripped_paths, contents, strict=False))
# Call backend once with all its files
# 해당 백엔드로 1회 호출(배치)
batch_responses = backend.upload_files(batch_files)
# Place responses at original indices with original paths
# 원래 경로/인덱스 위치에 응답을 채웁니다.
for i, orig_idx in enumerate(indices):
results[orig_idx] = FileUploadResponse(
path=files[orig_idx][0], # Original path
@@ -552,27 +563,27 @@ class CompositeBackend(BackendProtocol):
return results # type: ignore[return-value]
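위 `upload_files`의 "백엔드별로 묶어 1회 호출한 뒤 원래 순서로 결과를 복원"하는 패턴은 다음처럼 독립 함수로 스케치할 수 있습니다. `route` 함수와 경로를 echo하는 더미 응답은 설명을 위한 가정이며, 실제 구현은 각 백엔드의 `upload_files`를 호출합니다.

```python
from collections import defaultdict

def batch_by_backend(files, backend_of):
    # 결과 리스트를 미리 할당해 입력 순서를 보존합니다.
    results = [None] * len(files)
    batches = defaultdict(list)
    for idx, (path, content) in enumerate(files):
        backend, stripped = backend_of(path)
        batches[backend].append((idx, stripped, content))
    for backend, batch in batches.items():
        indices = [i for i, _, _ in batch]
        # 실제 구현은 여기서 backend.upload_files(...)를 1회 호출합니다.
        responses = [f"{backend}:{p}" for _, p, _ in batch]  # 가정: 경로를 echo하는 더미 응답
        for i, orig_idx in enumerate(indices):
            results[orig_idx] = (files[orig_idx][0], responses[i])
    return results

files = [("/memories/a.txt", b"1"), ("/tmp.txt", b"2"), ("/memories/b.txt", b"3")]

def route(p):
    if p.startswith("/memories/"):
        return "store", "/" + p[len("/memories/"):]
    return "state", p

print(batch_by_backend(files, route))
# [('/memories/a.txt', 'store:/a.txt'), ('/tmp.txt', 'state:/tmp.txt'), ('/memories/b.txt', 'store:/b.txt')]
```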
async def aupload_files(self, files: list[tuple[str, bytes]]) -> list[FileUploadResponse]:
"""Async version of upload_files."""
# Pre-allocate result list
"""`upload_files`의 async 버전입니다."""
# 결과 리스트를 미리 할당
results: list[FileUploadResponse | None] = [None] * len(files)
# Group files by backend, tracking original indices
# 원래 인덱스를 유지하면서 백엔드별로 파일을 그룹화
backend_batches: dict[BackendProtocol, list[tuple[int, str, bytes]]] = defaultdict(list)
for idx, (path, content) in enumerate(files):
backend, stripped_path = self._get_backend_and_key(path)
backend_batches[backend].append((idx, stripped_path, content))
# Process each backend's batch
# 백엔드별 배치를 처리
for backend, batch in backend_batches.items():
# Extract data for backend call
# 백엔드 호출에 필요한 데이터로 분해
indices, stripped_paths, contents = zip(*batch, strict=False)
batch_files = list(zip(stripped_paths, contents, strict=False))
# Call backend once with all its files
# 해당 백엔드로 1회 호출(배치)
batch_responses = await backend.aupload_files(batch_files)
# Place responses at original indices with original paths
# 원래 경로/인덱스 위치에 응답을 채웁니다.
for i, orig_idx in enumerate(indices):
results[orig_idx] = FileUploadResponse(
path=files[orig_idx][0], # Original path
@@ -582,7 +593,7 @@ class CompositeBackend(BackendProtocol):
return results # type: ignore[return-value]
def download_files(self, paths: list[str]) -> list[FileDownloadResponse]:
"""Download multiple files, batching by backend for efficiency.
"""여러 파일을 다운로드합니다(백엔드별로 배치 처리하여 효율화).
Groups paths by their target backend, calls each backend's download_files
once with all paths for that backend, then merges results in original order.
@@ -594,7 +605,7 @@ class CompositeBackend(BackendProtocol):
List of FileDownloadResponse objects, one per input path.
Response order matches input order.
"""
# Pre-allocate result list
# 결과 리스트를 미리 할당
results: list[FileDownloadResponse | None] = [None] * len(paths)
backend_batches: dict[BackendProtocol, list[tuple[int, str]]] = defaultdict(list)
@@ -603,15 +614,15 @@ class CompositeBackend(BackendProtocol):
backend, stripped_path = self._get_backend_and_key(path)
backend_batches[backend].append((idx, stripped_path))
# Process each backend's batch
# 백엔드별 배치를 처리
for backend, batch in backend_batches.items():
# Extract data for backend call
# 백엔드 호출에 필요한 데이터로 분해
indices, stripped_paths = zip(*batch, strict=False)
# Call backend once with all its paths
# 해당 백엔드로 1회 호출(배치)
batch_responses = backend.download_files(list(stripped_paths))
# Place responses at original indices with original paths
# 원래 경로/인덱스 위치에 응답을 채웁니다.
for i, orig_idx in enumerate(indices):
results[orig_idx] = FileDownloadResponse(
path=paths[orig_idx], # Original path
@@ -622,8 +633,8 @@ class CompositeBackend(BackendProtocol):
return results # type: ignore[return-value]
async def adownload_files(self, paths: list[str]) -> list[FileDownloadResponse]:
"""Async version of download_files."""
# Pre-allocate result list
"""`download_files`의 async 버전입니다."""
# 결과 리스트를 미리 할당
results: list[FileDownloadResponse | None] = [None] * len(paths)
backend_batches: dict[BackendProtocol, list[tuple[int, str]]] = defaultdict(list)
@@ -632,15 +643,15 @@ class CompositeBackend(BackendProtocol):
backend, stripped_path = self._get_backend_and_key(path)
backend_batches[backend].append((idx, stripped_path))
# Process each backend's batch
# 백엔드별 배치를 처리
for backend, batch in backend_batches.items():
# Extract data for backend call
# 백엔드 호출에 필요한 데이터로 분해
indices, stripped_paths = zip(*batch, strict=False)
# Call backend once with all its paths
# 해당 백엔드로 1회 호출(배치)
batch_responses = await backend.adownload_files(list(stripped_paths))
# Place responses at original indices with original paths
# 원래 경로/인덱스 위치에 응답을 채웁니다.
for i, orig_idx in enumerate(indices):
results[orig_idx] = FileDownloadResponse(
path=paths[orig_idx], # Original path
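위 diff에서 번역한 `_get_backend_and_key`의 라우팅 규칙(최장 prefix 우선 매칭, prefix 제거 후 선행 `/` 유지)은 다음처럼 독립 함수로 스케치할 수 있습니다. 실제 CompositeBackend 클래스가 아니라 해당 동작만 재현한 가정적 예시입니다.

```python
def resolve(routes: dict[str, str], default: str, key: str) -> tuple[str, str]:
    """가장 긴 prefix부터 매칭하고, prefix를 제거하되 선행 '/'는 유지합니다."""
    for prefix, backend in sorted(routes.items(), key=lambda x: len(x[0]), reverse=True):
        if key.startswith(prefix):
            suffix = key[len(prefix):]
            return backend, f"/{suffix}" if suffix else "/"
    return default, key

routes = {"/memories/": "store", "/memories/cache/": "cache"}
print(resolve(routes, "state", "/memories/cache/a.txt"))  # ('cache', '/a.txt')
print(resolve(routes, "state", "/memories/note.md"))      # ('store', '/note.md')
print(resolve(routes, "state", "/memories/"))             # ('store', '/')
print(resolve(routes, "state", "/tmp.txt"))               # ('state', '/tmp.txt')
```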

View File

@@ -1,10 +1,10 @@
"""FilesystemBackend: Read and write files directly from the filesystem.
"""FilesystemBackend: 파일 시스템에서 직접 파일을 읽고 씁니다.
Security and search upgrades:
- Secure path resolution with root containment when in virtual_mode (sandboxed to cwd)
- Prevent symlink-following on file I/O using O_NOFOLLOW when available
- Ripgrep-powered grep with JSON parsing, plus Python fallback with regex
and optional glob include filtering, while preserving virtual path behavior
보안 및 검색 기능 개선:
- virtual_mode(cwd로 샌드박스됨)에서 루트 컨테인먼트(root containment)를 적용한 안전한 경로 해석
- 가능한 경우 O_NOFOLLOW를 사용해 파일 I/O에서 심볼릭 링크 추적(symlink-following) 방지
- JSON 파싱을 사용하는 ripgrep 기반 grep과, 정규식 및 선택적 glob 포함 필터를 지원하는
  Python 폴백(기존 가상 경로 동작은 유지)
"""
import json
@@ -33,11 +33,11 @@ from deepagents.backends.utils import (
class FilesystemBackend(BackendProtocol):
"""Backend that reads and writes files directly from the filesystem.
"""로컬 파일 시스템에서 직접 파일을 읽고/쓰는 백엔드입니다.
Files are accessed using their actual filesystem paths. Relative paths are
resolved relative to the current working directory. Content is read/written
as plain text, and metadata (timestamps) are derived from filesystem stats.
파일은 실제 파일 시스템 경로로 접근합니다. 상대 경로는 현재 작업 디렉토리(`cwd`) 기준으로
해석되며, 콘텐츠는 일반 텍스트로 읽고/씁니다. 메타데이터(타임스탬프 등)는 파일 시스템
stat 정보를 기반으로 계산합니다.
"""
def __init__(
@@ -46,7 +46,7 @@ class FilesystemBackend(BackendProtocol):
virtual_mode: bool = False,
max_file_size_mb: int = 10,
) -> None:
"""Initialize filesystem backend.
"""파일 시스템 백엔드를 초기화합니다.
Args:
root_dir: Optional root directory for file operations. If provided,
@@ -58,7 +58,7 @@ class FilesystemBackend(BackendProtocol):
self.max_file_size_bytes = max_file_size_mb * 1024 * 1024
def _resolve_path(self, key: str) -> Path:
"""Resolve a file path with security checks.
"""보안 검사를 포함해 파일 경로를 해석(resolve)합니다.
When virtual_mode=True, treat incoming paths as virtual absolute paths under
self.cwd, disallow traversal (.., ~) and ensure resolved path stays within root.
@@ -88,7 +88,7 @@ class FilesystemBackend(BackendProtocol):
return (self.cwd / path).resolve()
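`_resolve_path`의 virtual_mode 동작(traversal 거부 + 루트 컨테인먼트)을 요약한 가정적 스케치입니다. 실제 메서드의 모든 분기를 재현하지는 않으며, 핵심 검증 두 가지만 보여줍니다.

```python
import tempfile
from pathlib import Path

def resolve_virtual(root: Path, virtual: str) -> Path:
    # traversal 시도(.., ~)는 즉시 거부합니다.
    if ".." in virtual or virtual.startswith("~"):
        raise ValueError("path traversal not allowed")
    resolved = (root / virtual.lstrip("/")).resolve()
    # 해석된 경로가 루트 밖이면 거부합니다(루트 컨테인먼트).
    if not resolved.is_relative_to(root.resolve()):
        raise ValueError("path escapes root")
    return resolved

root = Path(tempfile.mkdtemp())
print(resolve_virtual(root, "/notes/a.txt").name)  # a.txt
try:
    resolve_virtual(root, "/../etc/passwd")
except ValueError as e:
    print(e)  # path traversal not allowed
```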
def ls_info(self, path: str) -> list[FileInfo]:
"""List files and directories in the specified directory (non-recursive).
"""지정한 디렉토리 바로 아래의 파일/폴더를 나열합니다(비재귀).
Args:
path: Absolute directory path to list files from.
@@ -103,12 +103,12 @@ class FilesystemBackend(BackendProtocol):
results: list[FileInfo] = []
# Convert cwd to string for comparison
# 비교를 위해 cwd를 문자열로 변환
cwd_str = str(self.cwd)
if not cwd_str.endswith("/"):
cwd_str += "/"
# List only direct children (non-recursive)
# 직계 자식만 나열(비재귀)
try:
for child_path in dir_path.iterdir():
try:
@@ -120,7 +120,7 @@ class FilesystemBackend(BackendProtocol):
abs_path = str(child_path)
if not self.virtual_mode:
# Non-virtual mode: use absolute paths
# non-virtual 모드: 절대 경로를 사용
if is_file:
try:
st = child_path.stat()
@@ -148,14 +148,14 @@ class FilesystemBackend(BackendProtocol):
except OSError:
results.append({"path": abs_path + "/", "is_dir": True})
else:
# Virtual mode: strip cwd prefix
# virtual 모드: cwd prefix를 제거하여 가상 경로로 변환
if abs_path.startswith(cwd_str):
relative_path = abs_path[len(cwd_str) :]
elif abs_path.startswith(str(self.cwd)):
# Handle case where cwd doesn't end with /
# cwd가 `/`로 끝나지 않는 케이스 보정
relative_path = abs_path[len(str(self.cwd)) :].lstrip("/")
else:
# Path is outside cwd, return as-is or skip
# cwd 밖의 경로: 그대로 반환하거나 스킵
relative_path = abs_path
virt_path = "/" + relative_path
@@ -189,7 +189,7 @@ class FilesystemBackend(BackendProtocol):
except (OSError, PermissionError):
pass
# Keep deterministic order by path
# 경로 기준으로 deterministic order 유지
results.sort(key=lambda x: x.get("path", ""))
return results
@@ -199,7 +199,7 @@ class FilesystemBackend(BackendProtocol):
offset: int = 0,
limit: int = 2000,
) -> str:
"""Read file content with line numbers.
"""파일을 읽어 라인 번호가 포함된 문자열로 반환합니다.
Args:
file_path: Absolute or relative file path.
@@ -215,7 +215,7 @@ class FilesystemBackend(BackendProtocol):
return f"Error: File '{file_path}' not found"
try:
# Open with O_NOFOLLOW where available to avoid symlink traversal
# 가능하면 O_NOFOLLOW로 열어 심볼릭 링크를 통한 우회를 방지
fd = os.open(resolved_path, os.O_RDONLY | getattr(os, "O_NOFOLLOW", 0))
with os.fdopen(fd, "r", encoding="utf-8") as f:
content = f.read()
@@ -241,8 +241,9 @@ class FilesystemBackend(BackendProtocol):
file_path: str,
content: str,
) -> WriteResult:
"""Create a new file with content.
Returns WriteResult. External storage sets files_update=None.
"""새 파일을 생성하고 내용을 씁니다.
`WriteResult`를 반환합니다. 외부 스토리지 백엔드는 `files_update=None`을 사용합니다.
"""
resolved_path = self._resolve_path(file_path)
@@ -250,10 +251,10 @@ class FilesystemBackend(BackendProtocol):
return WriteResult(error=f"Cannot write to {file_path} because it already exists. Read and then make an edit, or write to a new path.")
try:
# Create parent directories if needed
# 필요하면 상위 디렉토리를 생성
resolved_path.parent.mkdir(parents=True, exist_ok=True)
# Prefer O_NOFOLLOW to avoid writing through symlinks
# 가능하면 O_NOFOLLOW를 사용해 심볼릭 링크를 통한 쓰기 우회를 방지
flags = os.O_WRONLY | os.O_CREAT | os.O_TRUNC
if hasattr(os, "O_NOFOLLOW"):
flags |= os.O_NOFOLLOW
@@ -272,8 +273,9 @@ class FilesystemBackend(BackendProtocol):
new_string: str,
replace_all: bool = False,
) -> EditResult:
"""Edit a file by replacing string occurrences.
Returns EditResult. External storage sets files_update=None.
"""파일 내 문자열을 치환하여 편집합니다.
`EditResult`를 반환합니다. 외부 스토리지 백엔드는 `files_update=None`을 사용합니다.
"""
resolved_path = self._resolve_path(file_path)
@@ -281,7 +283,7 @@ class FilesystemBackend(BackendProtocol):
return EditResult(error=f"Error: File '{file_path}' not found")
try:
# Read securely
# 안전하게 읽기
fd = os.open(resolved_path, os.O_RDONLY | getattr(os, "O_NOFOLLOW", 0))
with os.fdopen(fd, "r", encoding="utf-8") as f:
content = f.read()
@@ -293,7 +295,7 @@ class FilesystemBackend(BackendProtocol):
new_content, occurrences = result
# Write securely
# 안전하게 쓰기
flags = os.O_WRONLY | os.O_TRUNC
if hasattr(os, "O_NOFOLLOW"):
flags |= os.O_NOFOLLOW
@@ -311,7 +313,7 @@ class FilesystemBackend(BackendProtocol):
path: str | None = None,
glob: str | None = None,
) -> list[GrepMatch] | str:
# Validate regex
# 정규식 검증
try:
re.compile(pattern)
except re.error as e:
@@ -480,7 +482,7 @@ class FilesystemBackend(BackendProtocol):
return results
def upload_files(self, files: list[tuple[str, bytes]]) -> list[FileUploadResponse]:
"""Upload multiple files to the filesystem.
"""여러 파일을 파일 시스템에 업로드합니다.
Args:
files: List of (path, content) tuples where content is bytes.
@@ -494,7 +496,7 @@ class FilesystemBackend(BackendProtocol):
try:
resolved_path = self._resolve_path(path)
# Create parent directories if needed
# 필요하면 상위 디렉토리를 생성
resolved_path.parent.mkdir(parents=True, exist_ok=True)
flags = os.O_WRONLY | os.O_CREAT | os.O_TRUNC
@@ -510,17 +512,18 @@ class FilesystemBackend(BackendProtocol):
except PermissionError:
responses.append(FileUploadResponse(path=path, error="permission_denied"))
except (ValueError, OSError) as e:
# ValueError from _resolve_path for path traversal, OSError for other file errors
# _resolve_path가 경로 탐색(path traversal)을 감지하면 ValueError,
# 그 외 파일 오류에서는 OSError가 발생할 수 있습니다.
if isinstance(e, ValueError) or "invalid" in str(e).lower():
responses.append(FileUploadResponse(path=path, error="invalid_path"))
else:
# Generic error fallback
# 일반적인 fallback
responses.append(FileUploadResponse(path=path, error="invalid_path"))
return responses
def download_files(self, paths: list[str]) -> list[FileDownloadResponse]:
"""Download multiple files from the filesystem.
"""여러 파일을 파일 시스템에서 다운로드합니다.
Args:
paths: List of file paths to download.
@@ -532,8 +535,7 @@ class FilesystemBackend(BackendProtocol):
for path in paths:
try:
resolved_path = self._resolve_path(path)
# Use flags to optionally prevent symlink following if
# supported by the OS
# OS가 지원하면, 플래그로 심볼릭 링크 추적을 방지합니다.
fd = os.open(resolved_path, os.O_RDONLY | getattr(os, "O_NOFOLLOW", 0))
with os.fdopen(fd, "rb") as f:
content = f.read()
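위 `download_files`가 사용하는 O_NOFOLLOW 보호 읽기 패턴만 떼어낸 최소 스케치입니다. O_NOFOLLOW가 없는 플랫폼에서는 `getattr(os, "O_NOFOLLOW", 0)`이 0이 되어 일반 읽기와 동일하게 동작한다는 가정입니다.

```python
import os
import tempfile

def read_no_follow(path: str) -> str:
    # 가능하면 O_NOFOLLOW를 더해 심볼릭 링크 추적을 차단합니다(미지원 플랫폼에서는 0).
    fd = os.open(path, os.O_RDONLY | getattr(os, "O_NOFOLLOW", 0))
    with os.fdopen(fd, "r", encoding="utf-8") as f:
        return f.read()

with tempfile.TemporaryDirectory() as d:
    p = os.path.join(d, "a.txt")
    with open(p, "w", encoding="utf-8") as f:
        f.write("ok")
    print(read_no_follow(p))  # ok
```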

View File

@@ -1,8 +1,8 @@
"""Protocol definition for pluggable memory backends.
"""플러그인 가능한 메모리 백엔드를 위한 프로토콜 정의입니다.
This module defines the BackendProtocol that all backend implementations
must follow. Backends can store files in different locations (state, filesystem,
database, etc.) and provide a uniform interface for file operations.
이 모듈은 모든 백엔드 구현이 따라야 하는 BackendProtocol을 정의합니다.
백엔드는 다양한 위치(상태, 파일 시스템, 데이터베이스 등)에 파일을 저장할 수 있으며
파일 작업을 위한 균일한 인터페이스를 제공합니다.
"""
import abc
@@ -15,42 +15,41 @@ from langchain.tools import ToolRuntime
from typing_extensions import TypedDict
FileOperationError = Literal[
"file_not_found", # Download: file doesn't exist
"permission_denied", # Both: access denied
"is_directory", # Download: tried to download directory as file
"invalid_path", # Both: path syntax malformed (parent dir missing, invalid chars)
"file_not_found", # Download: 파일이 존재하지 않음
"permission_denied", # Both: 접근 거부됨
"is_directory", # Download: 디렉토리를 파일로 다운로드하려고 시도
"invalid_path", # Both: 경로 문법이 잘못됨(상위 디렉토리 누락, 잘못된 문자)
]
"""Standardized error codes for file upload/download operations.
"""파일 업로드/다운로드 작업을 위한 표준화된 오류 코드입니다.
These represent common, recoverable errors that an LLM can understand and potentially fix:
- file_not_found: The requested file doesn't exist (download)
- parent_not_found: The parent directory doesn't exist (upload)
- permission_denied: Access denied for the operation
- is_directory: Attempted to download a directory as a file
- invalid_path: Path syntax is malformed or contains invalid characters
이는 LLM이 이해하고 잠재적으로 수정할 수 있는 일반적이고 복구 가능한 오류를 나타냅니다:
- file_not_found: 요청한 파일이 존재하지 않음(다운로드)
- parent_not_found: 상위 디렉토리가 존재하지 않음(업로드)
- permission_denied: 작업에 대한 접근 거부됨
- is_directory: 디렉토리를 파일로 다운로드하려고 시도
- invalid_path: 경로 문법이 잘못되었거나 잘못된 문자가 포함됨
"""
@dataclass
class FileDownloadResponse:
"""Result of a single file download operation.
"""단일 파일 다운로드 작업의 결과입니다.
The response is designed to allow partial success in batch operations.
The errors are standardized using FileOperationError literals
for certain recoverable conditions for use cases that involve
LLMs performing file operations.
이 응답은 배치 작업에서 부분적 성공을 허용하도록 설계되었습니다.
오류는 LLM이 파일 작업을 수행하는 사용 사례에서 복구 가능한
특정 조건에 대해 FileOperationError 리터럴을 사용하여 표준화됩니다.
Attributes:
path: The file path that was requested. Included for easy correlation
when processing batch results, especially useful for error messages.
content: File contents as bytes on success, None on failure.
error: Standardized error code on failure, None on success.
Uses FileOperationError literal for structured, LLM-actionable error reporting.
path: 요청된 파일 경로입니다. 배치 결과를 처리할 때 요청과 응답을 쉽게
대응시키기 위해 포함되며, 특히 오류 메시지에 유용합니다.
content: 성공 시 파일 내용(바이트), 실패 시 None.
error: 실패 시 표준화된 오류 코드, 성공 시 None.
구조화되고 LLM이 조치 가능한 오류 보고를 위해 FileOperationError 리터럴을 사용합니다.
Examples:
>>> # Success
>>> # 성공
>>> FileDownloadResponse(path="/app/config.json", content=b"{...}", error=None)
>>> # Failure
>>> # 실패
>>> FileDownloadResponse(path="/wrong/path.txt", content=None, error="file_not_found")
"""
@@ -61,23 +60,22 @@ class FileDownloadResponse:
@dataclass
class FileUploadResponse:
"""Result of a single file upload operation.
"""단일 파일 업로드 작업의 결과입니다.
The response is designed to allow partial success in batch operations.
The errors are standardized using FileOperationError literals
for certain recoverable conditions for use cases that involve
LLMs performing file operations.
이 응답은 배치 작업에서 부분적 성공을 허용하도록 설계되었습니다.
오류는 LLM이 파일 작업을 수행하는 사용 사례에서 복구 가능한
특정 조건에 대해 FileOperationError 리터럴을 사용하여 표준화됩니다.
Attributes:
path: The file path that was requested. Included for easy correlation
when processing batch results and for clear error messages.
error: Standardized error code on failure, None on success.
Uses FileOperationError literal for structured, LLM-actionable error reporting.
path: 요청된 파일 경로입니다. 배치 결과를 처리할 때 요청과 응답을
쉽게 대응시킬 수 있고, 명확한 오류 메시지 작성에도 도움이 됩니다.
error: 실패 시 표준화된 오류 코드, 성공 시 None.
구조화되고 LLM이 조치 가능한 오류 보고를 위해 FileOperationError 리터럴을 사용합니다.
Examples:
>>> # Success
>>> # 성공
>>> FileUploadResponse(path="/app/data.txt", error=None)
>>> # Failure
>>> # 실패
>>> FileUploadResponse(path="/readonly/file.txt", error="permission_denied")
"""
@@ -86,10 +84,10 @@ class FileUploadResponse:
class FileInfo(TypedDict):
"""Structured file listing info.
"""파일 목록 조회 시 사용하는 구조화된 항목 정보입니다.
Minimal contract used across backends. Only "path" is required.
Other fields are best-effort and may be absent depending on backend.
백엔드 간 최소 계약(minimal contract)이며, `"path"`만 필수입니다.
나머지 필드는 best-effort로 제공되며, 백엔드에 따라 누락될 수 있습니다.
"""
path: str
@@ -99,7 +97,7 @@ class FileInfo(TypedDict):
class GrepMatch(TypedDict):
"""Structured grep match entry."""
"""grep 매칭 결과(구조화) 엔트리입니다."""
path: str
line: int
@@ -108,14 +106,15 @@ class GrepMatch(TypedDict):
@dataclass
class WriteResult:
"""Result from backend write operations.
"""백엔드 write 작업의 결과입니다.
Attributes:
error: Error message on failure, None on success.
path: Absolute path of written file, None on failure.
files_update: State update dict for checkpoint backends, None for external storage.
Checkpoint backends populate this with {file_path: file_data} for LangGraph state.
External backends set None (already persisted to disk/S3/database/etc).
error: 실패 시 오류 메시지, 성공 시 `None`.
path: 성공 시 작성된 파일의 절대 경로, 실패 시 `None`.
files_update: checkpoint 기반 백엔드에서는 state 업데이트 딕셔너리,
외부 스토리지 기반 백엔드에서는 `None`.
checkpoint 백엔드는 LangGraph state 업데이트를 위해 `{file_path: file_data}`를 채웁니다.
외부 백엔드는 `None`(이미 디스크/S3/DB 등에 영구 반영됨)을 사용합니다.
Examples:
>>> # Checkpoint storage
@@ -133,15 +132,16 @@ class WriteResult:
@dataclass
class EditResult:
"""Result from backend edit operations.
"""백엔드 edit 작업의 결과입니다.
Attributes:
error: Error message on failure, None on success.
path: Absolute path of edited file, None on failure.
files_update: State update dict for checkpoint backends, None for external storage.
Checkpoint backends populate this with {file_path: file_data} for LangGraph state.
External backends set None (already persisted to disk/S3/database/etc).
occurrences: Number of replacements made, None on failure.
error: 실패 시 오류 메시지, 성공 시 `None`.
path: 성공 시 수정된 파일의 절대 경로, 실패 시 `None`.
files_update: checkpoint 기반 백엔드에서는 state 업데이트 딕셔너리,
외부 스토리지 기반 백엔드에서는 `None`.
checkpoint 백엔드는 LangGraph state 업데이트를 위해 `{file_path: file_data}`를 채웁니다.
외부 백엔드는 `None`(이미 디스크/S3/DB 등에 영구 반영됨)을 사용합니다.
occurrences: 치환 횟수. 실패 시 `None`.
Examples:
>>> # Checkpoint storage
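checkpoint 백엔드와 외부 스토리지 백엔드가 `files_update`로 갈라지는 지점을 호출 측 관점에서 스케치하면 다음과 같습니다. `WriteResult` 축소판과 `apply_write_result`는 설명을 위해 가정한 코드이며 실제 deepagents API가 아닙니다.

```python
from __future__ import annotations

from dataclasses import dataclass


# 원본 WriteResult를 본뜬 최소 스케치(실제 구현 아님)
@dataclass
class WriteResult:
    error: str | None = None
    path: str | None = None
    files_update: dict | None = None


def apply_write_result(result: WriteResult, state_files: dict) -> dict:
    """checkpoint 백엔드(files_update 있음)와 외부 스토리지(None)를 분기 처리합니다."""
    if result.error is not None:
        raise RuntimeError(result.error)
    if result.files_update is not None:
        # checkpoint 백엔드: LangGraph state에 병합할 업데이트를 반환받음
        state_files.update(result.files_update)
    # 외부 스토리지: 이미 디스크/S3 등에 반영되었으므로 state 업데이트가 필요 없음
    return state_files


state: dict = {}
checkpoint_result = WriteResult(path="/notes.txt", files_update={"/notes.txt": {"content": ["hi"]}})
external_result = WriteResult(path="/notes.txt", files_update=None)
apply_write_result(checkpoint_result, state)
apply_write_result(external_result, state)
```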
@@ -159,36 +159,38 @@ class EditResult:
class BackendProtocol(abc.ABC):
"""Protocol for pluggable memory backends (single, unified).
"""플러그인 가능한 메모리/파일 백엔드용 단일(unified) 프로토콜입니다.
Backends can store files in different locations (state, filesystem, database, etc.)
and provide a uniform interface for file operations.
백엔드는 상태(state), 로컬 파일 시스템, 데이터베이스 등 다양한 위치에 파일을 저장할 수 있으며,
파일 작업에 대해 일관된(uniform) 인터페이스를 제공합니다.
All file data is represented as dicts with the following structure:
모든 file data는 아래 구조의 딕셔너리로 표현합니다.
```python
{
"content": list[str], # Lines of text content
"created_at": str, # ISO format timestamp
"modified_at": str, # ISO format timestamp
"content": list[str], # 텍스트 라인 목록
"created_at": str, # ISO 형식 타임스탬프
"modified_at": str, # ISO 형식 타임스탬프
}
```
"""
def ls_info(self, path: str) -> list["FileInfo"]:
"""List all files in a directory with metadata.
"""디렉토리 내 파일/폴더를 메타데이터와 함께 나열합니다.
Args:
path: Absolute path to the directory to list. Must start with '/'.
path: 나열할 디렉토리의 절대 경로. 반드시 `/`로 시작해야 합니다.
Returns:
List of FileInfo dicts containing file metadata:
- `path` (required): Absolute file path
- `is_dir` (optional): True if directory
- `size` (optional): File size in bytes
- `modified_at` (optional): ISO 8601 timestamp
FileInfo 딕셔너리 리스트:
- `path` (필수): 절대 경로
- `is_dir` (선택): 디렉토리면 `True`
- `size` (선택): 바이트 단위 크기
- `modified_at` (선택): ISO 8601 타임스탬프
"""
async def als_info(self, path: str) -> list["FileInfo"]:
"""Async version of ls_info."""
"""`ls_info`의 async 버전입니다."""
return await asyncio.to_thread(self.ls_info, path)
def read(
@@ -197,25 +199,25 @@ class BackendProtocol(abc.ABC):
offset: int = 0,
limit: int = 2000,
) -> str:
"""Read file content with line numbers.
"""파일을 읽어 라인 번호가 포함된 문자열로 반환합니다.
Args:
file_path: Absolute path to the file to read. Must start with '/'.
offset: Line number to start reading from (0-indexed). Default: 0.
limit: Maximum number of lines to read. Default: 2000.
file_path: 읽을 파일의 절대 경로. 반드시 `/`로 시작해야 합니다.
offset: 읽기 시작 라인(0-index). 기본값: 0.
limit: 최대 읽기 라인 수. 기본값: 2000.
Returns:
String containing file content formatted with line numbers (cat -n format),
starting at line 1. Lines longer than 2000 characters are truncated.
라인 번호(`cat -n`) 형식으로 포맷된 파일 내용 문자열(라인 번호는 1부터 시작).
2000자를 초과하는 라인은 잘립니다.
Returns an error string if the file doesn't exist or can't be read.
파일이 없거나 읽을 수 없으면 오류 문자열을 반환합니다.
!!! note
- Use pagination (offset/limit) for large files to avoid context overflow
- First scan: `read(path, limit=100)` to see file structure
- Read more: `read(path, offset=100, limit=200)` for next section
- ALWAYS read a file before editing it
- If file exists but is empty, you'll receive a system reminder warning
- 큰 파일은 pagination(offset/limit)을 사용해 컨텍스트 오버플로우를 방지하세요.
- 첫 스캔: `read(path, limit=100)`으로 구조 파악
- 추가 읽기: `read(path, offset=100, limit=200)`으로 다음 구간
- 편집 전에는 반드시 파일을 먼저 읽어야 합니다.
- 파일이 비어 있으면 system reminder 경고가 반환될 수 있습니다.
"""
async def aread(
@@ -224,7 +226,7 @@ class BackendProtocol(abc.ABC):
offset: int = 0,
limit: int = 2000,
) -> str:
"""Async version of read."""
"""`read`의 async 버전입니다."""
return await asyncio.to_thread(self.read, file_path, offset, limit)
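`read` 독스트링이 권장하는 offset/limit pagination 패턴을 가정 기반으로 스케치하면 다음과 같습니다. `read_lines`는 실제 `read` 메서드가 아니라 동작을 흉내 낸 대역(stand-in)입니다.

```python
def read_lines(lines: list[str], offset: int = 0, limit: int = 2000) -> list[str]:
    """read(file_path, offset, limit)의 페이지 조회 동작을 흉내 낸 스케치."""
    return lines[offset : offset + limit]


def read_all_paginated(lines: list[str], page: int = 100) -> list[str]:
    """한 번에 전부 읽지 않고 page 단위로 나누어 읽어 컨텍스트 오버플로우를 피하는 패턴."""
    out: list[str] = []
    offset = 0
    while True:
        chunk = read_lines(lines, offset=offset, limit=page)
        if not chunk:  # 더 읽을 라인이 없으면 종료
            break
        out.extend(chunk)
        offset += page
    return out


doc = [f"line {i}" for i in range(250)]
```

첫 스캔은 `read_lines(doc, limit=100)`, 이어서 `read_lines(doc, offset=100, limit=200)`처럼 필요한 구간만 추가로 가져오는 식입니다.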
def grep_raw(
@@ -233,7 +235,7 @@ class BackendProtocol(abc.ABC):
path: str | None = None,
glob: str | None = None,
) -> list["GrepMatch"] | str:
"""Search for a literal text pattern in files.
"""파일에서 리터럴(비정규식) 텍스트 패턴을 검색합니다.
Args:
pattern: Literal string to search for (NOT regex).
@@ -259,12 +261,12 @@ class BackendProtocol(abc.ABC):
- "test[0-9].txt" - search test0.txt, test1.txt, etc.
Returns:
On success: list[GrepMatch] with structured results containing:
- path: Absolute file path
- line: Line number (1-indexed)
- text: Full line content containing the match
성공 시: 아래 필드를 가진 구조화 결과 `list[GrepMatch]`
- path: 절대 파일 경로
- line: 라인 번호(1-index)
- text: 매칭된 라인의 전체 텍스트
On error: str with error message (e.g., invalid path, permission denied)
실패 시: 오류 메시지 문자열(예: invalid path, permission denied)
"""
async def agrep_raw(
@@ -273,11 +275,11 @@ class BackendProtocol(abc.ABC):
path: str | None = None,
glob: str | None = None,
) -> list["GrepMatch"] | str:
"""Async version of grep_raw."""
"""`grep_raw`의 async 버전입니다."""
return await asyncio.to_thread(self.grep_raw, pattern, path, glob)
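`grep_raw`의 반환 타입이 `list[GrepMatch] | str` union이므로, 호출 측은 문자열(오류)과 리스트(성공)를 먼저 분기해야 합니다. 아래 `summarize_grep`은 설명을 위해 가정한 헬퍼입니다.

```python
def summarize_grep(result):
    """grep_raw 결과(list 또는 오류 문자열)를 분기 처리하는 스케치."""
    if isinstance(result, str):
        # 오류 문자열(예: invalid path, permission denied)
        return f"grep failed: {result}"
    # 성공: 구조화된 매칭 결과를 "path:line" 형태로 요약
    return [f"{m['path']}:{m['line']}" for m in result]


matches = [{"path": "/a.py", "line": 3, "text": "import os"}]
```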
def glob_info(self, pattern: str, path: str = "/") -> list["FileInfo"]:
"""Find files matching a glob pattern.
"""Glob 패턴에 매칭되는 파일을 찾습니다.
Args:
pattern: Glob pattern with wildcards to match file paths.
@@ -291,11 +293,11 @@ class BackendProtocol(abc.ABC):
The pattern is applied relative to this path.
Returns:
list of FileInfo
FileInfo 리스트
"""
async def aglob_info(self, pattern: str, path: str = "/") -> list["FileInfo"]:
"""Async version of glob_info."""
"""`glob_info`의 async 버전입니다."""
return await asyncio.to_thread(self.glob_info, pattern, path)
def write(
@@ -303,7 +305,7 @@ class BackendProtocol(abc.ABC):
file_path: str,
content: str,
) -> WriteResult:
"""Write content to a new file in the filesystem, error if file exists.
"""새 파일을 생성하고 내용을 씁니다(동일 경로 파일이 이미 있으면 오류).
Args:
file_path: Absolute path where the file should be created.
@@ -319,7 +321,7 @@ class BackendProtocol(abc.ABC):
file_path: str,
content: str,
) -> WriteResult:
"""Async version of write."""
"""`write`의 async 버전입니다."""
return await asyncio.to_thread(self.write, file_path, content)
def edit(
@@ -329,7 +331,7 @@ class BackendProtocol(abc.ABC):
new_string: str,
replace_all: bool = False,
) -> EditResult:
"""Perform exact string replacements in an existing file.
"""기존 파일에서 정확한 문자열 매칭 기반 치환을 수행합니다.
Args:
file_path: Absolute path to the file to edit. Must start with '/'.
@@ -351,11 +353,11 @@ class BackendProtocol(abc.ABC):
new_string: str,
replace_all: bool = False,
) -> EditResult:
"""Async version of edit."""
"""`edit`의 async 버전입니다."""
return await asyncio.to_thread(self.edit, file_path, old_string, new_string, replace_all)
def upload_files(self, files: list[tuple[str, bytes]]) -> list[FileUploadResponse]:
"""Upload multiple files to the sandbox.
"""여러 파일을 샌드박스로 업로드합니다.
This API is designed to allow developers to use it either directly or
by exposing it to LLMs via custom tools.
@@ -380,11 +382,11 @@ class BackendProtocol(abc.ABC):
"""
async def aupload_files(self, files: list[tuple[str, bytes]]) -> list[FileUploadResponse]:
"""Async version of upload_files."""
"""`upload_files`의 async 버전입니다."""
return await asyncio.to_thread(self.upload_files, files)
def download_files(self, paths: list[str]) -> list[FileDownloadResponse]:
"""Download multiple files from the sandbox.
"""여러 파일을 샌드박스에서 다운로드합니다.
This API is designed to allow developers to use it either directly or
by exposing it to LLMs via custom tools.
@@ -399,41 +401,41 @@ class BackendProtocol(abc.ABC):
"""
async def adownload_files(self, paths: list[str]) -> list[FileDownloadResponse]:
"""Async version of download_files."""
"""`download_files`의 async 버전입니다."""
return await asyncio.to_thread(self.download_files, paths)
@dataclass
class ExecuteResponse:
"""Result of code execution.
"""코드 실행 결과입니다.
Simplified schema optimized for LLM consumption.
LLM이 소비하기 좋도록 단순화한 스키마입니다.
"""
output: str
"""Combined stdout and stderr output of the executed command."""
"""실행된 커맨드의 stdout+stderr 합쳐진 출력."""
exit_code: int | None = None
"""The process exit code. 0 indicates success, non-zero indicates failure."""
"""프로세스 종료 코드. 0은 성공, 0이 아니면 실패."""
truncated: bool = False
"""Whether the output was truncated due to backend limitations."""
"""백엔드 제한으로 출력이 잘렸는지 여부."""
class SandboxBackendProtocol(BackendProtocol):
"""Protocol for sandboxed backends with isolated runtime.
"""격리된 런타임을 제공하는 샌드박스 백엔드용 프로토콜입니다.
Sandboxed backends run in isolated environments (e.g., separate processes,
containers) and communicate via defined interfaces.
샌드박스 백엔드는 별도 프로세스/컨테이너 같은 격리된 환경에서 실행되며,
정해진 인터페이스를 통해 통신합니다.
"""
def execute(
self,
command: str,
) -> ExecuteResponse:
"""Execute a command in the process.
"""샌드박스 프로세스에서 커맨드를 실행합니다.
Simplified interface optimized for LLM consumption.
LLM 친화적으로 단순화된 인터페이스입니다.
Args:
command: Full shell command string to execute.
@@ -446,12 +448,12 @@ class SandboxBackendProtocol(BackendProtocol):
self,
command: str,
) -> ExecuteResponse:
"""Async version of execute."""
"""`execute`의 async 버전입니다."""
return await asyncio.to_thread(self.execute, command)
@property
def id(self) -> str:
"""Unique identifier for the sandbox backend instance."""
"""샌드박스 백엔드 인스턴스의 고유 식별자."""
BackendFactory: TypeAlias = Callable[[ToolRuntime], BackendProtocol]
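`ExecuteResponse`의 `exit_code`/`truncated`를 LLM 친화적인 문자열로 요약하는 방식을 스케치하면 다음과 같습니다. 축소판 dataclass와 `render_for_llm`은 설명을 위해 가정한 코드이며 실제 deepagents 구현이 아닙니다.

```python
from __future__ import annotations

from dataclasses import dataclass


# 원본 ExecuteResponse를 본뜬 최소 스케치(실제 구현 아님)
@dataclass
class ExecuteResponse:
    output: str
    exit_code: int | None = None
    truncated: bool = False


def render_for_llm(resp: ExecuteResponse) -> str:
    """실행 결과를 LLM에 넘기기 좋은 한 덩어리 문자열로 요약합니다."""
    status = "ok" if resp.exit_code == 0 else f"failed (exit={resp.exit_code})"
    note = " [output truncated]" if resp.truncated else ""
    return f"{status}{note}\n{resp.output}"
```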

View File

@@ -1,8 +1,8 @@
"""Base sandbox implementation with execute() as the only abstract method.
"""`execute()`만 구현하면 되는 샌드박스 기본 구현체입니다.
This module provides a base class that implements all SandboxBackendProtocol
methods using shell commands executed via execute(). Concrete implementations
only need to implement the execute() method.
이 모듈은 `execute()`로 실행되는 셸 커맨드를 이용해 `SandboxBackendProtocol`의 나머지 메서드를
기본 구현으로 제공하는 베이스 클래스를 포함합니다. 구체 구현체(concrete implementation)는
`execute()`만 구현하면 됩니다.
"""
from __future__ import annotations
@@ -139,10 +139,10 @@ for i, line in enumerate(selected_lines):
class BaseSandbox(SandboxBackendProtocol, ABC):
"""Base sandbox implementation with execute() as abstract method.
"""`execute()`를 추상 메서드로 두는 샌드박스 기본 구현체입니다.
This class provides default implementations for all protocol methods
using shell commands. Subclasses only need to implement execute().
셸 커맨드 기반으로 프로토콜 메서드들의 기본 구현을 제공하며,
서브클래스는 `execute()`만 구현하면 됩니다.
"""
@abstractmethod
@@ -150,7 +150,7 @@ class BaseSandbox(SandboxBackendProtocol, ABC):
self,
command: str,
) -> ExecuteResponse:
"""Execute a command in the sandbox and return ExecuteResponse.
"""샌드박스에서 커맨드를 실행하고 `ExecuteResponse`를 반환합니다.
Args:
command: Full shell command string to execute.
@@ -161,7 +161,7 @@ class BaseSandbox(SandboxBackendProtocol, ABC):
...
def ls_info(self, path: str) -> list[FileInfo]:
"""Structured listing with file metadata using os.scandir."""
"""`os.scandir`를 사용해 파일 메타데이터를 포함한 구조화 목록을 반환합니다."""
cmd = f"""python3 -c "
import os
import json
@@ -202,8 +202,8 @@ except PermissionError:
offset: int = 0,
limit: int = 2000,
) -> str:
"""Read file content with line numbers using a single shell command."""
# Use template for reading file with offset and limit
"""단일 셸 커맨드로 파일을 읽고 라인 번호가 포함된 문자열로 반환합니다."""
# offset/limit을 적용한 읽기 템플릿을 사용
cmd = _READ_COMMAND_TEMPLATE.format(file_path=file_path, offset=offset, limit=limit)
result = self.execute(cmd)
@@ -220,20 +220,23 @@ except PermissionError:
file_path: str,
content: str,
) -> WriteResult:
"""Create a new file. Returns WriteResult; error populated on failure."""
# Encode content as base64 to avoid any escaping issues
"""새 파일을 생성하고 내용을 씁니다.
실패 시 `WriteResult.error`가 채워진 형태로 반환합니다.
"""
# escaping 이슈를 피하기 위해 content를 base64로 인코딩합니다.
content_b64 = base64.b64encode(content.encode("utf-8")).decode("ascii")
# Single atomic check + write command
# 단일 커맨드에서 존재 여부 확인 + write를 원자적으로 수행
cmd = _WRITE_COMMAND_TEMPLATE.format(file_path=file_path, content_b64=content_b64)
result = self.execute(cmd)
# Check for errors (exit code or error message in output)
# 오류 확인(비정상 exit code 또는 output 내 Error 메시지)
if result.exit_code != 0 or "Error:" in result.output:
error_msg = result.output.strip() or f"Failed to write file '{file_path}'"
return WriteResult(error=error_msg)
# External storage - no files_update needed
# 외부 스토리지이므로 files_update는 필요 없음
return WriteResult(path=file_path, files_update=None)
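위 `write`/`edit` 구현이 내용을 base64로 인코딩하는 이유는, 따옴표·개행·`$` 같은 문자가 섞인 임의의 content를 셸 커맨드에 안전하게 실을 수 있기 때문입니다. `encode_for_shell`/`decode_in_sandbox`는 설명을 위해 가정한 이름이며, 실제 `_WRITE_COMMAND_TEMPLATE`과 동일하지 않습니다.

```python
import base64


def encode_for_shell(content: str) -> str:
    """임의 문자열을 base64 ASCII로 감싸 셸 escaping 이슈 없이 전달합니다."""
    return base64.b64encode(content.encode("utf-8")).decode("ascii")


def decode_in_sandbox(content_b64: str) -> str:
    """샌드박스 쪽 python3 -c 스크립트가 수행하는 복원 단계에 해당합니다."""
    return base64.b64decode(content_b64).decode("utf-8")


tricky = 'echo "$HOME" && rm -rf /\n\'quotes\' and unicode: 한글'
# base64 알파벳(A-Za-z0-9+/=)에는 따옴표가 없어 단일 인용부호 안에 안전하게 들어갑니다.
cmd = (
    "python3 -c 'import base64;"
    f"print(base64.b64decode(\"{encode_for_shell(tricky)}\").decode())'"
)
```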
def edit(
@@ -243,12 +246,12 @@ except PermissionError:
new_string: str,
replace_all: bool = False,
) -> EditResult:
"""Edit a file by replacing string occurrences. Returns EditResult."""
# Encode strings as base64 to avoid any escaping issues
"""파일 내 문자열을 치환하여 편집합니다."""
# escaping 이슈를 피하기 위해 문자열을 base64로 인코딩합니다.
old_b64 = base64.b64encode(old_string.encode("utf-8")).decode("ascii")
new_b64 = base64.b64encode(new_string.encode("utf-8")).decode("ascii")
# Use template for string replacement
# 문자열 치환 템플릿을 사용
cmd = _EDIT_COMMAND_TEMPLATE.format(file_path=file_path, old_b64=old_b64, new_b64=new_b64, replace_all=replace_all)
result = self.execute(cmd)
@@ -263,7 +266,7 @@ except PermissionError:
return EditResult(error=f"Error: File '{file_path}' not found")
count = int(output)
# External storage - no files_update needed
# 외부 스토리지이므로 files_update는 필요 없음
return EditResult(path=file_path, files_update=None, occurrences=count)
def grep_raw(
@@ -272,18 +275,18 @@ except PermissionError:
path: str | None = None,
glob: str | None = None,
) -> list[GrepMatch] | str:
"""Structured search results or error string for invalid input."""
"""구조화된 검색 결과 또는(입력이 잘못된 경우) 오류 문자열을 반환합니다."""
search_path = shlex.quote(path or ".")
# Build grep command to get structured output
# 구조화된 출력 파싱을 위한 grep 커맨드 구성
grep_opts = "-rHnF" # recursive, with filename, with line number, fixed-strings (literal)
# Add glob pattern if specified
# glob 패턴이 있으면 include 조건으로 추가
glob_pattern = ""
if glob:
glob_pattern = f"--include='{glob}'"
# Escape pattern for shell
# 셸에서 안전하게 쓰도록 pattern을 escape
pattern_escaped = shlex.quote(pattern)
cmd = f"grep {grep_opts} {glob_pattern} -e {pattern_escaped} {search_path} 2>/dev/null || true"
@@ -293,10 +296,10 @@ except PermissionError:
if not output:
return []
# Parse grep output into GrepMatch objects
# grep 출력 문자열을 GrepMatch 객체로 파싱
matches: list[GrepMatch] = []
for line in output.split("\n"):
# Format is: path:line_number:text
# 포맷: path:line_number:text
parts = line.split(":", 2)
if len(parts) >= 3:
matches.append(
@@ -310,8 +313,8 @@ except PermissionError:
return matches
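위 파싱에서 `split(":", 2)`의 maxsplit이 핵심입니다. 매칭된 텍스트 자체에 `:`가 포함돼도(URL 등) 세 번째 필드가 잘리지 않습니다. `parse_grep_line`은 설명을 위해 가정한 헬퍼입니다.

```python
def parse_grep_line(line: str):
    """grep -Hn 출력 한 줄(path:line_number:text)을 파싱하는 스케치."""
    # maxsplit=2: 텍스트에 ':'가 있어도 앞의 두 필드만 분리합니다.
    parts = line.split(":", 2)
    if len(parts) < 3:
        return None
    path, line_no, text = parts
    return {"path": path, "line": int(line_no), "text": text}


m = parse_grep_line('/app/main.py:12:url = "http://example.com"')
```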
def glob_info(self, pattern: str, path: str = "/") -> list[FileInfo]:
"""Structured glob matching returning FileInfo dicts."""
# Encode pattern and path as base64 to avoid escaping issues
"""Glob 매칭 결과를 구조화된 FileInfo 딕셔너리로 반환합니다."""
# escaping 이슈를 피하기 위해 pattern/path를 base64로 인코딩합니다.
pattern_b64 = base64.b64encode(pattern.encode("utf-8")).decode("ascii")
path_b64 = base64.b64encode(path.encode("utf-8")).decode("ascii")
@@ -322,7 +325,7 @@ except PermissionError:
if not output:
return []
# Parse JSON output into FileInfo dicts
# JSON 출력을 FileInfo 딕셔너리로 파싱
file_infos: list[FileInfo] = []
for line in output.split("\n"):
try:
@@ -341,11 +344,11 @@ except PermissionError:
@property
@abstractmethod
def id(self) -> str:
"""Unique identifier for the sandbox backend."""
"""샌드박스 백엔드 인스턴스의 고유 식별자."""
@abstractmethod
def upload_files(self, files: list[tuple[str, bytes]]) -> list[FileUploadResponse]:
"""Upload multiple files to the sandbox.
"""여러 파일을 샌드박스로 업로드합니다.
Implementations must support partial success - catch exceptions per-file
and return errors in FileUploadResponse objects rather than raising.
@@ -353,7 +356,7 @@ except PermissionError:
@abstractmethod
def download_files(self, paths: list[str]) -> list[FileDownloadResponse]:
"""Download multiple files from the sandbox.
"""여러 파일을 샌드박스에서 다운로드합니다.
Implementations must support partial success - catch exceptions per-file
and return errors in FileDownloadResponse objects rather than raising.

View File

@@ -1,4 +1,4 @@
"""StateBackend: Store files in LangGraph agent state (ephemeral)."""
"""StateBackend: LangGraph 에이전트 상태에 파일을 저장합니다(일시적)."""
from typing import TYPE_CHECKING
@@ -26,54 +26,54 @@ if TYPE_CHECKING:
class StateBackend(BackendProtocol):
"""Backend that stores files in agent state (ephemeral).
"""에이전트 상태(state)에 파일을 저장하는 백엔드입니다(일시적).
Uses LangGraph's state management and checkpointing. Files persist within
a conversation thread but not across threads. State is automatically
checkpointed after each agent step.
LangGraph의 state/checkpoint 메커니즘을 사용합니다. 파일은 동일한 스레드(대화)
내에서는 유지되지만, 스레드를 넘어 영구 저장되지는 않습니다. 또한 state는
각 에이전트 step 이후 자동으로 checkpoint 됩니다.
Special handling: Since LangGraph state must be updated via Command objects
(not direct mutation), operations return Command objects instead of None.
This is indicated by the uses_state=True flag.
주의: LangGraph state는 직접 mutation이 아니라 `Command` 객체를 통해 업데이트해야 합니다.
따라서 일부 작업은 `None` 대신 `Command`에 적용될 업데이트 정보를 담아 반환합니다.
(코드에서는 `uses_state=True` 플래그로 표현)
"""
def __init__(self, runtime: "ToolRuntime"):
"""Initialize StateBackend with runtime."""
"""`ToolRuntime`으로 StateBackend를 초기화합니다."""
self.runtime = runtime
def ls_info(self, path: str) -> list[FileInfo]:
"""List files and directories in the specified directory (non-recursive).
"""지정한 디렉토리 바로 아래의 파일/폴더를 나열합니다(비재귀).
Args:
path: Absolute path to directory.
path: 디렉토리 절대 경로.
Returns:
List of FileInfo-like dicts for files and directories directly in the directory.
Directories have a trailing / in their path and is_dir=True.
해당 디렉토리의 직계 자식 항목에 대한 FileInfo 딕셔너리 리스트.
디렉토리는 경로가 `/`로 끝나며 `is_dir=True`입니다.
"""
files = self.runtime.state.get("files", {})
infos: list[FileInfo] = []
subdirs: set[str] = set()
# Normalize path to have trailing slash for proper prefix matching
# prefix 매칭을 위해 trailing slash를 갖는 형태로 정규화
normalized_path = path if path.endswith("/") else path + "/"
for k, fd in files.items():
# Check if file is in the specified directory or a subdirectory
# 지정 디렉토리(또는 하위 디렉토리)에 속하는지 확인
if not k.startswith(normalized_path):
continue
# Get the relative path after the directory
# 디렉토리 이후의 상대 경로
relative = k[len(normalized_path) :]
# If relative path contains '/', it's in a subdirectory
# 상대 경로에 `/`가 있으면 하위 디렉토리에 있는 파일입니다.
if "/" in relative:
# Extract the immediate subdirectory name
# 즉시 하위 디렉토리 이름만 추출
subdir_name = relative.split("/")[0]
subdirs.add(normalized_path + subdir_name + "/")
continue
# This is a file directly in the current directory
# 현재 디렉토리 바로 아래에 있는 파일
size = len("\n".join(fd.get("content", [])))
infos.append(
{
@@ -84,7 +84,7 @@ class StateBackend(BackendProtocol):
}
)
# Add directories to the results
# 디렉토리 항목을 결과에 추가
for subdir in sorted(subdirs):
infos.append(
{
@@ -104,15 +104,15 @@ class StateBackend(BackendProtocol):
offset: int = 0,
limit: int = 2000,
) -> str:
"""Read file content with line numbers.
"""파일을 읽어 라인 번호가 포함된 문자열로 반환합니다.
Args:
file_path: Absolute file path.
offset: Line offset to start reading from (0-indexed).
limit: Maximum number of lines to read.
file_path: 절대 파일 경로.
offset: 읽기 시작 라인 오프셋(0-index).
limit: 최대 읽기 라인 수.
Returns:
Formatted file content with line numbers, or error message.
라인 번호가 포함된 포맷 문자열 또는 오류 메시지.
"""
files = self.runtime.state.get("files", {})
file_data = files.get(file_path)
@@ -127,8 +127,9 @@ class StateBackend(BackendProtocol):
file_path: str,
content: str,
) -> WriteResult:
"""Create a new file with content.
Returns WriteResult with files_update to update LangGraph state.
"""새 파일을 생성합니다.
LangGraph state 업데이트를 위해 `files_update`가 포함된 `WriteResult`를 반환합니다.
"""
files = self.runtime.state.get("files", {})
@@ -145,8 +146,9 @@ class StateBackend(BackendProtocol):
new_string: str,
replace_all: bool = False,
) -> EditResult:
"""Edit a file by replacing string occurrences.
Returns EditResult with files_update and occurrences.
"""파일 내 문자열을 치환하여 편집합니다.
`files_update` 및 치환 횟수(`occurrences`)가 포함된 `EditResult`를 반환합니다.
"""
files = self.runtime.state.get("files", {})
file_data = files.get(file_path)
@@ -174,7 +176,7 @@ class StateBackend(BackendProtocol):
return grep_matches_from_files(files, pattern, path, glob)
def glob_info(self, pattern: str, path: str = "/") -> list[FileInfo]:
"""Get FileInfo for files matching glob pattern."""
"""Glob 패턴에 매칭되는 파일에 대한 FileInfo를 반환합니다."""
files = self.runtime.state.get("files", {})
result = _glob_search_files(files, pattern, path)
if result == "No files found":
@@ -195,28 +197,14 @@ class StateBackend(BackendProtocol):
return infos
def upload_files(self, files: list[tuple[str, bytes]]) -> list[FileUploadResponse]:
"""Upload multiple files to state.
Args:
files: List of (path, content) tuples to upload
Returns:
List of FileUploadResponse objects, one per input file
"""
"""여러 파일을 state로 업로드합니다."""
raise NotImplementedError(
"StateBackend does not support upload_files yet. You can upload files "
"directly by passing them in invoke if you're storing files in the memory."
)
def download_files(self, paths: list[str]) -> list[FileDownloadResponse]:
"""Download multiple files from state.
Args:
paths: List of file paths to download
Returns:
List of FileDownloadResponse objects, one per input path
"""
"""여러 파일을 state에서 다운로드합니다."""
state_files = self.runtime.state.get("files", {})
responses: list[FileDownloadResponse] = []
@@ -227,7 +215,7 @@ class StateBackend(BackendProtocol):
responses.append(FileDownloadResponse(path=path, content=None, error="file_not_found"))
continue
# Convert file data to bytes
# state의 FileData를 bytes로 변환
content_str = file_data_to_string(file_data)
content_bytes = content_str.encode("utf-8")

View File

@@ -1,4 +1,4 @@
"""StoreBackend: Adapter for LangGraph's BaseStore (persistent, cross-thread)."""
"""StoreBackend: LangGraph `BaseStore` 어댑터(영구 저장, 스레드 간 공유)."""
from typing import Any
@@ -26,31 +26,20 @@ from deepagents.backends.utils import (
class StoreBackend(BackendProtocol):
"""Backend that stores files in LangGraph's BaseStore (persistent).
"""LangGraph `BaseStore`에 파일을 저장하는 백엔드입니다(영구 저장).
Uses LangGraph's Store for persistent, cross-conversation storage.
Files are organized via namespaces and persist across all threads.
LangGraph Store를 이용해 대화 스레드를 넘어서는 영구 저장을 제공합니다.
파일은 namespace로 조직되며 모든 스레드에서 지속됩니다.
The namespace can include an optional assistant_id for multi-agent isolation.
멀티 에이전트 격리를 위해 namespace에 `assistant_id`를 포함할 수도 있습니다.
"""
def __init__(self, runtime: "ToolRuntime"):
"""Initialize StoreBackend with runtime.
Args:
runtime: The ToolRuntime instance providing store access and configuration.
"""
"""`ToolRuntime`으로 StoreBackend를 초기화합니다."""
self.runtime = runtime
def _get_store(self) -> BaseStore:
"""Get the store instance.
Returns:
BaseStore instance from the runtime.
Raises:
ValueError: If no store is available in the runtime.
"""
"""runtime으로부터 store 인스턴스를 가져옵니다."""
store = self.runtime.store
if store is None:
msg = "Store is required but not available in runtime"
@@ -58,7 +47,7 @@ class StoreBackend(BackendProtocol):
return store
def _get_namespace(self) -> tuple[str, ...]:
"""Get the namespace for store operations.
"""Store 작업에 사용할 namespace를 구합니다.
Preference order:
1) Use `self.runtime.config` if present (tests pass this explicitly).
@@ -95,17 +84,7 @@ class StoreBackend(BackendProtocol):
return (namespace,)
def _convert_store_item_to_file_data(self, store_item: Item) -> dict[str, Any]:
"""Convert a store Item to FileData format.
Args:
store_item: The store Item containing file data.
Returns:
FileData dict with content, created_at, and modified_at fields.
Raises:
ValueError: If required fields are missing or have incorrect types.
"""
"""Store `Item`을 FileData 형식 딕셔너리로 변환합니다."""
if "content" not in store_item.value or not isinstance(store_item.value["content"], list):
msg = f"Store item does not contain valid content field. Got: {store_item.value.keys()}"
raise ValueError(msg)
@@ -122,14 +101,7 @@ class StoreBackend(BackendProtocol):
}
def _convert_file_data_to_store_value(self, file_data: dict[str, Any]) -> dict[str, Any]:
"""Convert FileData to a dict suitable for store.put().
Args:
file_data: The FileData to convert.
Returns:
Dictionary with content, created_at, and modified_at fields.
"""
"""FileData를 `store.put()`에 넣기 좋은 형태로 변환합니다."""
return {
"content": file_data["content"],
"created_at": file_data["created_at"],
@@ -145,7 +117,7 @@ class StoreBackend(BackendProtocol):
filter: dict[str, Any] | None = None,
page_size: int = 100,
) -> list[Item]:
"""Search store with automatic pagination to retrieve all results.
"""Store 검색을 pagination으로 반복하여 모든 결과를 수집합니다.
Args:
store: The store to search.
@@ -184,7 +156,7 @@ class StoreBackend(BackendProtocol):
return all_items
def ls_info(self, path: str) -> list[FileInfo]:
"""List files and directories in the specified directory (non-recursive).
"""지정한 디렉토리 바로 아래의 파일/폴더를 나열합니다(비재귀).
Args:
path: Absolute path to directory.
@@ -196,31 +168,31 @@ class StoreBackend(BackendProtocol):
store = self._get_store()
namespace = self._get_namespace()
# Retrieve all items and filter by path prefix locally to avoid
# coupling to store-specific filter semantics
# store별 filter semantics에 결합되지 않도록 전체 아이템을 가져온 뒤,
# path prefix 필터링은 로컬에서 수행합니다.
items = self._search_store_paginated(store, namespace)
infos: list[FileInfo] = []
subdirs: set[str] = set()
# Normalize path to have trailing slash for proper prefix matching
# prefix 매칭을 위해 trailing slash를 갖는 형태로 정규화
normalized_path = path if path.endswith("/") else path + "/"
for item in items:
# Check if file is in the specified directory or a subdirectory
# 지정 디렉토리(또는 하위 디렉토리)에 속하는지 확인
if not str(item.key).startswith(normalized_path):
continue
# Get the relative path after the directory
# 디렉토리 이후의 상대 경로
relative = str(item.key)[len(normalized_path) :]
# If relative path contains '/', it's in a subdirectory
# 상대 경로에 `/`가 있으면 하위 디렉토리입니다.
if "/" in relative:
# Extract the immediate subdirectory name
# 즉시 하위 디렉토리 이름만 추출
subdir_name = relative.split("/")[0]
subdirs.add(normalized_path + subdir_name + "/")
continue
# This is a file directly in the current directory
# 현재 디렉토리 바로 아래의 파일
try:
fd = self._convert_store_item_to_file_data(item)
except ValueError:
@@ -235,7 +207,7 @@ class StoreBackend(BackendProtocol):
}
)
# Add directories to the results
# 디렉토리 항목을 결과에 추가
for subdir in sorted(subdirs):
infos.append(
{
@@ -255,7 +227,7 @@ class StoreBackend(BackendProtocol):
offset: int = 0,
limit: int = 2000,
) -> str:
"""Read file content with line numbers.
"""파일을 읽어 라인 번호가 포함된 문자열로 반환합니다.
Args:
file_path: Absolute file path.
@@ -284,18 +256,19 @@ class StoreBackend(BackendProtocol):
file_path: str,
content: str,
) -> WriteResult:
"""Create a new file with content.
Returns WriteResult. External storage sets files_update=None.
"""새 파일을 생성하고 내용을 씁니다.
`WriteResult`를 반환합니다. 외부 스토리지 백엔드는 `files_update=None`을 사용합니다.
"""
store = self._get_store()
namespace = self._get_namespace()
# Check if file exists
# 파일 존재 여부 확인
existing = store.get(namespace, file_path)
if existing is not None:
return WriteResult(error=f"Cannot write to {file_path} because it already exists. Read and then make an edit, or write to a new path.")
# Create new file
# 새 파일 생성
file_data = create_file_data(content)
store_value = self._convert_file_data_to_store_value(file_data)
store.put(namespace, file_path, store_value)
@@ -308,13 +281,14 @@ class StoreBackend(BackendProtocol):
new_string: str,
replace_all: bool = False,
) -> EditResult:
"""Edit a file by replacing string occurrences.
Returns EditResult. External storage sets files_update=None.
"""파일 내 문자열을 치환하여 편집합니다.
`EditResult`를 반환합니다. 외부 스토리지 백엔드는 `files_update=None`을 사용합니다.
"""
store = self._get_store()
namespace = self._get_namespace()
# Get existing file
# 기존 파일 조회
item = store.get(namespace, file_path)
if item is None:
return EditResult(error=f"Error: File '{file_path}' not found")
@@ -333,12 +307,12 @@ class StoreBackend(BackendProtocol):
new_content, occurrences = result
new_file_data = update_file_data(file_data, new_content)
# Update file in store
# store에 업데이트 반영
store_value = self._convert_file_data_to_store_value(new_file_data)
store.put(namespace, file_path, store_value)
return EditResult(path=file_path, files_update=None, occurrences=int(occurrences))
# Removed legacy grep() convenience to keep lean surface
# API 표면을 간결하게 유지하기 위해 legacy grep() convenience는 제거됨
def grep_raw(
self,
@@ -386,7 +360,7 @@ class StoreBackend(BackendProtocol):
return infos
def upload_files(self, files: list[tuple[str, bytes]]) -> list[FileUploadResponse]:
"""Upload multiple files to the store.
"""여러 파일을 store에 업로드합니다.
Args:
files: List of (path, content) tuples where content is bytes.
@@ -401,18 +375,18 @@ class StoreBackend(BackendProtocol):
for path, content in files:
content_str = content.decode("utf-8")
# Create file data
# 파일 데이터 생성
file_data = create_file_data(content_str)
store_value = self._convert_file_data_to_store_value(file_data)
# Store the file
# 파일 저장
store.put(namespace, path, store_value)
responses.append(FileUploadResponse(path=path, error=None))
return responses
def download_files(self, paths: list[str]) -> list[FileDownloadResponse]:
"""Download multiple files from the store.
"""여러 파일을 store에서 다운로드합니다.
Args:
paths: List of file paths to download.
@@ -433,7 +407,7 @@ class StoreBackend(BackendProtocol):
continue
file_data = self._convert_store_item_to_file_data(item)
# Convert file data to bytes
# FileData를 bytes로 변환
content_str = file_data_to_string(file_data)
content_bytes = content_str.encode("utf-8")

View File

@@ -1,8 +1,9 @@
"""Shared utility functions for memory backend implementations.
"""메모리 백엔드 구현에서 공용으로 사용하는 유틸리티 함수 모음입니다.
This module contains both user-facing string formatters and structured
helpers used by backends and the composite router. Structured helpers
enable composition without fragile string parsing.
이 모듈에는 (1) 사용자/에이전트에게 보여줄 문자열 포매터와 (2) 백엔드 및
Composite 라우터에서 사용하는 구조화된 헬퍼가 함께 들어 있습니다.
구조화 헬퍼를 사용하면 문자열 파싱에 의존하지 않고도 조합(composition)이
가능해집니다.
"""
import re
@@ -21,15 +22,16 @@ LINE_NUMBER_WIDTH = 6
TOOL_RESULT_TOKEN_LIMIT = 20000 # Same threshold as eviction
TRUNCATION_GUIDANCE = "... [results truncated, try being more specific with your parameters]"
# Re-export protocol types for backwards compatibility
# 하위 호환성을 위해 protocol 타입을 재노출(re-export)합니다.
FileInfo = _FileInfo
GrepMatch = _GrepMatch
def sanitize_tool_call_id(tool_call_id: str) -> str:
r"""Sanitize tool_call_id to prevent path traversal and separator issues.
r"""`tool_call_id`를 안전하게 정규화합니다.
Replaces dangerous characters (., /, \) with underscores.
경로 탐색(path traversal)이나 구분자 관련 문제를 피하기 위해 위험한 문자(`.`, `/`, `\`)를
밑줄(`_`)로 치환합니다.
"""
sanitized = tool_call_id.replace(".", "_").replace("/", "_").replace("\\", "_")
return sanitized
@@ -39,16 +41,16 @@ def format_content_with_line_numbers(
content: str | list[str],
start_line: int = 1,
) -> str:
"""Format file content with line numbers (cat -n style).
"""파일 내용을 라인 번호와 함께 포맷팅합니다(`cat -n` 스타일).
Chunks lines longer than MAX_LINE_LENGTH with continuation markers (e.g., 5.1, 5.2).
`MAX_LINE_LENGTH`를 초과하는 라인은 연속 마커(예: `5.1`, `5.2`)를 붙여 여러 줄로 분할합니다.
Args:
content: File content as string or list of lines
start_line: Starting line number (default: 1)
content: 문자열 또는 라인 리스트 형태의 파일 내용
start_line: 시작 라인 번호(기본값: 1)
Returns:
Formatted content with line numbers and continuation markers
라인 번호와 연속 마커가 포함된 포맷 문자열
"""
if isinstance(content, str):
lines = content.split("\n")
@@ -64,17 +66,17 @@ def format_content_with_line_numbers(
if len(line) <= MAX_LINE_LENGTH:
result_lines.append(f"{line_num:{LINE_NUMBER_WIDTH}d}\t{line}")
else:
# Split long line into chunks with continuation markers
# 긴 라인을 여러 조각으로 분할하고 연속 마커를 부여합니다.
num_chunks = (len(line) + MAX_LINE_LENGTH - 1) // MAX_LINE_LENGTH
for chunk_idx in range(num_chunks):
start = chunk_idx * MAX_LINE_LENGTH
end = min(start + MAX_LINE_LENGTH, len(line))
chunk = line[start:end]
if chunk_idx == 0:
# First chunk: use normal line number
# 첫 번째 조각: 일반 라인 번호 사용
result_lines.append(f"{line_num:{LINE_NUMBER_WIDTH}d}\t{chunk}")
else:
# Continuation chunks: use decimal notation (e.g., 5.1, 5.2)
# 후속 조각: 소수 표기(예: 5.1, 5.2)
continuation_marker = f"{line_num}.{chunk_idx}"
result_lines.append(f"{continuation_marker:>{LINE_NUMBER_WIDTH}}\t{chunk}")
@@ -82,13 +84,13 @@ def format_content_with_line_numbers(
def check_empty_content(content: str) -> str | None:
"""Check if content is empty and return warning message.
"""콘텐츠가 비어 있는지 확인하고, 비어 있으면 경고 메시지를 반환합니다.
Args:
content: Content to check
content: 확인할 콘텐츠
Returns:
Warning message if empty, None otherwise
비어 있으면 경고 메시지, 아니면 `None`
"""
if not content or content.strip() == "":
return EMPTY_CONTENT_WARNING
@@ -96,26 +98,26 @@ def check_empty_content(content: str) -> str | None:
def file_data_to_string(file_data: dict[str, Any]) -> str:
"""Convert FileData to plain string content.
"""FileData 딕셔너리를 일반 문자열 콘텐츠로 변환합니다.
Args:
file_data: FileData dict with 'content' key
file_data: `'content'` 키를 포함한 FileData 딕셔너리
Returns:
Content as string with lines joined by newlines
줄바꿈으로 합쳐진 문자열 콘텐츠
"""
return "\n".join(file_data["content"])
def create_file_data(content: str, created_at: str | None = None) -> dict[str, Any]:
"""Create a FileData object with timestamps.
"""타임스탬프를 포함한 FileData 딕셔너리를 생성합니다.
Args:
content: File content as string
created_at: Optional creation timestamp (ISO format)
content: 파일 내용(문자열)
created_at: 생성 시각(ISO 형식) 오버라이드(선택)
Returns:
FileData dict with content and timestamps
content/created_at/modified_at를 포함한 FileData 딕셔너리
"""
lines = content.split("\n") if isinstance(content, str) else content
now = datetime.now(UTC).isoformat()
@@ -128,14 +130,14 @@ def create_file_data(content: str, created_at: str | None = None) -> dict[str, A
def update_file_data(file_data: dict[str, Any], content: str) -> dict[str, Any]:
"""Update FileData with new content, preserving creation timestamp.
"""기존 FileData의 생성 시각을 유지하면서 내용을 업데이트합니다.
Args:
file_data: Existing FileData dict
content: New content as string
file_data: 기존 FileData 딕셔너리
content: 새 콘텐츠(문자열)
Returns:
Updated FileData dict
업데이트된 FileData 딕셔너리
"""
lines = content.split("\n") if isinstance(content, str) else content
now = datetime.now(UTC).isoformat()
@@ -152,15 +154,15 @@ def format_read_response(
offset: int,
limit: int,
) -> str:
"""Format file data for read response with line numbers.
"""`read` 응답을 라인 번호와 함께 포맷팅합니다.
Args:
file_data: FileData dict
offset: Line offset (0-indexed)
limit: Maximum number of lines
file_data: FileData 딕셔너리
offset: 라인 오프셋(0-index)
limit: 최대 라인 수
Returns:
Formatted content or error message
포맷된 콘텐츠 또는 오류 메시지
"""
content = file_data_to_string(file_data)
empty_msg = check_empty_content(content)
@@ -184,16 +186,16 @@ def perform_string_replacement(
new_string: str,
replace_all: bool,
) -> tuple[str, int] | str:
"""Perform string replacement with occurrence validation.
"""문자열 치환을 수행하고, 치환 대상 문자열의 출현 횟수를 검증합니다.
Args:
content: Original content
old_string: String to replace
new_string: Replacement string
replace_all: Whether to replace all occurrences
content: 원본 콘텐츠
old_string: 치환할 문자열
new_string: 대체 문자열
replace_all: 모든 출현을 치환할지 여부
Returns:
Tuple of (new_content, occurrences) on success, or error message string
성공 시 `(new_content, occurrences)` 튜플, 실패 시 오류 메시지 문자열
"""
occurrences = content.count(old_string)
@@ -208,7 +210,7 @@ def perform_string_replacement(
def truncate_if_too_long(result: list[str] | str) -> list[str] | str:
"""Truncate list or string result if it exceeds token limit (rough estimate: 4 chars/token)."""
"""토큰 제한을 초과하는 결과를 잘라냅니다(대략 4 chars/token 기준)."""
if isinstance(result, list):
total_chars = sum(len(item) for item in result)
if total_chars > TOOL_RESULT_TOKEN_LIMIT * 4:
@@ -221,16 +223,16 @@ def truncate_if_too_long(result: list[str] | str) -> list[str] | str:
def _validate_path(path: str | None) -> str:
"""Validate and normalize a path.
"""경로를 검증하고 정규화합니다.
Args:
path: Path to validate
path: 검증할 경로
Returns:
Normalized path starting with /
`/`로 시작하고 `/`로 끝나는 정규화된 경로
Raises:
ValueError: If path is invalid
ValueError: 경로가 유효하지 않은 경우
"""
path = path or "/"
if not path or path.strip() == "":
@@ -249,7 +251,7 @@ def _glob_search_files(
pattern: str,
path: str = "/",
) -> str:
"""Search files dict for paths matching glob pattern.
"""in-memory 파일 맵에서 glob 패턴에 매칭되는 경로를 찾습니다.
Args:
files: Dictionary of file paths to FileData.
@@ -274,10 +276,9 @@ def _glob_search_files(
filtered = {fp: fd for fp, fd in files.items() if fp.startswith(normalized_path)}
# Respect standard glob semantics:
# - Patterns without path separators (e.g., "*.py") match only in the current
# directory (non-recursive) relative to `path`.
# - Use "**" explicitly for recursive matching.
# 표준 glob semantics를 따릅니다.
# - path separator가 없는 패턴(예: "*.py")은 `path` 기준 현재 디렉토리(비재귀)만 매칭합니다.
# - 재귀 매칭이 필요하면 "**"를 명시적으로 사용해야 합니다.
effective_pattern = pattern
matches = []
@@ -301,7 +302,7 @@ def _format_grep_results(
results: dict[str, list[tuple[int, str]]],
output_mode: Literal["files_with_matches", "content", "count"],
) -> str:
"""Format grep search results based on output mode.
"""Output mode에 따라 grep 검색 결과를 포맷팅합니다.
Args:
results: Dictionary mapping file paths to list of (line_num, line_content) tuples
@@ -333,7 +334,7 @@ def _grep_search_files(
glob: str | None = None,
output_mode: Literal["files_with_matches", "content", "count"] = "files_with_matches",
) -> str:
"""Search file contents for regex pattern.
"""파일 내용에서 정규식 패턴을 검색합니다.
Args:
files: Dictionary of file paths to FileData.
@@ -380,7 +381,7 @@ def _grep_search_files(
return _format_grep_results(results, output_mode)
# -------- Structured helpers for composition --------
# -------- 조합(composition)을 위한 구조화 헬퍼 --------
def grep_matches_from_files(
@@ -389,11 +390,11 @@ def grep_matches_from_files(
path: str | None = None,
glob: str | None = None,
) -> list[GrepMatch] | str:
"""Return structured grep matches from an in-memory files mapping.
"""in-memory 파일 맵에서 구조화된 grep 매칭을 반환합니다.
Returns a list of GrepMatch on success, or a string for invalid inputs
(e.g., invalid regex). We deliberately do not raise here to keep backends
non-throwing in tool contexts and preserve user-facing error messages.
성공 시 `list[GrepMatch]`를, 입력이 유효하지 않은 경우(예: 잘못된 정규식)는 오류 문자열을 반환합니다.
백엔드가 도구(tool) 컨텍스트에서 예외를 던지지 않도록 하고, 사용자/에이전트에게 보여줄 오류 메시지를
유지하기 위해 의도적으로 raise 하지 않습니다.
"""
try:
regex = re.compile(pattern)
@@ -419,7 +420,7 @@ def grep_matches_from_files(
def build_grep_results_dict(matches: list[GrepMatch]) -> dict[str, list[tuple[int, str]]]:
"""Group structured matches into the legacy dict form used by formatters."""
"""구조화 매칭을 기존(formatter) 호환 dict 형태로 그룹화합니다."""
grouped: dict[str, list[tuple[int, str]]] = {}
for m in matches:
grouped.setdefault(m["path"], []).append((m["line"], m["text"]))
@@ -430,7 +431,7 @@ def format_grep_matches(
matches: list[GrepMatch],
output_mode: Literal["files_with_matches", "content", "count"],
) -> str:
"""Format structured grep matches using existing formatting logic."""
"""기존 포맷팅 로직을 이용해 구조화 grep 매칭을 문자열로 포맷팅합니다."""
if not matches:
return "No matches found"
return _format_grep_results(build_grep_results_dict(matches), output_mode)

@@ -1,4 +1,4 @@
"""Deepagents come with planning, filesystem, and subagents."""
"""Deepagents는 계획, 파일 시스템, 서브에이전트 기능을 제공합니다."""
from collections.abc import Callable, Sequence
from typing import Any
@@ -30,10 +30,10 @@ BASE_AGENT_PROMPT = "In order to complete the objective that the user asks of yo
def get_default_model() -> ChatAnthropic:
"""Get the default model for deep agents.
"""Deep agents의 기본 모델을 가져옵니다.
Returns:
`ChatAnthropic` instance configured with Claude Sonnet 4.5.
Claude Sonnet 4.5로 구성된 `ChatAnthropic` 인스턴스.
"""
return ChatAnthropic(
model_name="claude-sonnet-4-5-20250929",
@@ -60,56 +60,53 @@ def create_deep_agent(
name: str | None = None,
cache: BaseCache | None = None,
) -> CompiledStateGraph:
"""Create a deep agent.
"""DeepAgent를 생성합니다.
This agent will by default have access to a tool to write todos (`write_todos`),
seven file and execution tools: `ls`, `read_file`, `write_file`, `edit_file`, `glob`, `grep`, `execute`,
and a tool to call subagents.
이 에이전트는 기본적으로 아래 기능(도구/미들웨어)을 포함합니다.
- todo 작성 도구: `write_todos`
- 파일/실행 도구: `ls`, `read_file`, `write_file`, `edit_file`, `glob`, `grep`, `execute`
- 서브에이전트 호출 도구
The `execute` tool allows running shell commands if the backend implements `SandboxBackendProtocol`.
For non-sandbox backends, the `execute` tool will return an error message.
`execute` 도구는 backend가 `SandboxBackendProtocol`을 구현할 때 셸 커맨드를 실행할 수 있습니다.
샌드박스가 아닌 backend에서는 `execute`가 오류 메시지를 반환합니다.
Args:
model: The model to use. Defaults to `claude-sonnet-4-5-20250929`.
tools: The tools the agent should have access to.
system_prompt: The additional instructions the agent should have. Will go in
the system prompt.
middleware: Additional middleware to apply after standard middleware.
subagents: The subagents to use.
Each subagent should be a `dict` with the following keys:
model: 사용할 모델. 기본값은 `claude-sonnet-4-5-20250929`.
tools: 에이전트에 추가로 제공할 도구 목록.
system_prompt: 에이전트에 추가로 주입할 지침. system prompt에 포함됩니다.
middleware: 표준 미들웨어 뒤에 추가로 적용할 미들웨어 목록.
subagents: 사용할 서브에이전트 정의 목록.
각 서브에이전트는 아래 키를 가진 `dict` 형태입니다.
- `name`
- `description` (used by the main agent to decide whether to call the sub agent)
- `prompt` (used as the system prompt in the subagent)
- `description` (메인 에이전트가 어떤 서브에이전트를 호출할지 결정할 때 사용)
- `prompt` (서브에이전트의 system prompt로 사용)
- (optional) `tools`
- (optional) `model` (either a `LanguageModelLike` instance or `dict` settings)
- (optional) `middleware` (list of `AgentMiddleware`)
skills: Optional list of skill source paths (e.g., `["/skills/user/", "/skills/project/"]`).
- (optional) `model` (`LanguageModelLike` 인스턴스 또는 설정 `dict`)
- (optional) `middleware` (`AgentMiddleware` 리스트)
skills: 스킬 소스 경로 목록(예: `["/skills/user/", "/skills/project/"]`) (선택).
Paths must be specified using POSIX conventions (forward slashes) and are relative
to the backend's root. When using `StateBackend` (default), provide skill files via
`invoke(files={...})`. With `FilesystemBackend`, skills are loaded from disk relative
to the backend's `root_dir`. Later sources override earlier ones for skills with the
same name (last one wins).
memory: Optional list of memory file paths (`AGENTS.md` files) to load
(e.g., `["/memory/AGENTS.md"]`). Display names are automatically derived from paths.
Memory is loaded at agent startup and added into the system prompt.
response_format: A structured output response format to use for the agent.
context_schema: The schema of the deep agent.
checkpointer: Optional `Checkpointer` for persisting agent state between runs.
store: Optional store for persistent storage (required if backend uses `StoreBackend`).
backend: Optional backend for file storage and execution.
경로는 POSIX 형식(슬래시 `/`)으로 지정하며 backend root 기준 상대 경로입니다.
`StateBackend`(기본값)를 사용할 때는 `invoke(files={...})`로 파일을 제공해야 합니다.
`FilesystemBackend`에서는 backend의 `root_dir` 기준으로 디스크에서 스킬을 로드합니다.
같은 이름의 스킬이 중복될 경우 뒤에 오는 소스가 우선합니다(last one wins).
memory: 로드할 메모리 파일(AGENTS.md) 경로 목록(예: `["/memory/AGENTS.md"]`) (선택).
표시 이름은 경로에서 자동 유도되며, 에이전트 시작 시 로드되어 system prompt에 포함됩니다.
response_format: 구조화 출력 응답 포맷(선택).
context_schema: DeepAgent의 컨텍스트 스키마(선택).
checkpointer: 실행 간 state를 저장하기 위한 `Checkpointer`(선택).
store: 영구 저장을 위한 store(선택). backend가 `StoreBackend`를 사용할 경우 필요합니다.
backend: 파일 저장/실행을 위한 backend(선택).
Pass either a `Backend` instance or a callable factory like `lambda rt: StateBackend(rt)`.
For execution support, use a backend that implements `SandboxBackendProtocol`.
interrupt_on: Mapping of tool names to interrupt configs.
debug: Whether to enable debug mode. Passed through to `create_agent`.
name: The name of the agent. Passed through to `create_agent`.
cache: The cache to use for the agent. Passed through to `create_agent`.
`Backend` 인스턴스 또는 `lambda rt: StateBackend(rt)` 같은 팩토리 함수를 전달할 수 있습니다.
실행 지원이 필요하면 `SandboxBackendProtocol`을 구현한 backend를 사용하세요.
interrupt_on: 도구 이름 → interrupt 설정 매핑(선택).
debug: debug 모드 활성화 여부. `create_agent`로 전달됩니다.
name: 에이전트 이름. `create_agent`로 전달됩니다.
cache: 캐시 인스턴스. `create_agent`로 전달됩니다.
Returns:
A configured deep agent.
설정된(compiled) deep agent 그래프.
"""
if model is None:
model = get_default_model()
@@ -128,7 +125,7 @@ def create_deep_agent(
trigger = ("tokens", 170000)
keep = ("messages", 6)
# Build middleware stack for subagents (includes skills if provided)
# 서브에이전트용 미들웨어 스택 구성(skills가 있으면 포함)
subagent_middleware: list[AgentMiddleware] = [
TodoListMiddleware(),
]
@@ -151,7 +148,7 @@ def create_deep_agent(
]
)
# Build main agent middleware stack
# 메인 에이전트 미들웨어 스택 구성
deepagent_middleware: list[AgentMiddleware] = [
TodoListMiddleware(),
]
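위 `create_deep_agent` docstring의 `subagents` 설명을 바탕으로 한 최소 구성 예시입니다. `researcher`라는 이름과 프롬프트 문구는 가설적인 값이며, 필수 키는 docstring에 명시된 `name`/`description`/`prompt`입니다.

```python
# 서브에이전트 정의: 메인 에이전트가 description을 보고 호출 여부를 결정합니다.
research_subagent = {
    "name": "researcher",
    "description": "웹 리서치가 필요할 때 호출하는 서브에이전트",
    "prompt": "You are a focused research assistant.",
    # 선택 키: "tools", "model", "middleware"
}

# docstring 기준 필수 키 검증
required_keys = {"name", "description", "prompt"}
assert required_keys <= set(research_subagent)
```

이렇게 정의한 dict 목록을 `create_deep_agent(subagents=[research_subagent], ...)` 형태로 전달한다는 것이 docstring의 의도입니다.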

@@ -1,4 +1,4 @@
"""Middleware for the DeepAgent."""
"""DeepAgent에 사용되는 미들웨어 모듈입니다."""
from deepagents.middleware.filesystem import FilesystemMiddleware
from deepagents.middleware.memory import MemoryMiddleware

@@ -1,4 +1,4 @@
"""Middleware for providing filesystem tools to an agent."""
"""에이전트에 파일 시스템 도구를 제공하는 미들웨어입니다."""
# ruff: noqa: E501
import os
@@ -44,24 +44,24 @@ DEFAULT_READ_LIMIT = 500
class FileData(TypedDict):
"""Data structure for storing file contents with metadata."""
"""파일 내용을 메타데이터와 함께 저장하기 위한 데이터 구조입니다."""
content: list[str]
"""Lines of the file."""
"""파일의 각 라인."""
created_at: str
"""ISO 8601 timestamp of file creation."""
"""파일 생성 시각(ISO 8601)."""
modified_at: str
"""ISO 8601 timestamp of last modification."""
"""파일 마지막 수정 시각(ISO 8601)."""
def _file_data_reducer(left: dict[str, FileData] | None, right: dict[str, FileData | None]) -> dict[str, FileData]:
"""Merge file updates with support for deletions.
"""파일 업데이트를 병합하며, 삭제를 지원합니다.
This reducer enables file deletion by treating `None` values in the right
dictionary as deletion markers. It's designed to work with LangGraph's
state management where annotated reducers control how state updates merge.
오른쪽 딕셔너리의 값이 `None`인 엔트리를 “삭제 마커”로 취급해 삭제를 구현합니다.
LangGraph의 state 관리에서 annotated reducer가 state 업데이트 병합 방식을 제어한다는
전제에 맞춰 설계되었습니다.
Args:
left: Existing files dictionary. May be `None` during initialization.
@@ -93,15 +93,13 @@ def _file_data_reducer(left: dict[str, FileData] | None, right: dict[str, FileDa
def _validate_path(path: str, *, allowed_prefixes: Sequence[str] | None = None) -> str:
r"""Validate and normalize file path for security.
r"""보안 관점에서 파일 경로를 검증하고 정규화합니다.
Ensures paths are safe to use by preventing directory traversal attacks
and enforcing consistent formatting. All paths are normalized to use
forward slashes and start with a leading slash.
디렉토리 트래버설 공격을 방지하고, 일관된 포맷을 강제하여 안전한 경로만 사용하도록 합니다.
모든 경로는 `/`로 시작하며, 경로 구분자는 forward slash(`/`)로 정규화됩니다.
This function is designed for virtual filesystem paths and rejects
Windows absolute paths (e.g., C:/..., F:/...) to maintain consistency
and prevent path format ambiguity.
이 함수는 “가상 파일시스템 경로(virtual paths)”를 대상으로 설계되었으며,
경로 형식의 모호함을 피하기 위해 Windows 절대 경로(예: `C:/...`, `F:/...`)는 거부합니다.
Args:
path: The path to validate and normalize.
@@ -130,8 +128,8 @@ def _validate_path(path: str, *, allowed_prefixes: Sequence[str] | None = None)
msg = f"Path traversal not allowed: {path}"
raise ValueError(msg)
# Reject Windows absolute paths (e.g., C:\..., D:/...)
# This maintains consistency in virtual filesystem paths
# Windows 절대 경로(예: C:\..., D:/...)는 거부합니다.
# 가상 파일시스템 경로 포맷의 일관성을 유지하기 위함입니다.
if re.match(r"^[a-zA-Z]:", path):
msg = f"Windows absolute paths are not supported: {path}. Please use virtual paths starting with / (e.g., /workspace/file.txt)"
raise ValueError(msg)
@@ -150,10 +148,10 @@ def _validate_path(path: str, *, allowed_prefixes: Sequence[str] | None = None)
class FilesystemState(AgentState):
"""State for the filesystem middleware."""
"""FilesystemMiddleware의 state 스키마입니다."""
files: Annotated[NotRequired[dict[str, FileData]], _file_data_reducer]
"""Files in the filesystem."""
"""파일 시스템에 저장된 파일들."""
LIST_FILES_TOOL_DESCRIPTION = """Lists all files in the filesystem, filtering by directory.
@@ -296,14 +294,14 @@ Use this tool to run commands, scripts, tests, builds, and other shell operation
def _get_backend(backend: BACKEND_TYPES, runtime: ToolRuntime) -> BackendProtocol:
"""Get the resolved backend instance from backend or factory.
"""백엔드 또는 팩토리에서 해결된 백엔드 인스턴스를 가져옵니다.
Args:
backend: Backend instance or factory function.
runtime: The tool runtime context.
backend: 백엔드 인스턴스 또는 팩토리 함수.
runtime: 도구 런타임 컨텍스트.
Returns:
Resolved backend instance.
해결된 백엔드 인스턴스.
"""
if callable(backend):
return backend(runtime)
@@ -314,19 +312,19 @@ def _ls_tool_generator(
backend: BackendProtocol | Callable[[ToolRuntime], BackendProtocol],
custom_description: str | None = None,
) -> BaseTool:
"""Generate the ls (list files) tool.
"""파일 목록(ls) 도구를 생성합니다.
Args:
backend: Backend to use for file storage, or a factory function that takes runtime and returns a backend.
custom_description: Optional custom description for the tool.
backend: 파일 저장에 사용할 백엔드, 또는 런타임을 받아 백엔드를 반환하는 팩토리 함수.
custom_description: 도구의 선택적 사용자 정의 설명.
Returns:
Configured ls tool that lists files using the backend.
백엔드를 사용하여 파일을 나열하는 구성된 ls 도구.
"""
tool_description = custom_description or LIST_FILES_TOOL_DESCRIPTION
def sync_ls(runtime: ToolRuntime[None, FilesystemState], path: str) -> str:
"""Synchronous wrapper for ls tool."""
"""파일 목록(ls) 도구의 동기 래퍼입니다."""
resolved_backend = _get_backend(backend, runtime)
validated_path = _validate_path(path)
infos = resolved_backend.ls_info(validated_path)
@@ -335,7 +333,7 @@ def _ls_tool_generator(
return str(result)
async def async_ls(runtime: ToolRuntime[None, FilesystemState], path: str) -> str:
"""Asynchronous wrapper for ls tool."""
"""파일 목록(ls) 도구의 비동기 래퍼입니다."""
resolved_backend = _get_backend(backend, runtime)
validated_path = _validate_path(path)
infos = await resolved_backend.als_info(validated_path)
@@ -355,14 +353,14 @@ def _read_file_tool_generator(
backend: BackendProtocol | Callable[[ToolRuntime], BackendProtocol],
custom_description: str | None = None,
) -> BaseTool:
"""Generate the read_file tool.
"""`read_file` 도구를 생성합니다.
Args:
backend: Backend to use for file storage, or a factory function that takes runtime and returns a backend.
custom_description: Optional custom description for the tool.
backend: 파일 저장에 사용할 백엔드 또는 (runtime을 받아 백엔드를 반환하는) 팩토리 함수.
custom_description: 도구 설명을 커스텀할 때 사용(선택).
Returns:
Configured read_file tool that reads files using the backend.
backend를 통해 파일을 읽는 `read_file` 도구.
"""
tool_description = custom_description or READ_FILE_TOOL_DESCRIPTION
@@ -372,7 +370,7 @@ def _read_file_tool_generator(
offset: int = DEFAULT_READ_OFFSET,
limit: int = DEFAULT_READ_LIMIT,
) -> str:
"""Synchronous wrapper for read_file tool."""
"""`read_file` 도구의 동기 래퍼입니다."""
resolved_backend = _get_backend(backend, runtime)
file_path = _validate_path(file_path)
return resolved_backend.read(file_path, offset=offset, limit=limit)
@@ -383,7 +381,7 @@ def _read_file_tool_generator(
offset: int = DEFAULT_READ_OFFSET,
limit: int = DEFAULT_READ_LIMIT,
) -> str:
"""Asynchronous wrapper for read_file tool."""
"""`read_file` 도구의 비동기 래퍼입니다."""
resolved_backend = _get_backend(backend, runtime)
file_path = _validate_path(file_path)
return await resolved_backend.aread(file_path, offset=offset, limit=limit)
@@ -400,14 +398,14 @@ def _write_file_tool_generator(
backend: BackendProtocol | Callable[[ToolRuntime], BackendProtocol],
custom_description: str | None = None,
) -> BaseTool:
"""Generate the write_file tool.
"""`write_file` 도구를 생성합니다.
Args:
backend: Backend to use for file storage, or a factory function that takes runtime and returns a backend.
custom_description: Optional custom description for the tool.
backend: 파일 저장에 사용할 백엔드 또는 (runtime을 받아 백엔드를 반환하는) 팩토리 함수.
custom_description: 도구 설명을 커스텀할 때 사용(선택).
Returns:
Configured write_file tool that creates new files using the backend.
backend를 통해 새 파일을 생성하는 `write_file` 도구.
"""
tool_description = custom_description or WRITE_FILE_TOOL_DESCRIPTION
@@ -416,13 +414,13 @@ def _write_file_tool_generator(
content: str,
runtime: ToolRuntime[None, FilesystemState],
) -> Command | str:
"""Synchronous wrapper for write_file tool."""
"""`write_file` 도구의 동기 래퍼입니다."""
resolved_backend = _get_backend(backend, runtime)
file_path = _validate_path(file_path)
res: WriteResult = resolved_backend.write(file_path, content)
if res.error:
return res.error
# If backend returns state update, wrap into Command with ToolMessage
# backend가 state 업데이트를 반환하면, ToolMessage와 함께 Command로 감쌉니다.
if res.files_update is not None:
return Command(
update={
@@ -442,13 +440,13 @@ def _write_file_tool_generator(
content: str,
runtime: ToolRuntime[None, FilesystemState],
) -> Command | str:
"""Asynchronous wrapper for write_file tool."""
"""`write_file` 도구의 비동기 래퍼입니다."""
resolved_backend = _get_backend(backend, runtime)
file_path = _validate_path(file_path)
res: WriteResult = await resolved_backend.awrite(file_path, content)
if res.error:
return res.error
# If backend returns state update, wrap into Command with ToolMessage
# backend가 state 업데이트를 반환하면, ToolMessage와 함께 Command로 감쌉니다.
if res.files_update is not None:
return Command(
update={
@@ -475,14 +473,14 @@ def _edit_file_tool_generator(
backend: BackendProtocol | Callable[[ToolRuntime], BackendProtocol],
custom_description: str | None = None,
) -> BaseTool:
"""Generate the edit_file tool.
"""`edit_file` 도구를 생성합니다.
Args:
backend: Backend to use for file storage, or a factory function that takes runtime and returns a backend.
custom_description: Optional custom description for the tool.
backend: 파일 저장에 사용할 백엔드 또는 (runtime을 받아 백엔드를 반환하는) 팩토리 함수.
custom_description: 도구 설명을 커스텀할 때 사용(선택).
Returns:
Configured edit_file tool that performs string replacements in files using the backend.
backend를 통해 파일 내 문자열 치환을 수행하는 `edit_file` 도구.
"""
tool_description = custom_description or EDIT_FILE_TOOL_DESCRIPTION
@@ -494,7 +492,7 @@ def _edit_file_tool_generator(
*,
replace_all: bool = False,
) -> Command | str:
"""Synchronous wrapper for edit_file tool."""
"""`edit_file` 도구의 동기 래퍼입니다."""
resolved_backend = _get_backend(backend, runtime)
file_path = _validate_path(file_path)
res: EditResult = resolved_backend.edit(file_path, old_string, new_string, replace_all=replace_all)
@@ -522,7 +520,7 @@ def _edit_file_tool_generator(
*,
replace_all: bool = False,
) -> Command | str:
"""Asynchronous wrapper for edit_file tool."""
"""`edit_file` 도구의 비동기 래퍼입니다."""
resolved_backend = _get_backend(backend, runtime)
file_path = _validate_path(file_path)
res: EditResult = await resolved_backend.aedit(file_path, old_string, new_string, replace_all=replace_all)
@@ -554,19 +552,19 @@ def _glob_tool_generator(
backend: BackendProtocol | Callable[[ToolRuntime], BackendProtocol],
custom_description: str | None = None,
) -> BaseTool:
"""Generate the glob tool.
"""`glob` 도구를 생성합니다.
Args:
backend: Backend to use for file storage, or a factory function that takes runtime and returns a backend.
custom_description: Optional custom description for the tool.
backend: 파일 저장에 사용할 백엔드 또는 (runtime을 받아 백엔드를 반환하는) 팩토리 함수.
custom_description: 도구 설명을 커스텀할 때 사용(선택).
Returns:
Configured glob tool that finds files by pattern using the backend.
backend를 통해 패턴 매칭으로 파일을 찾는 `glob` 도구.
"""
tool_description = custom_description or GLOB_TOOL_DESCRIPTION
def sync_glob(pattern: str, runtime: ToolRuntime[None, FilesystemState], path: str = "/") -> str:
"""Synchronous wrapper for glob tool."""
"""`glob` 도구의 동기 래퍼입니다."""
resolved_backend = _get_backend(backend, runtime)
infos = resolved_backend.glob_info(pattern, path=path)
paths = [fi.get("path", "") for fi in infos]
@@ -574,7 +572,7 @@ def _glob_tool_generator(
return str(result)
async def async_glob(pattern: str, runtime: ToolRuntime[None, FilesystemState], path: str = "/") -> str:
"""Asynchronous wrapper for glob tool."""
"""`glob` 도구의 비동기 래퍼입니다."""
resolved_backend = _get_backend(backend, runtime)
infos = await resolved_backend.aglob_info(pattern, path=path)
paths = [fi.get("path", "") for fi in infos]
@@ -593,14 +591,14 @@ def _grep_tool_generator(
backend: BackendProtocol | Callable[[ToolRuntime], BackendProtocol],
custom_description: str | None = None,
) -> BaseTool:
"""Generate the grep tool.
"""`grep` 도구를 생성합니다.
Args:
backend: Backend to use for file storage, or a factory function that takes runtime and returns a backend.
custom_description: Optional custom description for the tool.
backend: 파일 저장에 사용할 백엔드 또는 (runtime을 받아 백엔드를 반환하는) 팩토리 함수.
custom_description: 도구 설명을 커스텀할 때 사용(선택).
Returns:
Configured grep tool that searches for patterns in files using the backend.
backend를 통해 파일 내 패턴 검색을 수행하는 `grep` 도구.
"""
tool_description = custom_description or GREP_TOOL_DESCRIPTION
@@ -611,7 +609,7 @@ def _grep_tool_generator(
glob: str | None = None,
output_mode: Literal["files_with_matches", "content", "count"] = "files_with_matches",
) -> str:
"""Synchronous wrapper for grep tool."""
"""`grep` 도구의 동기 래퍼입니다."""
resolved_backend = _get_backend(backend, runtime)
raw = resolved_backend.grep_raw(pattern, path=path, glob=glob)
if isinstance(raw, str):
@@ -626,7 +624,7 @@ def _grep_tool_generator(
glob: str | None = None,
output_mode: Literal["files_with_matches", "content", "count"] = "files_with_matches",
) -> str:
"""Asynchronous wrapper for grep tool."""
"""`grep` 도구의 비동기 래퍼입니다."""
resolved_backend = _get_backend(backend, runtime)
raw = await resolved_backend.agrep_raw(pattern, path=path, glob=glob)
if isinstance(raw, str):
@@ -643,25 +641,25 @@ def _grep_tool_generator(
def _supports_execution(backend: BackendProtocol) -> bool:
"""Check if a backend supports command execution.
"""backend가 커맨드 실행을 지원하는지 확인합니다.
For CompositeBackend, checks if the default backend supports execution.
For other backends, checks if they implement SandboxBackendProtocol.
- `CompositeBackend`인 경우: `default` backend가 실행을 지원하는지 확인합니다.
- 그 외의 경우: `SandboxBackendProtocol` 구현 여부로 판단합니다.
Args:
backend: The backend to check.
backend: 확인할 backend.
Returns:
True if the backend supports execution, False otherwise.
실행을 지원하면 `True`, 아니면 `False`.
"""
# Import here to avoid circular dependency
# 순환 의존(circular dependency)을 피하기 위해 여기서 import 합니다.
from deepagents.backends.composite import CompositeBackend
# For CompositeBackend, check the default backend
# CompositeBackend의 default backend가 실행을 지원하는지 확인합니다.
if isinstance(backend, CompositeBackend):
return isinstance(backend.default, SandboxBackendProtocol)
# For other backends, use isinstance check
# 그 외 backend isinstance로 실행 지원 여부를 판단합니다.
return isinstance(backend, SandboxBackendProtocol)
@@ -669,14 +667,14 @@ def _execute_tool_generator(
backend: BackendProtocol | Callable[[ToolRuntime], BackendProtocol],
custom_description: str | None = None,
) -> BaseTool:
"""Generate the execute tool for sandbox command execution.
"""샌드박스 커맨드 실행을 위한 `execute` 도구를 생성합니다.
Args:
backend: Backend to use for execution, or a factory function that takes runtime and returns a backend.
custom_description: Optional custom description for the tool.
backend: 실행에 사용할 backend 또는 (runtime을 받아 backend를 반환하는) 팩토리 함수.
custom_description: 도구 설명을 커스텀할 때 사용(선택).
Returns:
Configured execute tool that runs commands if backend supports SandboxBackendProtocol.
backend가 `SandboxBackendProtocol`을 지원할 때 커맨드를 실행하는 `execute` 도구.
"""
tool_description = custom_description or EXECUTE_TOOL_DESCRIPTION
@@ -684,10 +682,10 @@ def _execute_tool_generator(
command: str,
runtime: ToolRuntime[None, FilesystemState],
) -> str:
"""Synchronous wrapper for execute tool."""
"""`execute` 도구의 동기 래퍼입니다."""
resolved_backend = _get_backend(backend, runtime)
# Runtime check - fail gracefully if not supported
# 런타임 체크: 지원하지 않으면 명시적인 오류 메시지를 반환
if not _supports_execution(resolved_backend):
return (
"Error: Execution not available. This agent's backend "
@@ -698,10 +696,10 @@ def _execute_tool_generator(
try:
result = resolved_backend.execute(command)
except NotImplementedError as e:
# Handle case where execute() exists but raises NotImplementedError
# execute()가 존재하지만 NotImplementedError를 던지는 케이스 처리
return f"Error: Execution not available. {e}"
# Format output for LLM consumption
# (LLM 입력으로 쓰기 좋게) 출력 포맷팅
parts = [result.output]
if result.exit_code is not None:
@@ -717,10 +715,10 @@ def _execute_tool_generator(
command: str,
runtime: ToolRuntime[None, FilesystemState],
) -> str:
"""Asynchronous wrapper for execute tool."""
"""`execute` 도구의 비동기 래퍼입니다."""
resolved_backend = _get_backend(backend, runtime)
# Runtime check - fail gracefully if not supported
# 런타임 체크: 지원하지 않으면 명시적인 오류 메시지를 반환
if not _supports_execution(resolved_backend):
return (
"Error: Execution not available. This agent's backend "
@@ -731,10 +729,10 @@ def _execute_tool_generator(
try:
result = await resolved_backend.aexecute(command)
except NotImplementedError as e:
# Handle case where execute() exists but raises NotImplementedError
# execute()가 존재하지만 NotImplementedError를 던지는 케이스 처리
return f"Error: Execution not available. {e}"
# Format output for LLM consumption
# (LLM 입력으로 쓰기 좋게) 출력 포맷팅
parts = [result.output]
if result.exit_code is not None:
@@ -769,14 +767,14 @@ def _get_filesystem_tools(
backend: BackendProtocol,
custom_tool_descriptions: dict[str, str] | None = None,
) -> list[BaseTool]:
"""Get filesystem and execution tools.
"""파일 시스템 도구(및 가능한 경우 실행 도구)를 구성해 반환합니다.
Args:
backend: Backend to use for file storage and optional execution, or a factory function that takes runtime and returns a backend.
custom_tool_descriptions: Optional custom descriptions for tools.
backend: 파일 저장(및 선택적 실행)에 사용할 backend 또는 (runtime을 받아 backend를 반환하는) 팩토리 함수.
custom_tool_descriptions: 도구별 커스텀 설명(선택).
Returns:
List of configured tools: ls, read_file, write_file, edit_file, glob, grep, execute.
구성된 도구 리스트: `ls`, `read_file`, `write_file`, `edit_file`, `glob`, `grep`, `execute`.
"""
if custom_tool_descriptions is None:
custom_tool_descriptions = {}
@@ -799,23 +797,22 @@ Here are the first 10 lines of the result:
class FilesystemMiddleware(AgentMiddleware):
"""Middleware for providing filesystem and optional execution tools to an agent.
"""에이전트에 파일 시스템 도구(및 선택적 실행 도구)를 제공하는 미들웨어입니다.
This middleware adds filesystem tools to the agent: ls, read_file, write_file,
edit_file, glob, and grep. Files can be stored using any backend that implements
the BackendProtocol.
이 미들웨어는 에이전트에 아래 도구들을 추가합니다.
- 파일 시스템 도구: `ls`, `read_file`, `write_file`, `edit_file`, `glob`, `grep`
- (선택) 실행 도구: `execute` (backend가 `SandboxBackendProtocol`을 구현할 때)
If the backend implements SandboxBackendProtocol, an execute tool is also added
for running shell commands.
파일 저장은 `BackendProtocol`을 구현하는 어떤 backend든 사용할 수 있습니다.
Args:
backend: Backend for file storage and optional execution. If not provided, defaults to StateBackend
(ephemeral storage in agent state). For persistent storage or hybrid setups,
use CompositeBackend with custom routes. For execution support, use a backend
that implements SandboxBackendProtocol.
system_prompt: Optional custom system prompt override.
custom_tool_descriptions: Optional custom tool descriptions override.
tool_token_limit_before_evict: Optional token limit before evicting a tool result to the filesystem.
backend: 파일 저장(및 선택적 실행)에 사용할 backend. 미지정 시 `StateBackend`를 기본값으로 사용합니다
(에이전트 state에 저장되는 일시적(ephemeral) 스토리지).
영구 저장이나 하이브리드 구성이 필요하면 route를 설정한 `CompositeBackend`를 사용하세요.
커맨드 실행이 필요하면 `SandboxBackendProtocol`을 구현한 backend를 사용해야 합니다.
system_prompt: 커스텀 system prompt 오버라이드(선택).
custom_tool_descriptions: 도구 설명 오버라이드(선택).
tool_token_limit_before_evict: tool 결과를 파일 시스템으로 축출(evict)하기 전 토큰 제한(선택).
Example:
```python
@@ -823,14 +820,14 @@ class FilesystemMiddleware(AgentMiddleware):
from deepagents.backends import StateBackend, StoreBackend, CompositeBackend
from langchain.agents import create_agent
# Ephemeral storage only (default, no execution)
# 일시적 저장만 사용(기본값, 실행 도구 없음)
agent = create_agent(middleware=[FilesystemMiddleware()])
# With hybrid storage (ephemeral + persistent /memories/)
# 하이브리드 저장(일시적 + /memories/ 영구 저장)
backend = CompositeBackend(default=StateBackend(), routes={"/memories/": StoreBackend()})
agent = create_agent(middleware=[FilesystemMiddleware(backend=backend)])
# With sandbox backend (supports execution)
# 샌드박스 backend(실행 도구 지원)
from my_sandbox import DockerSandboxBackend
sandbox = DockerSandboxBackend(container_id="my-container")
@@ -848,33 +845,33 @@ class FilesystemMiddleware(AgentMiddleware):
custom_tool_descriptions: dict[str, str] | None = None,
tool_token_limit_before_evict: int | None = 20000,
) -> None:
"""Initialize the filesystem middleware.
"""파일 시스템 미들웨어를 초기화합니다.
Args:
backend: Backend for file storage and optional execution, or a factory callable.
Defaults to StateBackend if not provided.
system_prompt: Optional custom system prompt override.
custom_tool_descriptions: Optional custom tool descriptions override.
tool_token_limit_before_evict: Optional token limit before evicting a tool result to the filesystem.
backend: 파일 저장/실행에 사용할 backend 또는 팩토리 callable.
미지정 시 `StateBackend`를 기본값으로 사용합니다.
system_prompt: 커스텀 system prompt 오버라이드(선택).
custom_tool_descriptions: 도구 설명 오버라이드(선택).
tool_token_limit_before_evict: tool 결과를 파일 시스템으로 축출(evict)하기 전 토큰 제한(선택).
"""
self.tool_token_limit_before_evict = tool_token_limit_before_evict
# Use provided backend or default to StateBackend factory
# backend가 주어지지 않으면 StateBackend 팩토리를 기본값으로 사용
self.backend = backend if backend is not None else (lambda rt: StateBackend(rt))
# Set system prompt (allow full override or None to generate dynamically)
# system prompt 설정(완전 오버라이드 또는 None이면 동적 생성)
self._custom_system_prompt = system_prompt
self.tools = _get_filesystem_tools(self.backend, custom_tool_descriptions)
def _get_backend(self, runtime: ToolRuntime) -> BackendProtocol:
"""Get the resolved backend instance from backend or factory.
"""백엔드 인스턴스/팩토리로부터 실제 백엔드를 해석(resolve)합니다.
Args:
runtime: The tool runtime context.
runtime: tool runtime 컨텍스트.
Returns:
Resolved backend instance.
해석된 backend 인스턴스.
"""
if callable(self.backend):
return self.backend(runtime)
@@ -885,38 +882,38 @@ class FilesystemMiddleware(AgentMiddleware):
request: ModelRequest,
handler: Callable[[ModelRequest], ModelResponse],
) -> ModelResponse:
"""Update the system prompt and filter tools based on backend capabilities.
"""백엔드 capability에 따라 system prompt/도구 목록을 갱신합니다.
Args:
request: The model request being processed.
handler: The handler function to call with the modified request.
request: 처리 중인 모델 요청.
handler: 수정된 요청으로 호출할 핸들러 함수.
Returns:
The model response from the handler.
핸들러가 반환한 모델 응답.
"""
# Check if execute tool is present and if backend supports it
# execute 도구가 있는지, 그리고 backend가 실행을 지원하는지 확인
has_execute_tool = any((tool.name if hasattr(tool, "name") else tool.get("name")) == "execute" for tool in request.tools)
backend_supports_execution = False
if has_execute_tool:
# Resolve backend to check execution support
# 실행 지원 여부를 확인하기 위해 backend를 해석
backend = self._get_backend(request.runtime)
backend_supports_execution = _supports_execution(backend)
# If execute tool exists but backend doesn't support it, filter it out
# execute 도구가 있지만 backend가 지원하지 않으면 tools에서 제거
if not backend_supports_execution:
filtered_tools = [tool for tool in request.tools if (tool.name if hasattr(tool, "name") else tool.get("name")) != "execute"]
request = request.override(tools=filtered_tools)
has_execute_tool = False
# Use custom system prompt if provided, otherwise generate dynamically
# 커스텀 system prompt가 있으면 사용하고, 없으면 사용 가능한 도구 기준으로 동적 생성
if self._custom_system_prompt is not None:
system_prompt = self._custom_system_prompt
else:
# Build dynamic system prompt based on available tools
# 사용 가능한 도구에 따라 동적 system prompt 구성
prompt_parts = [FILESYSTEM_SYSTEM_PROMPT]
# Add execution instructions if execute tool is available
# execute 도구가 가능하면 실행 관련 지침 추가
if has_execute_tool and backend_supports_execution:
prompt_parts.append(EXECUTION_SYSTEM_PROMPT)
@@ -932,7 +929,7 @@ class FilesystemMiddleware(AgentMiddleware):
request: ModelRequest,
handler: Callable[[ModelRequest], Awaitable[ModelResponse]],
) -> ModelResponse:
"""(async) Update the system prompt and filter tools based on backend capabilities.
"""(async) backend capability에 따라 system prompt/도구 목록을 갱신합니다.
Args:
request: The model request being processed.
@@ -979,7 +976,7 @@ class FilesystemMiddleware(AgentMiddleware):
message: ToolMessage,
resolved_backend: BackendProtocol,
) -> tuple[ToolMessage, dict[str, FileData] | None]:
"""Process a large ToolMessage by evicting its content to filesystem.
"""큰 ToolMessage의 콘텐츠를 파일 시스템으로 축출(evict)하여 처리합니다.
Args:
message: The ToolMessage with large content to evict.
@@ -999,12 +996,12 @@ class FilesystemMiddleware(AgentMiddleware):
uncommon in tool results. For simplicity, all content is stringified and evicted.
The model can recover by reading the offloaded file from the backend.
"""
# Early exit if eviction not configured
# 축출 설정이 없으면 조기 종료
if not self.tool_token_limit_before_evict:
return message, None
# Convert content to string once for both size check and eviction
# Special case: single text block - extract text directly for readability
# 크기 체크와 축출을 위해 콘텐츠를 한 번만 문자열로 변환합니다.
# 특수 케이스: 단일 텍스트 블록이면 가독성을 위해 텍스트만 추출합니다.
if (
isinstance(message.content, list)
and len(message.content) == 1
@@ -1016,23 +1013,23 @@ class FilesystemMiddleware(AgentMiddleware):
elif isinstance(message.content, str):
content_str = message.content
else:
# Multiple blocks or non-text content - stringify entire structure
# 여러 블록 또는 텍스트가 아닌 콘텐츠: 전체 구조를 문자열로 변환
content_str = str(message.content)
# Check if content exceeds eviction threshold
# Using 4 chars per token as a conservative approximation (actual ratio varies by content)
# This errs on the high side to avoid premature eviction of content that might fit
# 콘텐츠가 축출 임계치를 초과하는지 확인
# token당 4 chars로 보수적으로 추정합니다(실제 비율은 콘텐츠에 따라 달라짐).
# 실제로는 들어갈 수 있는 콘텐츠를 너무 일찍 축출하지 않도록 “높게” 잡는 쪽으로 동작합니다.
if len(content_str) <= 4 * self.tool_token_limit_before_evict:
return message, None
# Write content to filesystem
# 콘텐츠를 파일 시스템에 기록
sanitized_id = sanitize_tool_call_id(message.tool_call_id)
file_path = f"/large_tool_results/{sanitized_id}"
result = resolved_backend.write(file_path, content_str)
if result.error:
return message, None
# Create truncated preview for the replacement message
# 대체 메시지에 넣을 미리보기(트렁케이트) 생성
content_sample = format_content_with_line_numbers([line[:1000] for line in content_str.splitlines()[:10]], start_line=1)
replacement_text = TOO_LARGE_TOOL_MSG.format(
tool_call_id=message.tool_call_id,
@@ -1040,7 +1037,7 @@ class FilesystemMiddleware(AgentMiddleware):
content_sample=content_sample,
)
# Always return as plain string after eviction
# 축출 후에는 항상 plain string ToolMessage로 반환
processed_message = ToolMessage(
content=replacement_text,
tool_call_id=message.tool_call_id,
@@ -1048,7 +1045,7 @@ class FilesystemMiddleware(AgentMiddleware):
return processed_message, result.files_update
def _intercept_large_tool_result(self, tool_result: ToolMessage | Command, runtime: ToolRuntime) -> ToolMessage | Command:
"""Intercept and process large tool results before they're added to state.
"""state에 추가되기 전에 큰 tool result를 가로채서 처리합니다.
Args:
tool_result: The tool result to potentially evict (ToolMessage or Command).
@@ -1108,7 +1105,7 @@ class FilesystemMiddleware(AgentMiddleware):
request: ToolCallRequest,
handler: Callable[[ToolCallRequest], ToolMessage | Command],
) -> ToolMessage | Command:
"""Check the size of the tool call result and evict to filesystem if too large.
"""도구 호출(tool call) 결과 크기를 확인하고, 너무 크면 파일 시스템으로 축출(evict)합니다.
Args:
request: The tool call request being processed.
@@ -1128,7 +1125,7 @@ class FilesystemMiddleware(AgentMiddleware):
request: ToolCallRequest,
handler: Callable[[ToolCallRequest], Awaitable[ToolMessage | Command]],
) -> ToolMessage | Command:
"""(async)Check the size of the tool call result and evict to filesystem if too large.
"""(async) tool call 결과 크기를 확인하고, 너무 크면 파일 시스템으로 축출(evict)합니다.
Args:
request: The tool call request being processed.
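위 hunk들이 번역하는 축출(eviction) 로직의 핵심은 "token당 4 chars" 보수적 근사로 임계치를 판정하고, 축출 시 첫 10줄(각 줄 최대 1000자) 미리보기를 남기는 것입니다. deepagents 임포트 없이 이 동작만 떼어낸 최소 스케치는 다음과 같습니다. `should_evict`, `make_preview`라는 이름과 줄 번호 포맷은 설명을 위한 가정이며, 실제 구현(`format_content_with_line_numbers` 등)과 동일하지 않습니다.

```python
def should_evict(content: str, token_limit: int) -> bool:
    """token당 4 chars 근사치로, 콘텐츠가 축출 임계치를 초과하는지 판정합니다.
    실제 비율은 콘텐츠에 따라 다르므로 '높게' 잡아 조기 축출을 피합니다."""
    return len(content) > 4 * token_limit


def make_preview(content: str, max_lines: int = 10, max_chars: int = 1000) -> str:
    """대체 메시지에 넣을 미리보기: 첫 max_lines줄, 각 줄 max_chars자까지."""
    lines = content.splitlines()[:max_lines]
    # 줄 번호를 붙여 LLM이 원본 파일을 다시 읽을 때 위치를 가늠할 수 있게 합니다.
    return "\n".join(f"{i}\t{line[:max_chars]}" for i, line in enumerate(lines, start=1))


# 약 29만 자(≈ 7만 token 이상)짜리 큰 tool 결과를 흉내 낸 예시
big_result = "\n".join(f"row {i}: " + "x" * 50 for i in range(5000))
if should_evict(big_result, token_limit=20000):
    preview = make_preview(big_result)
```

20000 token 제한이면 80000자(4 × 20000)를 넘는 순간 축출 대상이 되고, 대체 ToolMessage에는 `preview`만 남습니다.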
@@ -1,23 +1,22 @@
"""Middleware for loading agent memory/context from AGENTS.md files.
"""AGENTS.md 파일로부터 에이전트 메모리/컨텍스트를 로드하는 미들웨어입니다.
This module implements support for the AGENTS.md specification (https://agents.md/),
loading memory/context from configurable sources and injecting into the system prompt.
이 모듈은 AGENTS.md 사양(https://agents.md/)을 지원하여, 설정된 소스에서 메모리/컨텍스트를
읽어들인 다음 system prompt에 주입(inject)합니다.
## Overview
## 개요
AGENTS.md files provide project-specific context and instructions to help AI agents
work effectively. Unlike skills (which are on-demand workflows), memory is always
loaded and provides persistent context.
AGENTS.md는 프로젝트별 맥락과 지침을 제공하여 AI 에이전트가 더 안정적으로 작업하도록 돕습니다.
스킬(skills)이 “필요할 때만(on-demand) 호출되는 워크플로”라면, 메모리(memory)는 항상 로드되어
지속적인(persistent) 컨텍스트를 제공합니다.
## Usage
## 사용 예시
```python
from deepagents import MemoryMiddleware
from deepagents.backends.filesystem import FilesystemBackend
# Security: FilesystemBackend allows reading/writing from the entire filesystem.
# Either ensure the agent is running within a sandbox OR add human-in-the-loop (HIL)
# approval to file operations.
# 보안 주의: FilesystemBackend는 전체 파일시스템에 대한 읽기/쓰기 권한을 가질 수 있습니다.
# 에이전트가 샌드박스에서 실행되도록 하거나, 파일 작업에 human-in-the-loop(HIL) 승인을 붙이세요.
backend = FilesystemBackend(root_dir="/")
middleware = MemoryMiddleware(
@@ -31,20 +30,18 @@ middleware = MemoryMiddleware(
agent = create_deep_agent(middleware=[middleware])
```
## Memory Sources
## 메모리 소스(Memory Sources)
Sources are simply paths to AGENTS.md files that are loaded in order and combined.
Multiple sources are concatenated in order, with all content included.
Later sources appear after earlier ones in the combined prompt.
소스는 로드할 AGENTS.md 파일 경로 리스트입니다. 소스는 지정된 순서대로 읽혀 하나로 결합되며,
뒤에 오는 소스가 결합된 프롬프트의 뒤쪽에 붙습니다.
## File Format
## 파일 형식
AGENTS.md files are standard Markdown with no required structure.
Common sections include:
- Project overview
- Build/test commands
- Code style guidelines
- Architecture notes
AGENTS.md는 일반적인 Markdown이며 필수 구조는 없습니다. 관례적으로는 아래 섹션이 자주 포함됩니다.
- 프로젝트 개요
- 빌드/테스트 명령
- 코드 스타일 가이드
- 아키텍처 노트
"""
from __future__ import annotations
@@ -73,18 +70,18 @@ logger = logging.getLogger(__name__)
class MemoryState(AgentState):
"""State schema for MemoryMiddleware.
"""MemoryMiddleware의 state 스키마입니다.
Attributes:
memory_contents: Dict mapping source paths to their loaded content.
Marked as private so it's not included in the final agent state.
memory_contents: 소스 경로 → 로드된 콘텐츠 매핑.
최종 agent state에 포함되지 않도록 private로 표시됩니다.
"""
memory_contents: NotRequired[Annotated[dict[str, str], PrivateStateAttr]]
class MemoryStateUpdate(TypedDict):
"""State update for MemoryMiddleware."""
"""MemoryMiddleware의 state 업데이트 타입입니다."""
memory_contents: dict[str, str]
@@ -152,14 +149,14 @@ MEMORY_SYSTEM_PROMPT = """<agent_memory>
class MemoryMiddleware(AgentMiddleware):
"""Middleware for loading agent memory from AGENTS.md files.
"""AGENTS.md 파일에서 에이전트 메모리를 로드하는 미들웨어입니다.
Loads memory content from configured sources and injects into the system prompt.
Supports multiple sources that are combined together.
설정된 소스에서 메모리를 로드한 뒤 system prompt에 주입합니다.
여러 소스를 결합하여 한 번에 주입하는 구성을 지원합니다.
Args:
backend: Backend instance or factory function for file operations.
sources: List of MemorySource configurations specifying paths and names.
backend: 파일 작업을 위한 backend 인스턴스 또는 팩토리 함수.
sources: 로드할 AGENTS.md 파일 경로 리스트.
"""
state_schema = MemoryState
@@ -170,31 +167,21 @@ class MemoryMiddleware(AgentMiddleware):
backend: BACKEND_TYPES,
sources: list[str],
) -> None:
"""Initialize the memory middleware.
"""메모리 미들웨어를 초기화합니다.
Args:
backend: Backend instance or factory function that takes runtime
and returns a backend. Use a factory for StateBackend.
sources: List of memory file paths to load (e.g., ["~/.deepagents/AGENTS.md",
"./.deepagents/AGENTS.md"]). Display names are automatically derived
from the paths. Sources are loaded in order.
backend: backend 인스턴스 또는 (runtime을 받아 backend를 만드는) 팩토리 함수.
`StateBackend`를 사용하려면 팩토리 형태로 전달해야 합니다.
sources: 로드할 메모리 파일 경로 리스트(예: `["~/.deepagents/AGENTS.md", "./.deepagents/AGENTS.md"]`).
표시 이름은 경로로부터 자동 유도됩니다. 소스는 지정 순서대로 로드됩니다.
"""
self._backend = backend
self.sources = sources
def _get_backend(self, state: MemoryState, runtime: Runtime, config: RunnableConfig) -> BackendProtocol:
"""Resolve backend from instance or factory.
Args:
state: Current agent state.
runtime: Runtime context for factory functions.
config: Runnable config to pass to backend factory.
Returns:
Resolved backend instance.
"""
"""Backend를 인스턴스 또는 팩토리로부터 해석(resolve)합니다."""
if callable(self._backend):
# Construct an artificial tool runtime to resolve backend factory
# backend 팩토리를 호출하기 위한 ToolRuntime을 구성합니다.
tool_runtime = ToolRuntime(
state=state,
context=runtime.context,
@@ -207,14 +194,7 @@ class MemoryMiddleware(AgentMiddleware):
return self._backend
def _format_agent_memory(self, contents: dict[str, str]) -> str:
"""Format memory with locations and contents paired together.
Args:
contents: Dict mapping source paths to content.
Returns:
Formatted string with location+content pairs wrapped in <agent_memory> tags.
"""
"""메모리 소스 경로와 콘텐츠를 짝지어 포맷팅합니다."""
if not contents:
return MEMORY_SYSTEM_PROMPT.format(agent_memory="(No memory loaded)")
@@ -234,27 +214,19 @@ class MemoryMiddleware(AgentMiddleware):
backend: BackendProtocol,
path: str,
) -> str | None:
"""Load memory content from a backend path.
Args:
backend: Backend to load from.
path: Path to the AGENTS.md file.
Returns:
File content if found, None otherwise.
"""
"""backend에서 특정 경로의 메모리(AGENTS.md) 콘텐츠를 로드합니다."""
results = await backend.adownload_files([path])
# Should get exactly one response for one path
# 단일 path에 대해 단일 응답이 와야 합니다.
if len(results) != 1:
raise AssertionError(f"Expected 1 response for path {path}, got {len(results)}")
response = results[0]
if response.error is not None:
# For now, memory files are treated as optional. file_not_found is expected
# and we skip silently to allow graceful degradation.
# 현재는 메모리 파일을 optional로 취급합니다.
# file_not_found는 정상적으로 발생할 수 있으므로 조용히 스킵하여 점진적 저하(graceful degradation)를 허용합니다.
if response.error == "file_not_found":
return None
# Other errors should be raised
# 그 외 오류는 예외로 올립니다.
raise ValueError(f"Failed to download {path}: {response.error}")
if response.content is not None:
@@ -267,7 +239,7 @@ class MemoryMiddleware(AgentMiddleware):
backend: BackendProtocol,
path: str,
) -> str | None:
"""Load memory content from a backend path synchronously.
"""backend에서 특정 경로의 메모리(AGENTS.md) 콘텐츠를 동기로 로드합니다.
Args:
backend: Backend to load from.
@@ -277,17 +249,17 @@ class MemoryMiddleware(AgentMiddleware):
File content if found, None otherwise.
"""
results = backend.download_files([path])
# Should get exactly one response for one path
# 단일 path에 대해 단일 응답이 와야 합니다.
if len(results) != 1:
raise AssertionError(f"Expected 1 response for path {path}, got {len(results)}")
response = results[0]
if response.error is not None:
# For now, memory files are treated as optional. file_not_found is expected
# and we skip silently to allow graceful degradation.
# 현재는 메모리 파일을 optional로 취급합니다.
# file_not_found는 정상적으로 발생할 수 있으므로 조용히 스킵하여 점진적 저하(graceful degradation)를 허용합니다.
if response.error == "file_not_found":
return None
# Other errors should be raised
# 그 외 오류는 예외로 올립니다.
raise ValueError(f"Failed to download {path}: {response.error}")
if response.content is not None:
@@ -296,7 +268,7 @@ class MemoryMiddleware(AgentMiddleware):
return None
def before_agent(self, state: MemoryState, runtime: Runtime, config: RunnableConfig) -> MemoryStateUpdate | None:
"""Load memory content before agent execution (synchronous).
"""에이전트 실행 전에 메모리 콘텐츠를 로드합니다(동기).
Loads memory from all configured sources and stores in state.
Only loads if not already present in state.
@@ -309,7 +281,7 @@ class MemoryMiddleware(AgentMiddleware):
Returns:
State update with memory_contents populated.
"""
# Skip if already loaded
# 이미 로드되어 있으면 스킵
if "memory_contents" in state:
return None
@@ -325,7 +297,7 @@ class MemoryMiddleware(AgentMiddleware):
return MemoryStateUpdate(memory_contents=contents)
async def abefore_agent(self, state: MemoryState, runtime: Runtime, config: RunnableConfig) -> MemoryStateUpdate | None:
"""Load memory content before agent execution.
"""에이전트 실행 전에 메모리 콘텐츠를 로드합니다(async).
Loads memory from all configured sources and stores in state.
Only loads if not already present in state.
@@ -338,7 +310,7 @@ class MemoryMiddleware(AgentMiddleware):
Returns:
State update with memory_contents populated.
"""
# Skip if already loaded
# 이미 로드되어 있으면 스킵
if "memory_contents" in state:
return None
@@ -354,7 +326,7 @@ class MemoryMiddleware(AgentMiddleware):
return MemoryStateUpdate(memory_contents=contents)
def modify_request(self, request: ModelRequest) -> ModelRequest:
"""Inject memory content into the system prompt.
"""메모리 콘텐츠를 system prompt에 주입합니다.
Args:
request: Model request to modify.
@@ -377,7 +349,7 @@ class MemoryMiddleware(AgentMiddleware):
request: ModelRequest,
handler: Callable[[ModelRequest], ModelResponse],
) -> ModelResponse:
"""Wrap model call to inject memory into system prompt.
"""System prompt에 메모리를 주입한 뒤 model call을 수행하도록 감쌉니다.
Args:
request: Model request being processed.
@@ -394,7 +366,7 @@ class MemoryMiddleware(AgentMiddleware):
request: ModelRequest,
handler: Callable[[ModelRequest], Awaitable[ModelResponse]],
) -> ModelResponse:
"""Async wrap model call to inject memory into system prompt.
"""(async) system prompt에 메모리를 주입한 뒤 model call을 수행하도록 감쌉니다.
Args:
request: Model request being processed.
@@ -1,4 +1,4 @@
"""Middleware to patch dangling tool calls in the messages history."""
"""메시지 히스토리의 끊긴(tool message가 없는) tool call을 보정하는 미들웨어입니다."""
from typing import Any
@@ -9,16 +9,16 @@ from langgraph.types import Overwrite
class PatchToolCallsMiddleware(AgentMiddleware):
"""Middleware to patch dangling tool calls in the messages history."""
"""메시지 히스토리의 끊긴(tool message가 없는) tool call을 보정합니다."""
def before_agent(self, state: AgentState, runtime: Runtime[Any]) -> dict[str, Any] | None: # noqa: ARG002
"""Before the agent runs, handle dangling tool calls from any AIMessage."""
"""에이전트 실행 전에, AIMessage에 남은 끊긴 tool call을 처리합니다."""
messages = state["messages"]
if not messages or len(messages) == 0:
return None
patched_messages = []
# Iterate over the messages and add any dangling tool calls
# 메시지를 순회하면서 끊긴 tool call이 있으면 ToolMessage를 보완합니다.
for i, msg in enumerate(messages):
patched_messages.append(msg)
if msg.type == "ai" and msg.tool_calls:
@@ -28,7 +28,7 @@ class PatchToolCallsMiddleware(AgentMiddleware):
None,
)
if corresponding_tool_msg is None:
# We have a dangling tool call which needs a ToolMessage
# ToolMessage가 누락된 끊긴 tool call이므로 보정합니다.
tool_msg = (
f"Tool call {tool_call['name']} with id {tool_call['id']} was "
"cancelled - another message came in before it could be completed."
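PatchToolCallsMiddleware가 하는 "응답 ToolMessage가 없는 tool call 뒤에 취소 안내 메시지를 보충"하는 보정은, 메시지를 단순 dict로 가정하면 다음처럼 스케치할 수 있습니다. dict 스키마(`type`, `tool_calls`, `tool_call_id`)는 LangChain 메시지 객체를 단순화한 가정이며, 실제 미들웨어는 `AIMessage`/`ToolMessage`를 다룹니다.

```python
def patch_dangling_tool_calls(messages: list[dict]) -> list[dict]:
    """tool 응답이 없는 tool call 바로 뒤에 취소 안내 ToolMessage(dict)를 끼워 넣습니다."""
    # 이미 응답된 tool_call_id 집합을 먼저 수집
    answered = {m["tool_call_id"] for m in messages if m.get("type") == "tool"}
    patched = []
    for msg in messages:
        patched.append(msg)
        if msg.get("type") == "ai":
            for call in msg.get("tool_calls", []):
                if call["id"] not in answered:
                    # 끊긴 tool call: 취소 안내 메시지로 보정
                    patched.append({
                        "type": "tool",
                        "tool_call_id": call["id"],
                        "content": f"Tool call {call['name']} with id {call['id']} was cancelled"
                                   " - another message came in before it could be completed.",
                    })
    return patched


messages = [{"type": "ai", "tool_calls": [{"id": "call_1", "name": "read_file"}]}]
patched = patch_dangling_tool_calls(messages)  # patched[1]이 보정된 ToolMessage 역할
```

이렇게 보정하지 않으면 "모든 tool call에는 tool 응답이 따라야 한다"는 provider 제약에 걸려 다음 모델 호출이 실패할 수 있습니다.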
@@ -1,34 +1,33 @@
"""Skills middleware for loading and exposing agent skills to the system prompt.
"""에이전트 스킬(skills)을 로드하고 system prompt에 노출하는 미들웨어입니다.
This module implements Anthropic's agent skills pattern with progressive disclosure,
loading skills from backend storage via configurable sources.
이 모듈은 Anthropic agent skills 패턴(점진적 공개, progressive disclosure)을 구현하며,
backend 스토리지로부터 스킬을 로드하기 위해 “소스(sources)”를 설정할 수 있게 합니다.
## Architecture
## 아키텍처
Skills are loaded from one or more **sources** - paths in a backend where skills are
organized. Sources are loaded in order, with later sources overriding earlier ones
when skills have the same name (last one wins). This enables layering: base -> user
-> project -> team skills.
스킬은 backend 내에서 스킬들이 정리되어 있는 경로(prefix)인 **sources**로부터 로드됩니다.
sources는 지정된 순서대로 로드되며, 동일한 스킬 이름이 충돌할 경우 뒤에 오는 source가 우선합니다
(last one wins). 이를 통해 `base -> user -> project -> team` 같은 레이어링이 가능합니다.
The middleware uses backend APIs exclusively (no direct filesystem access), making it
portable across different storage backends (filesystem, state, remote storage, etc.).
이 미들웨어는 backend API만 사용하며(직접 파일시스템 접근 없음), 따라서 filesystem/state/remote storage 등
다양한 backend 구현에 이식(portable) 가능합니다.
For StateBackend (ephemeral/in-memory), use a factory function:
StateBackend(ephemeral/in-memory)를 사용할 때는 팩토리 함수를 전달하세요.
```python
SkillsMiddleware(backend=lambda rt: StateBackend(rt), ...)
```
## Skill Structure
## 스킬 디렉토리 구조
Each skill is a directory containing a SKILL.md file with YAML frontmatter:
각 스킬은 YAML frontmatter가 포함된 `SKILL.md`를 가진 디렉토리입니다.
```
/skills/user/web-research/
├── SKILL.md # Required: YAML frontmatter + markdown instructions
└── helper.py # Optional: supporting files
├── SKILL.md # 필수: YAML frontmatter + Markdown 지침
└── helper.py # 선택: 보조 파일(스크립트/데이터 등)
```
SKILL.md format:
`SKILL.md` 형식 예시:
```markdown
---
name: web-research
@@ -43,35 +42,34 @@ license: MIT
...
```
## Skill Metadata (SkillMetadata)
## 스킬 메타데이터(SkillMetadata)
Parsed from YAML frontmatter per Agent Skills specification:
- `name`: Skill identifier (max 64 chars, lowercase alphanumeric and hyphens)
- `description`: What the skill does (max 1024 chars)
- `path`: Backend path to the SKILL.md file
YAML frontmatter에서 Agent Skills 사양에 따라 파싱되는 필드:
- `name`: 스킬 식별자(최대 64자, 소문자 영숫자+하이픈)
- `description`: 스킬 설명(최대 1024자)
- `path`: backend 내 `SKILL.md`의 경로
- Optional: `license`, `compatibility`, `metadata`, `allowed_tools`
## Sources
## 소스(Sources)
Sources are simply paths to skill directories in the backend. The source name is
derived from the last component of the path (e.g., "/skills/user/" -> "user").
source는 backend 내 “스킬 디렉토리들의 루트 경로”입니다.
source의 표시 이름은 경로의 마지막 컴포넌트로부터 유도됩니다(예: `"/skills/user/" -> "user"`).
Example sources:
```python
[
"/skills/user/",
"/skills/project/"
"/skills/project/",
]
```
## Path Conventions
## 경로 규칙
All paths use POSIX conventions (forward slashes) via `PurePosixPath`:
- Backend paths: "/skills/user/web-research/SKILL.md"
- Virtual, platform-independent
- Backends handle platform-specific conversions as needed
모든 경로는 `PurePosixPath`를 통해 POSIX 표기(슬래시 `/`)를 사용합니다.
- backend 경로 예: `"/skills/user/web-research/SKILL.md"`
- 플랫폼 독립적인 가상 경로(virtual path)
- 실제 플랫폼별 변환은 backend가 필요 시 처리합니다.
## Usage
## 사용 예시
```python
from deepagents.backends.state import StateBackend
@@ -116,7 +114,7 @@ from langgraph.runtime import Runtime
logger = logging.getLogger(__name__)
# Security: Maximum size for SKILL.md files to prevent DoS attacks (10MB)
# 보안: DoS 공격을 방지하기 위한 SKILL.md 최대 크기(10MB)
MAX_SKILL_FILE_SIZE = 10 * 1024 * 1024
# Agent Skills specification constraints (https://agentskills.io/specification)
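위 모듈 docstring이 설명하는 source 레이어링(`base -> user -> project -> team`, 같은 이름이면 last one wins)은 가정 하에 다음처럼 스케치할 수 있습니다. `merge_skills`라는 함수 이름은 설명용 가정이며, 실제 구현은 `SkillMetadata` 목록을 state에 병합하는 방식으로 동일한 우선순위를 적용합니다.

```python
def merge_skills(sources: list[list[dict]]) -> list[dict]:
    """여러 source에서 로드된 스킬 목록을 순서대로 결합합니다.
    같은 name이 충돌하면 뒤에 오는 source의 스킬이 우선합니다(last one wins)."""
    merged: dict[str, dict] = {}
    for skills in sources:  # base -> user -> project -> team 순서로 전달한다고 가정
        for skill in skills:
            merged[skill["name"]] = skill  # 같은 이름이면 덮어쓰기
    return list(merged.values())


base = [{"name": "web-research", "path": "/skills/base/web-research/SKILL.md"}]
user = [{"name": "web-research", "path": "/skills/user/web-research/SKILL.md"}]
final = merge_skills([base, user])  # user 쪽 web-research가 살아남음
```

dict의 삽입 순서가 유지되므로, 덮어쓰더라도 스킬 목록의 표시 순서는 처음 등장한 순서를 따릅니다.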
@@ -125,46 +123,46 @@ MAX_SKILL_DESCRIPTION_LENGTH = 1024
class SkillMetadata(TypedDict):
"""Metadata for a skill per Agent Skills specification (https://agentskills.io/specification)."""
"""Agent Skills 사양(https://agentskills.io/specification)에 따른 스킬 메타데이터입니다."""
name: str
"""Skill identifier (max 64 chars, lowercase alphanumeric and hyphens)."""
"""스킬 식별자(최대 64자, 소문자 영숫자 및 하이픈)."""
description: str
"""What the skill does (max 1024 chars)."""
"""스킬 설명(최대 1024자)."""
path: str
"""Path to the SKILL.md file."""
"""`SKILL.md` 파일의 경로."""
license: str | None
"""License name or reference to bundled license file."""
"""라이선스 이름 또는 번들된 라이선스 파일에 대한 참조."""
compatibility: str | None
"""Environment requirements (max 500 chars)."""
"""환경 요구사항(최대 500자)."""
metadata: dict[str, str]
"""Arbitrary key-value mapping for additional metadata."""
"""추가 메타데이터를 위한 임의의 key-value 맵."""
allowed_tools: list[str]
"""Space-delimited list of pre-approved tools. (Experimental)"""
"""사전 승인된 도구 목록(공백 구분). (실험적)"""
class SkillsState(AgentState):
"""State for the skills middleware."""
"""SkillsMiddleware의 state 스키마입니다."""
skills_metadata: NotRequired[Annotated[list[SkillMetadata], PrivateStateAttr]]
"""List of loaded skill metadata from all configured sources."""
"""설정된 모든 source에서 로드된 스킬 메타데이터 목록."""
class SkillsStateUpdate(TypedDict):
"""State update for the skills middleware."""
"""SkillsMiddleware의 state 업데이트 타입입니다."""
skills_metadata: list[SkillMetadata]
"""List of loaded skill metadata to merge into state."""
"""state에 병합할 스킬 메타데이터 목록."""
def _validate_skill_name(name: str, directory_name: str) -> tuple[bool, str]:
"""Validate skill name per Agent Skills specification.
"""Agent Skills 사양에 따라 스킬 이름을 검증합니다.
Requirements per spec:
- Max 64 characters
@@ -197,7 +195,7 @@ def _parse_skill_metadata(
skill_path: str,
directory_name: str,
) -> SkillMetadata | None:
"""Parse YAML frontmatter from SKILL.md content.
"""SKILL.md에서 YAML frontmatter를 파싱합니다.
Extracts metadata per Agent Skills specification from YAML frontmatter delimited
by --- markers at the start of the content.
@@ -280,7 +278,7 @@ def _parse_skill_metadata(
def _list_skills(backend: BackendProtocol, source_path: str) -> list[SkillMetadata]:
"""List all skills from a backend source.
"""하나의 source(backend 경로)에서 모든 스킬을 나열합니다.
Scans backend for subdirectories containing SKILL.md files, downloads their content,
parses YAML frontmatter, and returns skill metadata.
@@ -302,7 +300,7 @@ def _list_skills(backend: BackendProtocol, source_path: str) -> list[SkillMetada
skills: list[SkillMetadata] = []
items = backend.ls_info(base_path)
# Find all skill directories (directories containing SKILL.md)
# 스킬 디렉토리 목록(SKILL.md를 담고 있을 수 있는 하위 디렉토리)을 수집
skill_dirs = []
for item in items:
if not item.get("is_dir"):
@@ -312,10 +310,10 @@ def _list_skills(backend: BackendProtocol, source_path: str) -> list[SkillMetada
if not skill_dirs:
return []
# For each skill directory, check if SKILL.md exists and download it
# 각 스킬 디렉토리마다 SKILL.md 존재 여부를 확인하고 다운로드합니다.
skill_md_paths = []
for skill_dir_path in skill_dirs:
# Construct SKILL.md path using PurePosixPath for safe, standardized path operations
# 안전하고 표준화된 경로 연산을 위해 PurePosixPath로 SKILL.md 경로를 구성합니다.
skill_dir = PurePosixPath(skill_dir_path)
skill_md_path = str(skill_dir / "SKILL.md")
skill_md_paths.append((skill_dir_path, skill_md_path))
@@ -323,10 +321,10 @@ def _list_skills(backend: BackendProtocol, source_path: str) -> list[SkillMetada
paths_to_download = [skill_md_path for _, skill_md_path in skill_md_paths]
responses = backend.download_files(paths_to_download)
# Parse each downloaded SKILL.md
# 다운로드된 각 SKILL.md를 파싱합니다.
for (skill_dir_path, skill_md_path), response in zip(skill_md_paths, responses, strict=True):
if response.error:
# Skill doesn't have a SKILL.md, skip it
# SKILL.md가 없는 디렉토리는 스킵
continue
if response.content is None:
@@ -339,10 +337,10 @@ def _list_skills(backend: BackendProtocol, source_path: str) -> list[SkillMetada
logger.warning("Error decoding %s: %s", skill_md_path, e)
continue
# Extract directory name from path using PurePosixPath
# PurePosixPath로 디렉토리 이름을 추출합니다.
directory_name = PurePosixPath(skill_dir_path).name
# Parse metadata
# 메타데이터 파싱
skill_metadata = _parse_skill_metadata(
content=content,
skill_path=skill_md_path,
@@ -355,7 +353,7 @@ def _list_skills(backend: BackendProtocol, source_path: str) -> list[SkillMetada
async def _alist_skills(backend: BackendProtocol, source_path: str) -> list[SkillMetadata]:
"""List all skills from a backend source (async version).
"""하나의 source(backend 경로)에서 모든 스킬을 나열합니다(async 버전).
Scans backend for subdirectories containing SKILL.md files, downloads their content,
parses YAML frontmatter, and returns skill metadata.
@@ -377,7 +375,7 @@ async def _alist_skills(backend: BackendProtocol, source_path: str) -> list[Skil
skills: list[SkillMetadata] = []
items = await backend.als_info(base_path)
# Find all skill directories (directories containing SKILL.md)
# 스킬 디렉토리 목록(SKILL.md를 담고 있을 수 있는 하위 디렉토리)을 수집
skill_dirs = []
for item in items:
if not item.get("is_dir"):
@@ -387,10 +385,10 @@ async def _alist_skills(backend: BackendProtocol, source_path: str) -> list[Skil
if not skill_dirs:
return []
# For each skill directory, check if SKILL.md exists and download it
# 각 스킬 디렉토리마다 SKILL.md 존재 여부를 확인하고 다운로드합니다.
skill_md_paths = []
for skill_dir_path in skill_dirs:
# Construct SKILL.md path using PurePosixPath for safe, standardized path operations
# 안전하고 표준화된 경로 연산을 위해 PurePosixPath로 SKILL.md 경로를 구성합니다.
skill_dir = PurePosixPath(skill_dir_path)
skill_md_path = str(skill_dir / "SKILL.md")
skill_md_paths.append((skill_dir_path, skill_md_path))
@@ -398,10 +396,10 @@ async def _alist_skills(backend: BackendProtocol, source_path: str) -> list[Skil
paths_to_download = [skill_md_path for _, skill_md_path in skill_md_paths]
responses = await backend.adownload_files(paths_to_download)
# Parse each downloaded SKILL.md
# 다운로드된 각 SKILL.md를 파싱합니다.
for (skill_dir_path, skill_md_path), response in zip(skill_md_paths, responses, strict=True):
if response.error:
# Skill doesn't have a SKILL.md, skip it
# SKILL.md가 없는 디렉토리는 스킵
continue
if response.content is None:
@@ -414,10 +412,10 @@ async def _alist_skills(backend: BackendProtocol, source_path: str) -> list[Skil
logger.warning("Error decoding %s: %s", skill_md_path, e)
continue
# Extract directory name from path using PurePosixPath
# PurePosixPath로 디렉토리 이름을 추출합니다.
directory_name = PurePosixPath(skill_dir_path).name
# Parse metadata
# 메타데이터 파싱
skill_metadata = _parse_skill_metadata(
content=content,
skill_path=skill_md_path,
@@ -472,7 +470,7 @@ Remember: Skills make you more capable and consistent. When in doubt, check if a
class SkillsMiddleware(AgentMiddleware):
"""Middleware for loading and exposing agent skills to the system prompt.
"""에이전트 스킬을 로드하고 system prompt에 노출하는 미들웨어입니다.
Loads skills from backend sources and injects them into the system prompt
using progressive disclosure (metadata first, full content on demand).
@@ -501,7 +499,7 @@ class SkillsMiddleware(AgentMiddleware):
state_schema = SkillsState
def __init__(self, *, backend: BACKEND_TYPES, sources: list[str]) -> None:
"""Initialize the skills middleware.
"""스킬 미들웨어를 초기화합니다.
Args:
backend: Backend instance or factory function that takes runtime and returns a backend.
@@ -513,7 +511,7 @@ class SkillsMiddleware(AgentMiddleware):
self.system_prompt_template = SKILLS_SYSTEM_PROMPT
def _get_backend(self, state: SkillsState, runtime: Runtime, config: RunnableConfig) -> BackendProtocol:
"""Resolve backend from instance or factory.
"""백엔드 인스턴스/팩토리로부터 실제 백엔드를 해석(resolve)합니다.
Args:
state: Current agent state.
@@ -524,7 +522,7 @@ class SkillsMiddleware(AgentMiddleware):
Resolved backend instance
"""
if callable(self._backend):
# Construct an artificial tool runtime to resolve backend factory
# backend 팩토리를 호출하기 위한 ToolRuntime을 구성합니다.
tool_runtime = ToolRuntime(
state=state,
context=runtime.context,
@@ -541,7 +539,7 @@ class SkillsMiddleware(AgentMiddleware):
return self._backend
def _format_skills_locations(self) -> str:
"""Format skills locations for display in system prompt."""
"""System prompt에 표시할 skills location 섹션을 포맷팅합니다."""
locations = []
for i, source_path in enumerate(self.sources):
name = PurePosixPath(source_path.rstrip("/")).name.capitalize()
@@ -550,7 +548,7 @@ class SkillsMiddleware(AgentMiddleware):
return "\n".join(locations)
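
위 `_format_skills_locations`의 동작을 독립 함수로 옮기면 대략 다음과 같습니다. 실제 항목 포맷 문자열은 diff에서 생략되어 있으므로, 아래의 `- 이름: 경로` 형태는 예시를 위한 가정입니다.

```python
from pathlib import PurePosixPath

def format_skills_locations(sources: list[str]) -> str:
    """source 경로 목록을 system prompt에 넣을 텍스트로 변환하는 스케치입니다."""
    locations = []
    for source_path in sources:
        # 마지막 디렉토리 이름의 첫 글자를 대문자로 만듭니다(예: "skills" -> "Skills").
        name = PurePosixPath(source_path.rstrip("/")).name.capitalize()
        # 실제 출력 포맷은 diff에 보이지 않으므로 아래 형태는 가정입니다.
        locations.append(f"- {name}: {source_path}")
    return "\n".join(locations)
```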
def _format_skills_list(self, skills: list[SkillMetadata]) -> str:
"""Format skills metadata for display in system prompt."""
"""System prompt에 표시할 skills 목록을 포맷팅합니다."""
if not skills:
paths = [f"{source_path}" for source_path in self.sources]
return f"(No skills available yet. You can create skills in {' or '.join(paths)})"
@@ -563,7 +561,7 @@ class SkillsMiddleware(AgentMiddleware):
return "\n".join(lines)
def modify_request(self, request: ModelRequest) -> ModelRequest:
"""Inject skills documentation into a model request's system prompt.
"""모델 요청의 system prompt에 skills 섹션을 주입합니다.
Args:
request: Model request to modify
@@ -588,7 +586,7 @@ class SkillsMiddleware(AgentMiddleware):
return request.override(system_prompt=system_prompt)
def before_agent(self, state: SkillsState, runtime: Runtime, config: RunnableConfig) -> SkillsStateUpdate | None:
"""Load skills metadata before agent execution (synchronous).
"""에이전트 실행 전에 스킬 메타데이터를 로드합니다(동기).
Runs before each agent interaction to discover available skills from all
configured sources. Re-loads on every call to capture any changes.
@@ -604,16 +602,16 @@ class SkillsMiddleware(AgentMiddleware):
Returns:
State update with skills_metadata populated, or None if already present
"""
# Skip if skills_metadata is already present in state (even if empty)
# state에 skills_metadata가 이미 있으면(비어 있어도) 스킵
if "skills_metadata" in state:
return None
# Resolve backend (supports both direct instances and factory functions)
# backend 해석(인스턴스/팩토리 모두 지원)
backend = self._get_backend(state, runtime, config)
all_skills: dict[str, SkillMetadata] = {}
# Load skills from each source in order
# Later sources override earlier ones (last one wins)
# source를 순서대로 로드합니다.
# 뒤에 오는 source가 앞의 source를 덮어씁니다(last one wins).
for source_path in self.sources:
source_skills = _list_skills(backend, source_path)
for skill in source_skills:
@@ -623,7 +621,7 @@ class SkillsMiddleware(AgentMiddleware):
return SkillsStateUpdate(skills_metadata=skills)
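
위 로딩 순서의 핵심인 "last one wins" 병합은 다음처럼 요약할 수 있습니다. 간단화를 위해 `SkillMetadata` 대신 `name` 키를 가진 dict를 쓴다고 가정한 스케치입니다.

```python
def merge_skills(source_lists: list[list[dict]]) -> dict[str, dict]:
    """여러 source의 스킬 목록을 순서대로 병합합니다. 같은 이름이면 뒤의 source가 이깁니다."""
    merged: dict[str, dict] = {}
    for skills in source_lists:  # self.sources에 지정된 순서대로 순회
        for skill in skills:
            merged[skill["name"]] = skill  # 같은 key면 덮어쓰기 -> last one wins
    return merged
```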
async def abefore_agent(self, state: SkillsState, runtime: Runtime, config: RunnableConfig) -> SkillsStateUpdate | None:
"""Load skills metadata before agent execution (async).
"""에이전트 실행 전에 스킬 메타데이터를 로드합니다(async).
Runs before each agent interaction to discover available skills from all
configured sources. Re-loads on every call to capture any changes.
@@ -639,16 +637,16 @@ class SkillsMiddleware(AgentMiddleware):
Returns:
State update with skills_metadata populated, or None if already present
"""
# Skip if skills_metadata is already present in state (even if empty)
# state에 skills_metadata가 이미 있으면(비어 있어도) 스킵
if "skills_metadata" in state:
return None
# Resolve backend (supports both direct instances and factory functions)
# backend 해석(인스턴스/팩토리 모두 지원)
backend = self._get_backend(state, runtime, config)
all_skills: dict[str, SkillMetadata] = {}
# Load skills from each source in order
# Later sources override earlier ones (last one wins)
# source를 순서대로 로드합니다.
# 뒤에 오는 source가 앞의 source를 덮어씁니다(last one wins).
for source_path in self.sources:
source_skills = await _alist_skills(backend, source_path)
for skill in source_skills:
@@ -662,7 +660,7 @@ class SkillsMiddleware(AgentMiddleware):
request: ModelRequest,
handler: Callable[[ModelRequest], ModelResponse],
) -> ModelResponse:
"""Inject skills documentation into the system prompt.
"""System prompt에 skills 섹션을 주입한 뒤 model call을 수행하도록 감쌉니다.
Args:
request: Model request being processed
@@ -679,7 +677,7 @@ class SkillsMiddleware(AgentMiddleware):
request: ModelRequest,
handler: Callable[[ModelRequest], Awaitable[ModelResponse]],
) -> ModelResponse:
"""Inject skills documentation into the system prompt (async version).
"""(async) System prompt에 skills 섹션을 주입한 뒤 model call을 수행하도록 감쌉니다.
Args:
request: Model request being processed

View File

@@ -1,4 +1,4 @@
"""Middleware for providing subagents to an agent via a `task` tool."""
"""`task` 도구를 통해 서브에이전트를 제공하는 미들웨어입니다."""
from collections.abc import Awaitable, Callable, Sequence
from typing import Any, NotRequired, TypedDict, cast
@@ -15,57 +15,54 @@ from langgraph.types import Command
class SubAgent(TypedDict):
"""Specification for an agent.
"""서브에이전트 명세(spec)입니다.
When specifying custom agents, the `default_middleware` from `SubAgentMiddleware`
will be applied first, followed by any `middleware` specified in this spec.
To use only custom middleware without the defaults, pass `default_middleware=[]`
to `SubAgentMiddleware`.
커스텀 서브에이전트를 지정할 때, `SubAgentMiddleware`의 `default_middleware`가 먼저 적용되고,
그 다음 이 spec의 `middleware`가 뒤에 추가됩니다.
기본 미들웨어 없이 커스텀 미들웨어만 사용하려면 `SubAgentMiddleware(default_middleware=[])`로 설정하세요.
"""
name: str
"""The name of the agent."""
"""에이전트 이름."""
description: str
"""The description of the agent."""
"""에이전트 설명."""
system_prompt: str
"""The system prompt to use for the agent."""
"""서브에이전트에 사용할 system prompt."""
tools: Sequence[BaseTool | Callable | dict[str, Any]]
"""The tools to use for the agent."""
"""서브에이전트에 제공할 도구 목록."""
model: NotRequired[str | BaseChatModel]
"""The model for the agent. Defaults to `default_model`."""
"""서브에이전트 모델(기본값: `default_model`)."""
middleware: NotRequired[list[AgentMiddleware]]
"""Additional middleware to append after `default_middleware`."""
"""`default_middleware` 뒤에 추가로 붙일 미들웨어."""
interrupt_on: NotRequired[dict[str, bool | InterruptOnConfig]]
"""The tool configs to use for the agent."""
"""서브에이전트에 적용할 tool interrupt 설정."""
class CompiledSubAgent(TypedDict):
"""A pre-compiled agent spec."""
"""사전 컴파일된(pre-compiled) 서브에이전트 명세입니다."""
name: str
"""The name of the agent."""
"""에이전트 이름."""
description: str
"""The description of the agent."""
"""에이전트 설명."""
runnable: Runnable
"""The Runnable to use for the agent."""
"""서브에이전트 실행에 사용할 `Runnable`."""
DEFAULT_SUBAGENT_PROMPT = "In order to complete the objective that the user asks of you, you have access to a number of standard tools."
# State keys that are excluded when passing state to subagents and when returning
# updates from subagents.
# When returning updates:
# 1. The messages key is handled explicitly to ensure only the final message is included
# 2. The todos and structured_response keys are excluded as they do not have a defined reducer
# and no clear meaning for returning them from a subagent to the main agent.
# 서브에이전트 호출 시 전달하지 않거나, 서브에이전트 결과를 메인으로 되돌릴 때 제외하는 state 키들입니다.
# 반환 업데이트 처리 시:
# 1) `messages`는 최종 메시지만 포함되도록 별도로 처리합니다.
# 2) `todos`, `structured_response`는 reducer가 정의되어 있지 않고 메인 에이전트로 반환할 의미가 불명확하므로 제외합니다.
_EXCLUDED_STATE_KEYS = {"messages", "todos", "structured_response"}
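
위 제외 키가 서브에이전트 호출 시 어떻게 쓰이는지 독립 함수로 나타내면 다음과 같습니다. 실제 구현은 `HumanMessage`를 사용하지만, 여기서는 의존성 없이 실행되도록 dict로 단순화했다는 점만 다릅니다(가정).

```python
_EXCLUDED_STATE_KEYS = {"messages", "todos", "structured_response"}

def filter_state_for_subagent(state: dict, description: str) -> dict:
    """서브에이전트로 넘길 state 사본을 만듭니다(원본은 변경하지 않음)."""
    subagent_state = {k: v for k, v in state.items() if k not in _EXCLUDED_STATE_KEYS}
    # messages는 task 설명 하나로 대체합니다. 실제 코드는 HumanMessage(content=description)입니다.
    subagent_state["messages"] = [{"role": "user", "content": description}]
    return subagent_state
```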
TASK_TOOL_DESCRIPTION = """Launch an ephemeral subagent to handle complex, multi-step independent tasks with isolated context windows.
@@ -219,29 +216,29 @@ def _get_subagents(
subagents: list[SubAgent | CompiledSubAgent],
general_purpose_agent: bool,
) -> tuple[dict[str, Any], list[str]]:
"""Create subagent instances from specifications.
"""서브에이전트 spec에서 runnable 인스턴스를 생성합니다.
Args:
default_model: Default model for subagents that don't specify one.
default_tools: Default tools for subagents that don't specify tools.
default_middleware: Middleware to apply to all subagents. If `None`,
no default middleware is applied.
default_interrupt_on: The tool configs to use for the default general-purpose subagent. These
are also the fallback for any subagents that don't specify their own tool configs.
subagents: List of agent specifications or pre-compiled agents.
general_purpose_agent: Whether to include a general-purpose subagent.
default_model: 서브에이전트가 별도 모델을 지정하지 않을 때 사용할 기본 모델.
default_tools: 서브에이전트가 별도 도구를 지정하지 않을 때 사용할 기본 도구.
default_middleware: 모든 서브에이전트에 공통 적용할 미들웨어. `None`이면 적용하지 않습니다.
default_interrupt_on: 기본 general-purpose 서브에이전트에 사용할 tool interrupt 설정.
또한 개별 서브에이전트가 별도 설정을 지정하지 않았을 때의 fallback입니다.
subagents: 서브에이전트 spec 또는 사전 컴파일된 agent 목록.
general_purpose_agent: general-purpose 서브에이전트를 포함할지 여부.
Returns:
Tuple of (agent_dict, description_list) where agent_dict maps agent names
to runnable instances and description_list contains formatted descriptions.
`(agent_dict, description_list)` 튜플.
`agent_dict`는 에이전트 이름 → runnable 인스턴스를 매핑하고,
`description_list`는 task 도구에 주입될 포맷된 설명 목록입니다.
"""
# Use empty list if None (no default middleware)
# `None`이면 빈 리스트로 대체(기본 미들웨어 미적용)
default_subagent_middleware = default_middleware or []
agents: dict[str, Any] = {}
subagent_descriptions = []
# Create general-purpose agent if enabled
# 활성화된 경우 general-purpose 에이전트를 생성
# 활성화된 경우 general-purpose 에이전트를 생성
if general_purpose_agent:
general_purpose_middleware = [*default_subagent_middleware]
if default_interrupt_on:
@@ -255,7 +252,7 @@ def _get_subagents(
agents["general-purpose"] = general_purpose_subagent
subagent_descriptions.append(f"- general-purpose: {DEFAULT_GENERAL_PURPOSE_DESCRIPTION}")
# Process custom subagents
# 커스텀 서브에이전트를 처리
for agent_ in subagents:
subagent_descriptions.append(f"- {agent_['name']}: {agent_['description']}")
if "runnable" in agent_:
@@ -291,7 +288,7 @@ def _create_task_tool(
general_purpose_agent: bool,
task_description: str | None = None,
) -> BaseTool:
"""Create a task tool for invoking subagents.
"""서브에이전트를 호출하는 `task` 도구를 생성합니다.
Args:
default_model: Default model for subagents.
@@ -319,7 +316,7 @@ def _create_task_tool(
def _return_command_with_state_update(result: dict, tool_call_id: str) -> Command:
state_update = {k: v for k, v in result.items() if k not in _EXCLUDED_STATE_KEYS}
# Strip trailing whitespace to prevent API errors with Anthropic
# Anthropic API에서 오류가 나지 않도록 trailing whitespace를 제거합니다.
message_text = result["messages"][-1].text.rstrip() if result["messages"][-1].text else ""
return Command(
update={
@@ -329,14 +326,14 @@ def _create_task_tool(
)
def _validate_and_prepare_state(subagent_type: str, description: str, runtime: ToolRuntime) -> tuple[Runnable, dict]:
"""Prepare state for invocation."""
"""서브에이전트 호출을 위한 state를 준비합니다."""
subagent = subagent_graphs[subagent_type]
# Create a new state dict to avoid mutating the original
subagent_state = {k: v for k, v in runtime.state.items() if k not in _EXCLUDED_STATE_KEYS}
subagent_state["messages"] = [HumanMessage(content=description)]
return subagent, subagent_state
# Use custom description if provided, otherwise use default template
# 커스텀 설명이 주어지면 사용하고, 아니면 기본 템플릿을 사용합니다.
if task_description is None:
task_description = TASK_TOOL_DESCRIPTION.format(available_agents=subagent_description_str)
elif "{available_agents}" in task_description:
@@ -382,7 +379,7 @@ def _create_task_tool(
class SubAgentMiddleware(AgentMiddleware):
"""Middleware for providing subagents to an agent via a `task` tool.
"""`task` 도구를 통해 에이전트에 서브에이전트를 제공하는 미들웨어입니다.
This middleware adds a `task` tool to the agent that can be used to invoke subagents.
Subagents are useful for handling complex tasks that require multiple steps, or tasks
@@ -454,7 +451,7 @@ class SubAgentMiddleware(AgentMiddleware):
general_purpose_agent: bool = True,
task_description: str | None = None,
) -> None:
"""Initialize the SubAgentMiddleware."""
"""SubAgentMiddleware를 초기화합니다."""
super().__init__()
self.system_prompt = system_prompt
task_tool = _create_task_tool(
@@ -473,7 +470,7 @@ class SubAgentMiddleware(AgentMiddleware):
request: ModelRequest,
handler: Callable[[ModelRequest], ModelResponse],
) -> ModelResponse:
"""Update the system prompt to include instructions on using subagents."""
"""System prompt에 서브에이전트 사용 지침을 포함하도록 업데이트합니다."""
if self.system_prompt is not None:
system_prompt = request.system_prompt + "\n\n" + self.system_prompt if request.system_prompt else self.system_prompt
return handler(request.override(system_prompt=system_prompt))
@@ -484,7 +481,7 @@ class SubAgentMiddleware(AgentMiddleware):
request: ModelRequest,
handler: Callable[[ModelRequest], Awaitable[ModelResponse]],
) -> ModelResponse:
"""(async) Update the system prompt to include instructions on using subagents."""
"""(async) System prompt에 서브에이전트 사용 지침을 포함하도록 업데이트합니다."""
if self.system_prompt is not None:
system_prompt = request.system_prompt + "\n\n" + self.system_prompt if request.system_prompt else self.system_prompt
return await handler(request.override(system_prompt=system_prompt))

View File

@@ -1,4 +1,4 @@
"""Implement harbor backend."""
"""Harbor backend 구현입니다."""
import base64
import shlex
@@ -15,22 +15,21 @@ from harbor.environments.base import BaseEnvironment
class HarborSandbox(SandboxBackendProtocol):
"""A sandbox implementation without assuming that python3 is available."""
"""python3 존재를 가정하지 않는 샌드박스 구현체입니다."""
def __init__(self, environment: BaseEnvironment) -> None:
"""Initialize HarborSandbox with the given environment."""
"""주어진 environment로 HarborSandbox를 초기화합니다."""
self.environment = environment
async def aexecute(
self,
command: str,
) -> ExecuteResponse:
"""Execute a bash command in the task environment."""
"""작업 환경에서 bash 커맨드를 실행합니다(async)."""
result = await self.environment.exec(command)
# These errors appear in harbor environments when running bash commands
# in non-interactive/non-TTY contexts. They're harmless artifacts.
# Filter them from both stdout and stderr, then collect them to show in stderr.
# Harbor 환경에서 non-interactive/non-TTY 컨텍스트로 bash를 실행할 때 자주 등장하는 오류 메시지들입니다.
# 대부분 무해한 아티팩트이므로 stdout/stderr에서 제거한 뒤, stderr에만 정리해서 붙입니다.
error_messages = [
"bash: cannot set terminal process group (-1): Inappropriate ioctl for device",
"bash: no job control in this shell",
@@ -40,7 +39,7 @@ class HarborSandbox(SandboxBackendProtocol):
stdout = result.stdout or ""
stderr = result.stderr or ""
# Collect the bash messages if they appear (to move to stderr)
# bash 노이즈 메시지가 있으면 수집합니다(stderr로 옮기기 위해)
bash_messages = []
for error_msg in error_messages:
if error_msg in stdout:
@@ -52,12 +51,12 @@ class HarborSandbox(SandboxBackendProtocol):
stdout = stdout.strip()
stderr = stderr.strip()
# Add bash messages to stderr
# bash 메시지를 stderr에 추가
if bash_messages:
bash_msg_text = "\n".join(bash_messages)
stderr = f"{bash_msg_text}\n{stderr}".strip() if stderr else bash_msg_text
# Only append stderr label if there's actual stderr content
# stderr가 실제로 있을 때만 라벨을 붙입니다.
if stderr:
output = stdout + "\n\n stderr: " + stderr if stdout else "\n stderr: " + stderr
else:
@@ -71,12 +70,12 @@ class HarborSandbox(SandboxBackendProtocol):
self,
command: str,
) -> ExecuteResponse:
"""Execute a bash command in the task environment."""
"""작업 환경에서 bash 커맨드를 실행합니다."""
raise NotImplementedError("This backend only supports async execution")
@property
def id(self) -> str:
"""Unique identifier for the sandbox backend."""
"""샌드박스 백엔드 인스턴스의 고유 식별자."""
return self.environment.session_id
async def aread(
@@ -85,11 +84,11 @@ class HarborSandbox(SandboxBackendProtocol):
offset: int = 0,
limit: int = 2000,
) -> str:
"""Read file content with line numbers using shell commands."""
# Escape file path for shell
"""셸 커맨드를 이용해 파일을 읽고 라인 번호와 함께 반환합니다(async)."""
# 셸에서 안전하게 쓰도록 경로를 escape
safe_path = shlex.quote(file_path)
# Check if file exists and handle empty files
# 파일 존재 여부 확인 및 빈 파일 처리
cmd = f"""
if [ ! -f {safe_path} ]; then
echo "Error: File not found"

View File

@@ -1,4 +1,4 @@
"""A wrapper for DeepAgents to run in Harbor environments."""
"""Harbor 환경에서 DeepAgents를 실행하기 위한 래퍼(wrapper)입니다."""
import json
import os
@@ -7,14 +7,11 @@ from datetime import datetime, timezone
from pathlib import Path
from deepagents import create_deep_agent
from deepagents_cli.agent import create_cli_agent
from dotenv import load_dotenv
from harbor.agents.base import BaseAgent
from harbor.environments.base import BaseEnvironment
from harbor.models.agent.context import AgentContext
# Load .env file if present
load_dotenv()
from deepagents_cli.agent import create_cli_agent
from harbor.models.trajectories import (
Agent,
FinalMetrics,
@@ -33,6 +30,9 @@ from langsmith import trace
from deepagents_harbor.backend import HarborSandbox
from deepagents_harbor.tracing import create_example_id_from_instruction
# .env 파일이 있으면 로드합니다.
load_dotenv()
SYSTEM_MESSAGE = """
You are an autonomous agent executing tasks in a sandboxed environment. Follow these instructions carefully.
@@ -53,9 +53,9 @@ Your current working directory is:
class DeepAgentsWrapper(BaseAgent):
"""Harbor agent implementation using LangChain DeepAgents.
"""LangChain DeepAgents를 이용한 Harbor 에이전트 구현체입니다.
Wraps DeepAgents to execute tasks in Harbor environments.
Harbor 환경에서 DeepAgents로 작업을 실행할 수 있도록 감쌉니다.
"""
def __init__(
@@ -68,7 +68,7 @@ class DeepAgentsWrapper(BaseAgent):
*args,
**kwargs,
) -> None:
"""Initialize DeepAgentsWrapper.
"""DeepAgentsWrapper를 초기화합니다.
Args:
logs_dir: Directory for storing logs
@@ -99,7 +99,7 @@ class DeepAgentsWrapper(BaseAgent):
return "deepagent-harbor"
async def setup(self, environment: BaseEnvironment) -> None:
"""Setup the agent with the given environment.
"""주어진 environment로 에이전트를 설정합니다.
Args:
environment: Harbor environment (Docker, Modal, etc.)
@@ -107,11 +107,11 @@ class DeepAgentsWrapper(BaseAgent):
pass
def version(self) -> str | None:
"""The version of the agent."""
"""에이전트 버전을 반환합니다."""
return "0.0.1"
async def _get_formatted_system_prompt(self, backend: HarborSandbox) -> str:
"""Format the system prompt with current directory and file listing context.
"""현재 디렉토리/파일 목록 컨텍스트를 포함하도록 system prompt를 포맷팅합니다.
Args:
backend: Harbor sandbox backend to query for directory information
@@ -126,7 +126,6 @@ class DeepAgentsWrapper(BaseAgent):
# Get first 10 files
total_files = len(ls_info) if ls_info else 0
first_10_files = ls_info[:10] if ls_info else []
has_more = total_files > 10
# Build file listing header based on actual count
if total_files == 0:
@@ -157,7 +156,7 @@ class DeepAgentsWrapper(BaseAgent):
environment: BaseEnvironment,
context: AgentContext,
) -> None:
"""Execute the DeepAgent on the given instruction.
"""주어진 instruction으로 DeepAgent를 실행합니다.
Args:
instruction: The task to complete
@@ -259,7 +258,7 @@ class DeepAgentsWrapper(BaseAgent):
def _save_trajectory(
self, environment: BaseEnvironment, instruction: str, result: dict
) -> None:
"""Save current trajectory to logs directory."""
"""현재 trajectory logs 디렉토리에 저장합니다."""
# Track token usage and cost for this run
total_prompt_tokens = 0
total_completion_tokens = 0

View File

@@ -1,11 +1,11 @@
"""LangSmith integration for Harbor DeepAgents."""
"""Harbor DeepAgents용 LangSmith 연동 코드입니다."""
import hashlib
import uuid
def create_example_id_from_instruction(instruction: str, seed: int = 42) -> str:
"""Create a deterministic UUID from an instruction string.
"""instruction 문자열로부터 결정적인(deterministic) UUID를 생성합니다.
Normalizes the instruction by stripping whitespace and creating a
SHA-256 hash, then converting to a UUID for LangSmith compatibility.
@@ -17,16 +17,16 @@ def create_example_id_from_instruction(instruction: str, seed: int = 42) -> str:
Returns:
A UUID string generated from the hash of the normalized instruction
"""
# Normalize the instruction: strip leading/trailing whitespace
# instruction 정규화: 앞/뒤 공백 제거
normalized = instruction.strip()
# Prepend seed as bytes to the instruction for hashing
# 해싱을 위해 seed를 bytes로 변환해 instruction 앞에 붙입니다.
seeded_data = seed.to_bytes(8, byteorder="big") + normalized.encode("utf-8")
# Create SHA-256 hash of the seeded instruction
# seed가 포함된 instruction의 SHA-256 해시 생성
hash_bytes = hashlib.sha256(seeded_data).digest()
# Use first 16 bytes to create a UUID
# 앞 16바이트를 사용해 UUID 생성
example_uuid = uuid.UUID(bytes=hash_bytes[:16])
return str(example_uuid)
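
위 함수의 핵심 성질(같은 instruction이면 공백을 정규화한 뒤 항상 같은 example id, seed가 다르면 다른 id)은 diff에 보이는 로직을 그대로 옮겨 확인할 수 있습니다:

```python
import hashlib
import uuid

def create_example_id(instruction: str, seed: int = 42) -> str:
    """instruction에서 결정적인(deterministic) UUID 문자열을 만듭니다."""
    normalized = instruction.strip()  # 앞/뒤 공백 제거로 정규화
    seeded = seed.to_bytes(8, byteorder="big") + normalized.encode("utf-8")
    digest = hashlib.sha256(seeded).digest()
    return str(uuid.UUID(bytes=digest[:16]))  # 해시의 앞 16바이트로 UUID 생성
```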

View File

@@ -1,5 +1,7 @@
#!/usr/bin/env python3
"""Analyze job trials from a jobs directory.
"""jobs 디렉토리의 trial 실행 결과를 분석합니다.
Analyze job trials from a jobs directory.
Scans through trial directories, extracts trajectory data and success metrics.
"""
@@ -783,7 +785,7 @@ async def main():
if output_file:
print(f" ✓ Analysis written to: {output_file}")
else:
print(f" ✗ Skipped (no trajectory or already completed)")
print(" ✗ Skipped (no trajectory or already completed)")
except Exception as e:
print(f" ✗ Error: {e}")

View File

@@ -1,5 +1,6 @@
#!/usr/bin/env python3
"""
"""Harbor용 LangSmith 연동 CLI입니다.
CLI for LangSmith integration with Harbor.
Provides commands for:
@@ -243,12 +244,12 @@ async def create_experiment_async(dataset_name: str, experiment_name: str | None
session_id = experiment_session["id"]
tenant_id = experiment_session["tenant_id"]
print(f"✓ Experiment created successfully!")
print("✓ Experiment created successfully!")
print(f" Session ID: {session_id}")
print(
f" View at: https://smith.langchain.com/o/{tenant_id}/datasets/{dataset_id}/compare?selectedSessions={session_id}"
)
print(f"\nTo run Harbor with this experiment, use:")
print("\nTo run Harbor with this experiment, use:")
print(f" LANGSMITH_EXPERIMENT={experiment_name} harbor run ...")
return session_id

View File

@@ -41,7 +41,10 @@ packages = ["research_agent"]
"*" = ["py.typed"]
[tool.ruff]
lint.select = [
extend-exclude = ["*.ipynb"]
[tool.ruff.lint]
select = [
"E", # pycodestyle
"F", # pyflakes
"I", # isort
@@ -50,14 +53,15 @@ lint.select = [
"T201",
"UP",
]
lint.ignore = ["UP006", "UP007", "UP035", "D417", "E501", "D101", "D102", "D103", "D105", "D107"]
[tool.ruff.lint.per-file-ignores]
"tests/*" = ["D", "UP", "T201"]
ignore = ["UP006", "UP007", "UP035", "D417", "E501", "D101", "D102", "D103", "D105", "D107"]
[tool.ruff.lint.pydocstyle]
convention = "google"
[tool.ruff.lint.per-file-ignores]
"tests/*" = ["D", "UP", "T201"]
"skills/**/*.py" = ["T201"]
[tool.pytest.ini_options]
markers = [
"integration: 실제 외부 서비스(Docker, API 등)를 사용하는 통합 테스트",