WinClaw Local API

WinClaw runs a local HTTP service on 127.0.0.1:9199 that accepts task submissions from external programs. This lets CLI tools, scripts, and automation workflows submit tasks, stream AI responses, and integrate with the WinClaw Agent programmatically, with access to the PC Manager Agent and RAG knowledge bases through a simple HTTP interface.

A typical use case: the agent_cron scheduler uses this API to submit prompts to WinClaw on a cron schedule, enabling fully automated AI-powered workflows.

API Endpoints

Submit and Stream

  • URL: POST http://127.0.0.1:9199/api/external/tasks/stream
  • Content-Type: application/json
  • Response: text/event-stream (SSE)

Cancel a Running Task

  • URL: POST http://127.0.0.1:9199/api/external/tasks/cancel
  • Content-Type: application/json
  • Body: { "request_id": "..." }

Request Schema

{
  "engine": "pc_manager | rag",
  "config_id": "string",
  "prompt": "string",
  "topic_id": "optional string",
  "conversation_id": "optional string",
  "conversation_title": "optional string",
  "model_override": "optional string",
  "request_id": "optional string",
  "metadata": { "optional": "object" },
  "include_history": true,
  "temperature": 0.7,
  "max_tokens": 2000
}

Fields

| Field | Required | Default | Description |
| --- | --- | --- | --- |
| prompt | Yes | | The task instruction to send to the AI |
| engine | No | pc_manager | Which engine to use (see Supported Engines) |
| config_id | No | default | Configuration identifier (engine-specific) |
| topic_id | No | auto-created | Topic ID for session grouping |
| conversation_id | No | auto-created | Conversation ID for message threading |
| conversation_title | No | | Human-readable title for the conversation |
| request_id | No | auto-generated UUID | Unique request identifier for cancellation |
| metadata | No | | Arbitrary key-value metadata attached to the task |
| include_history | No | true | Whether to inject conversation history (for the rag engine) |
| temperature | No | | Model temperature parameter |
| max_tokens | No | | Maximum tokens for the model response |
| model_override | No | | Override the default model |

Supported Engines

| Engine | Description | config_id |
| --- | --- | --- |
| pc_manager | PC Manager Agent — can execute commands, read/write files, search code, browse the web, and more | Any value, e.g. default |
| rag | Retrieval-augmented generation — answers questions based on a knowledge base | The corresponding rag_id |

SSE Event Stream

The response is a Server-Sent Events (SSE) stream. Events are delivered in this order:

| Event Type | Description | Key Fields |
| --- | --- | --- |
| task | First event, contains session identifiers | topic_id, conversation_id, user_message_id, request_id |
| connection | Connection established | status |
| content | Incremental response text | content |
| reasoning | Model reasoning steps (if applicable) | content |
| tool_call | Tool invocation by the agent | Tool details |
| tool_result | Result of a tool invocation | Result details |
| done | Task completed successfully | assistant_message_id, content (full response) |
| error | Task failed | message |
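A minimal parser for this stream can be written with only the Python standard library. This is a sketch, not part of WinClaw itself; the fallback to the payload's type field is an assumption based on the example output below, where content events arrive as bare data: lines without a preceding event: line.

```python
import json

def parse_sse_lines(lines):
    """Yield (event_type, payload) tuples from an SSE line stream.

    Falls back to the payload's "type" field when no explicit
    "event:" line precedes the "data:" line.
    """
    event = None
    for line in lines:
        line = line.rstrip("\r\n")
        if line.startswith("event: "):
            event = line[len("event: "):]
        elif line.startswith("data: "):
            payload = json.loads(line[len("data: "):])
            yield event or payload.get("type"), payload
            event = None  # an event name applies to a single data line
```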

Example SSE Output

event: task
data: {"type": "task", "topic_id": "35978dd8-...", "conversation_id": "6ab5a646-...", "user_message_id": "3e613f07-...", "request_id": "b256c878-..."}

event: connection
data: {"type": "connection", "status": "connected"}

data: {"type": "content", "content": "I"}
data: {"type": "content", "content": " am"}
data: {"type": "content", "content": " a professional"}
data: {"type": "content", "content": " AI"}

data: {"type": "done", "content": "Full response text...", "assistant_message_id": "xxx"}

History Persistence

External tasks are automatically saved into WinClaw's conversation history:

  1. User message is saved immediately (role=user)
  2. Model events are streamed to the client
  3. On done, the assistant message is saved (role=assistant)
  4. On error, an error message is saved (status=error in metadata)

This means tasks submitted via the API appear in WinClaw's homepage session history, just like tasks initiated through the UI.

Quick Examples

cURL — Stream a Task (pc_manager)

curl -N -X POST http://127.0.0.1:9199/api/external/tasks/stream \
  -H "Content-Type: application/json" \
  -d '{
    "prompt": "List my current directory structure",
    "engine": "pc_manager",
    "config_id": "default"
  }'

cURL — With Custom Metadata and Title

curl -N -X POST http://127.0.0.1:9199/api/external/tasks/stream \
  -H "Content-Type: application/json" \
  -d '{
    "prompt": "Summarize the current project structure",
    "engine": "pc_manager",
    "config_id": "default",
    "conversation_title": "Project Structure Summary",
    "metadata": {"source": "external-client", "traceId": "t-001"}
  }'

cURL — Query a Knowledge Base (RAG)

curl -N -X POST http://127.0.0.1:9199/api/external/tasks/stream \
  -H "Content-Type: application/json" \
  -d '{
    "prompt": "How does the project handle user authentication?",
    "engine": "rag",
    "config_id": "<your-rag-id>",
    "include_history": true
  }'

cURL — Continue an Existing Conversation

curl -N -X POST http://127.0.0.1:9199/api/external/tasks/stream \
  -H "Content-Type: application/json" \
  -d '{
    "prompt": "Continue the topic above, explain in more detail",
    "engine": "pc_manager",
    "config_id": "default",
    "topic_id": "<from-task-event>",
    "conversation_id": "<from-task-event>"
  }'

cURL — Cancel a Running Task

curl -X POST http://127.0.0.1:9199/api/external/tasks/cancel \
  -H "Content-Type: application/json" \
  -d '{"request_id": "<from-task-event>"}'

Response:

{"success": true, "request_id": "xxx", "message": "Task cancelled"}

PowerShell

$body = @{
  engine = "pc_manager"
  config_id = "default"
  prompt = "Summarize the current project structure"
  metadata = @{ source = "external-client"; traceId = "t-001" }
} | ConvertTo-Json -Depth 5

Invoke-WebRequest `
  -Uri "http://127.0.0.1:9199/api/external/tasks/stream" `
  -Method Post `
  -ContentType "application/json" `
  -Body $body

Python

import requests
import json

resp = requests.post(
    "http://127.0.0.1:9199/api/external/tasks/stream",
    json={
        "prompt": "Hello, briefly introduce yourself",
        "engine": "pc_manager",
        "config_id": "default",
    },
    stream=True,
)

for line in resp.iter_lines(decode_unicode=True):
    if not line:
        continue
    if line.startswith("data: "):
        data = json.loads(line[6:])
        evt_type = data.get("type")
        if evt_type == "task":
            print(f"Task started: request_id={data['request_id']}")
        elif evt_type == "content":
            print(data["content"], end="", flush=True)
        elif evt_type == "done":
            print(f"\n\nDone: assistant_message_id={data.get('assistant_message_id')}")
        elif evt_type == "error":
            print(f"\nError: {data['error']}")

Integration with CLI Tools

Environment Variable

CLI tools that call this API should respect the WINCLAW_LOCAL_BASE_URL environment variable:

func resolveBaseURL() string {
    if v := os.Getenv("WINCLAW_LOCAL_BASE_URL"); v != "" {
        return strings.TrimRight(v, "/")
    }
    return "http://127.0.0.1:9199"
}

This allows the API endpoint to be customized when WinClaw runs on a different port.
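A Python client can apply the same resolution logic; the following is a direct translation of the Go snippet above:

```python
import os

def resolve_base_url():
    """Return the WinClaw base URL, honoring WINCLAW_LOCAL_BASE_URL."""
    v = os.environ.get("WINCLAW_LOCAL_BASE_URL", "")
    if v:
        return v.rstrip("/")
    return "http://127.0.0.1:9199"
```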

Example: agent_cron External Task

The agent_cron tool uses this API to submit scheduled AI tasks. When a cron job of type external_task fires, the daemon:

  1. Builds the request body from job configuration (prompt, engine, config_id, metadata)
  2. POSTs to /api/external/tasks/stream
  3. Parses the SSE stream to extract request_id, content, and assistant_message_id
  4. Records the results in a run record for later inspection via agent_cron job logs

# Add a scheduled external task
agent_cron job add --type external_task \
  --name "morning-report" \
  --cron "0 0 9 * * 1-5" \
  --prompt "Summarize yesterday's work and list today's plan"

# The daemon will POST to WinClaw's local API every weekday at 9 AM

Integration Workflow

For tools building their own integration with this API:

  1. POST /api/external/tasks/stream with your prompt
  2. Parse SSE lines (event: + data:)
  3. Read the first task event and cache the IDs (request_id, topic_id, conversation_id)
  4. Render content / reasoning / tool_* events incrementally
  5. On done, store assistant_message_id for traceability
  6. If needed, call /api/external/tasks/cancel with request_id
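These steps can be sketched as a small Python consumer. This is a sketch, not an official client; the event and field names come from the SSE event table earlier in this document:

```python
import json

def consume_stream(lines):
    """Fold the SSE lines of one task into a summary dict.

    Caches the IDs from the first task event, accumulates the
    incremental content chunks, and records the final
    assistant_message_id (or the error message on failure).
    """
    result = {"ids": {}, "content": "", "assistant_message_id": None, "error": None}
    for line in lines:
        if not line.startswith("data: "):
            continue  # skip "event:" lines and blank keep-alives
        data = json.loads(line[len("data: "):])
        t = data.get("type")
        if t == "task":
            result["ids"] = {k: data.get(k)
                             for k in ("request_id", "topic_id", "conversation_id")}
        elif t == "content":
            result["content"] += data.get("content", "")
        elif t == "done":
            result["assistant_message_id"] = data.get("assistant_message_id")
        elif t == "error":
            result["error"] = data.get("message")
    return result
```

With the requests library, the live stream from step 1 can be fed in directly: consume_stream(resp.iter_lines(decode_unicode=True)).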