
Releases: lukilabs/craft-agents-oss

v0.8.13

29 Apr 08:16


v0.8.13 — Queued-message clarity, DeepSeek tool-call recovery, automation reasoning overrides

Features

  • Automation prompt actions can set a per-action thinking level — automations.json prompt actions now accept thinkingLevel, matching the existing per-action llmConnection and model overrides. The value is validated in the shared schema, propagated through pending prompts and the "Run test" RPC path, and passed to spawned automation sessions with the same workspace-default fallback semantics as manual sessions. Persisted legacy "think" values are migrated instead of breaking config parsing. (3f8a7d13)
  • Automation action rows now show model-selection badges — The Automation Info page surfaces configured connection, model, and thinking-level overrides as compact badges under each prompt action, so users can audit which automation will run on which model without opening automations.json. (78d08edd)
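Put together, a prompt action using these overrides might look like the fragment below. Only thinkingLevel, llmConnection, and model are named in the notes above; the surrounding keys and values are illustrative.

```json
{
  "actions": [
    {
      "type": "prompt",
      "prompt": "Summarize yesterday's open issues",
      "llmConnection": "openai-main",
      "model": "gpt-5.5",
      "thinkingLevel": "high"
    }
  ]
}
```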

Improvements

  • Queued mid-stream messages are visibly acknowledged without glowing bubbles — When a user sends while an agent is already responding, the optimistic user bubble now keeps an inline pulsing Queued indicator for at least 2.5 seconds, including Pi/DeepSeek/OpenAI-compatible sessions whose backend acknowledgement arrives almost immediately. Pending user bubbles no longer use the global shimmer overlay; the queued chip is the only transient visual state. The final implementation keeps the optimistic React key stable so the timer is not reset by the server's canonical message id. Fixes #616. (eb81086e, 0d9ca6b5, 083e6f90, 67353f42, fee4c2d1)
  • OpenAI-compatible tool-call failures now produce actionable diagnostics — The unified network interceptor validates outgoing Chat Completions and Responses API bodies before they hit the provider, enriches opaque empty-body 400 responses with sanitized request summaries, and reports malformed histories with structured error details instead of leaving sessions stuck on provider-specific messages. Partially addresses #612. (ab3c8eac)
  • Interceptor development now uses live source in monorepo runs — Non-packaged Pi subprocesses preload packages/shared/src/unified-network-interceptor.ts directly instead of a stale built bundle, so interceptor changes take effect after a subprocess restart during development. A new CRAFT_DEBUG_SSE_RAW=1 toggle can dump raw OpenAI-compatible SSE lines to interceptor.log when diagnosing relay behavior. (13d13635)
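The 2.5-second minimum display for the queued chip can be sketched as a pure helper. The names and shape here are assumptions; only the 2.5 s minimum and the fast-acknowledgement behavior come from the notes.

```typescript
// Assumed helper: keep the "Queued" chip visible for a minimum window even
// when the backend acknowledgement arrives almost immediately.
const MIN_QUEUED_VISIBLE_MS = 2500;

export function shouldShowQueuedChip(
  queuedAtMs: number,
  nowMs: number,
  acknowledged: boolean,
): boolean {
  if (!acknowledged) return true; // still pending: always show the chip
  return nowMs - queuedAtMs < MIN_QUEUED_VISIBLE_MS; // linger briefly after ack
}
```

Keeping the optimistic bubble's React key stable matters here: if the key changed when the canonical server message id arrived, the component would remount and the timer's reference point would reset.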

Bug Fixes

  • DeepSeek and OpenAI-compatible tool calls no longer corrupt replay history with duplicate or missing ids — Streaming reassembly now deduplicates relays that repeat tool_call.id, handles providers that omit tc.index on argument deltas, and consolidates DeepSeek's two-phase id/name + shifted-args stream into one logical tool-call event per call. Responses API replay now synthesizes deterministic missing call_id values and drops orphan outputs. Already-poisoned histories are sanitized on replay by removing empty-id tool calls and orphan tool results, so affected sessions can recover without manual JSONL edits. Fixes #613, #621, and #602. (ab3c8eac, 13d13635)
  • Queued messages stay in chat during silent redirects — The renderer now distinguishes explicit user stops from silent backend redirects: queued bubbles are only restored to the input on an intentional stop, while background redirects leave the bubble in chat for backend replay. sendMessage now acknowledges persistence with { accepted, messageId }, queued replay failures emit a typed retryable error, and queue boundary logs make support triage easier. Fixes #616. (eb81086e)
  • Pi/Codex mini-completions avoid unsupported codex-mini variants under ChatGPT-account auth — Title generation, summarization, and mini completions now filter the whole codex-mini family at selection time for openai-codex ChatGPT-account connections, preventing the "not supported when using Codex with a ChatGPT account" failure before a query is attempted. (95618531)
  • Pi built-in tools accept Craft UI metadata safely — Strict Pi tool schemas now allow Craft's root-level _displayName and _intent fields at the adapter boundary, then strip them before invoking the upstream tool implementation. This preserves the richer tool-call UI metadata without tripping schema validation on built-in tools. (16d103d2)
  • Attachments clear after sending — The composer now clears staged attachments after a message is sent instead of leaving previously attached files ready to resend accidentally. (70f9a80a)
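The delta-reassembly part of the tool-call fix can be pictured with a simplified sketch that keys accumulation by tool_call.id and attaches index-less argument deltas to the most recent call. Types and names are assumed; the actual interceptor handles more cases (two-phase id/name streams, orphan outputs, history sanitization).

```typescript
// Simplified sketch of streaming tool-call reassembly. Providers may repeat
// a tool_call id across deltas or omit the index entirely; keying by id
// keeps each logical call as one accumulated entry.
interface ToolCallDelta {
  id?: string;
  name?: string;
  argsFragment?: string;
}

export function reassembleToolCalls(
  deltas: ToolCallDelta[],
): Map<string, { name: string; args: string }> {
  const calls = new Map<string, { name: string; args: string }>();
  let lastId: string | undefined;
  for (const d of deltas) {
    // Deltas without an id (e.g. a missing tc.index) attach to the last call seen.
    const id = d.id ?? lastId;
    if (!id) continue;
    lastId = id;
    const entry = calls.get(id) ?? { name: "", args: "" };
    if (d.name) entry.name = d.name;
    if (d.argsFragment) entry.args += d.argsFragment;
    calls.set(id, entry);
  }
  return calls;
}
```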

Breaking Changes

  • None.

v0.8.12

24 Apr 22:09


v0.8.12 — GPT-5.5 + DeepSeek providers, Pi tool registration restored, composer & diff hardening

Features

  • GPT-5.5 is now the default for openai and openai-codex — Pi SDK 0.70.0 added gpt-5.5 to the OpenAI catalog, so PI_PREFERRED_DEFAULTS now picks it as the default model for both the openai and openai-codex auth providers instead of whatever the SDK returned first. New API-key connections and Craft Agents Backend (OpenAI) connections land on gpt-5.5 out of the box; existing connections keep their explicit model choice. Fixes #597.
  • DeepSeek is now a supported Pi-backed provider — Adds DeepSeek to PROVIDER_METADATA (dashboard URL), PI_PROVIDER_DISPLAY (label + placeholder), and PI_PREFERRED_DEFAULTS (deepseek-v4-pro / deepseek-v4-flash) so connections default to a modern model instead of whatever the Pi SDK returns first. The renderer picks up the new provider automatically via PI_AUTH_PROVIDER_DOMAINS (deepseek.com) for favicon resolution, the API-setup preset, and the settings page label. CLI gains a DEEPSEEK_API_KEY env key and extracts resolveApiKey, shouldSetupLlmConnection, and getProviderDisplayName as testable exports; --base-url auto-setup now works for non-anthropic providers and the validate step shares the same resolver path. Fixes #600.
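The preferred-defaults behavior described in both entries amounts to a small lookup with a catalog-order fallback. This is a sketch under assumed shapes, not the actual PI_PREFERRED_DEFAULTS implementation; model and provider names are taken from the notes above.

```typescript
// Sketch: pick a default model for an auth provider from a preference list,
// falling back to whatever the SDK catalog happened to return first.
const PREFERRED_DEFAULTS: Record<string, string[]> = {
  openai: ["gpt-5.5"],
  "openai-codex": ["gpt-5.5"],
  deepseek: ["deepseek-v4-pro", "deepseek-v4-flash"],
};

export function pickDefaultModel(
  provider: string,
  catalog: string[],
): string | undefined {
  for (const preferred of PREFERRED_DEFAULTS[provider] ?? []) {
    if (catalog.includes(preferred)) return preferred;
  }
  return catalog[0]; // previous behavior: first model the SDK returned
}
```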

Improvements

  • source_test now auto-enables and auto-restarts the turn so tools become callable immediately — Previously source_test only validated a source; users with a valid config but enabled: false had to flip the flag manually and restart the session, even though every check passed. The tool now flips enabled: true when needed and triggers the session's existing onSourceActivationRequest callback so the MCP/API servers are built and applied to the running agent. The follow-up fix (this release) routes the successful activation through the same source_activated + auto_retry machinery that already handled "tool not found on inactive source" errors: after activation, the current turn aborts cleanly and the renderer resends the user's original message with a [{slug} activated] suffix — giving the next query()/handlePrompt a fresh tool list with the new source live. This fixes a Claude-specific bug where source_test reported "tools available now" but the SDK had already frozen mcpServers at query-start, so mcp__{slug}__* tools were invisible to the model until the user typed another message. Pi behaves the same way for consistency (and also required a turn boundary — its subprocess only picks up new proxy tool defs on the next handlePrompt). Opt out with autoEnable: false to keep pure-validation behavior. No change to Codex or other backends without an activation callback — they still get the enabled flip and a clear "restart session to load tools" hint.
  • spawn_session accepts thinkingLevel — Agents can now set the reasoning level when delegating to a spawned session (off | low | medium | high | xhigh | max), instead of always inheriting the parent session's level or workspace default. Silently ignored on non-reasoning models (e.g. gpt-4o, gemini-2.5-flash): the Pi provider drivers and Claude SDK both gate the reasoning param on the model's capabilities, so passing thinkingLevel to a non-reasoning model is a safe no-op rather than an error. Also fixes a latent bug where createSession({ thinkingLevel }) in the session-manager API was silently ignored — the option is now honored with caller → workspace → global precedence, matching how permissionMode already worked. Fixes #462.
  • Real typecheck gate for pi-agent-server — The package's typecheck script was aliased to bun run build (bundler, not tsc), so API-shape drifts from the Pi SDK uplift slipped through CI (see the Pi-subprocess tool-registration fix below for the concrete regression that escaped). Added a dedicated tsc --noEmit -p tsconfig.typecheck.json step, wired into typecheck:all, plus ambient shims for turndown/pdfjs-dist/bash-parser so the new typecheck doesn't need @types packages. Fixed the cascade of pre-existing type drifts it surfaced (PiCredential vs AuthCredential, agent_end event shape, sdkTurnAnchor enrichment, CustomModelEntry at the dynamic-register call site, initConfig nullability in queryLlm closures, and an incorrect generic in web-fetch's result helper).
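The caller → workspace → global precedence now used for thinkingLevel can be sketched as a one-line resolver. The level values come from the spawn_session entry above; the function and parameter names are illustrative.

```typescript
export type ThinkingLevel = "off" | "low" | "medium" | "high" | "xhigh" | "max";

// Caller-supplied value wins, then the workspace default, then the global default.
export function resolveThinkingLevel(
  caller: ThinkingLevel | undefined,
  workspaceDefault: ThinkingLevel | undefined,
  globalDefault: ThinkingLevel,
): ThinkingLevel {
  return caller ?? workspaceDefault ?? globalDefault;
}
```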

Bug Fixes

  • WebUI "Add New Label" no longer launches the desktop app — Typing #<new-label> in the WebUI chat input and clicking "Add New Label" previously opened a popover that, on submit, fired a craftagents://action/new-session deep link. The browser resolved that scheme by launching the Electron desktop app instead of creating the label in the browser. Root cause: the chat-input call site cherry-picked fields from the EditPopover config and dropped inlineExecution: true, falling back to the legacy same-window deep-link path (which happened to work inside Electron but broke across the WebUI ↔ OS boundary). Switched to a full config spread, matching how AppShell already invokes the same popover.
  • Attachments no longer leak between sessions (for real this time) — Attaching a file, switching sessions without sending, and switching back now restores the attachment in the original session across all four attach paths (file picker, OS drag-drop, clipboard paste, web drag). The first-pass fix assumed every attachment had a real OS path — true for Finder drag/OS picker, false for paste/web-drag where Chromium synthesises a File from a Blob with no disk origin — so draft refs fell back to filename-only values and failed to re-read on hydrate. The new persistence layer is hybrid: file-picker and OS-drag capture the absolute path via webUtils.getPathForFile (Electron 32+) and re-read on hydrate through a dedicated file:readUserAttachment RPC; paste and web-drag persist bytes inline in the draft (20 MB per-attachment cap — huge pastes log a warn and drop from the draft). Old 0.8.11-format drafts are rejected on load, so attachments saved by the previous broken release disappear once after upgrade instead of silently haunting the composer. Fixes #572.
  • Custom URL scheme links now open the right app — Clicking obsidian://, vscode://, zed://, notion://, slack://, and similar links in chat messages now dispatches to the OS protocol handler instead of being blocked (desktop) or rewritten to https://<host>/obsidian://... (WebUI). URL handling switched from a tight allowlist (http/https/mailto/craftdocs) to a blocklist of known-dangerous schemes (javascript:, data:, vbscript:, blob:, file:). The WebUI and Viewer now use an anchor-click fallback for non-http schemes so Chrome routes through the external-protocol dispatcher reliably. Fixes #590.
  • /compact no longer times out prematurely on GPT sessions — Manual compaction (including "Accept & compact" on a submitted plan) against Pi-backed OpenAI models failed after 60s because the subprocess RPC didn't leave room for GPT-5.4's long summary responses on large conversations. Bumped the timeout to 5 min — truly hung subprocesses are still caught by the stdio death watchdog. Claude sessions were unaffected (they use the SDK's native compact channel).
  • Pi subprocess tool registration restored — Pi SDK 0.70.0 quietly reshaped CreateAgentSessionOptions.tools from an array of tool objects into a string[] name allowlist. The subprocess kept passing AgentTool[], so at runtime allowedToolNames = new Set(objects) and .has(name) returned false for every lookup — every custom tool got filtered out by _refreshToolRegistry's allowlist guard, leaving the LLM with only the built-in [read, bash, edit, write]. Fix now routes tool objects through customTools: ToolDefinition[] plus a matching tools: string[] allowlist that includes every registered name, drops the private _baseToolsOverride + _buildRuntime defense-in-depth hack, and restores grep/find/ls that were last bundled pre-0.68 in the monolithic codingTools array. A regression test now locks the shape contract (every customTools[].name must appear in the tools allowlist) so the next SDK uplift cannot silently drop tools again.
  • Pi call_llm honors the requested model — queryLlm routed call_llm through the mini_completion RPC, which only carried prompt. Every call_llm silently ran on the connection's mini model (often the stale pi/gpt-5.1-codex-mini), ignoring both request.model and request.systemPrompt. Introduced a new llm_query RPC that carries the full LLMQueryRequest; the subprocess delegates verbatim to the model-aware queryLlm. PiAgent.queryLlm tracks pending queries in a map with cleanup on result / generic error / subprocess exit, and the event-adapter call_llm override now only fills in args.model when absent (never overwrites explicit values). A round-trip invariant test guards the full request envelope byte-for-byte. Fixes #596.
  • Pi mini completions pick a provider-appropriate model — handleMiniCompletion was failing with "No API key found for openai." for users on ChatGPT Plus / openai-codex / google / github-copilot whenever the connection had no explicit miniModel. The provider-check fallback in queryLlm always assigned getDefaultSummarizationModel() (Haiku), which only resolves under anthropic auth — the Pi SDK 0.70.0 default then silently surfaced as an OpenAI model and the misleading auth error bubbled up. New pickProviderAppropriateMiniModel helper walks PI_PREFERRED_DEFAULTS[authProvider] for a resolvable, non-denied ...
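The fallback walk that pickProviderAppropriateMiniModel performs (the entry above is truncated in these notes) can be sketched as: take the first preferred model that resolves in the catalog and is not denied. All shapes here are assumptions.

```typescript
// Assumed sketch: walk the provider's preferred-defaults list and return the
// first model that both resolves in the catalog and is not on a deny list.
export function pickProviderMiniModel(
  preferred: string[],
  catalog: Set<string>,
  denied: Set<string>,
): string | undefined {
  for (const model of preferred) {
    if (catalog.has(model) && !denied.has(model)) return model;
  }
  return undefined;
}
```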

v0.8.11

22 Apr 11:27


v0.8.11 — Messaging, Prompts, Chat & LLM Bug Fixes

Bug Fixes

  • WhatsApp selfChatMode now gates inbound symmetrically — classifyInbound previously filtered only outbound (fromMe=true) traffic, so a contact DM or group message still routed to the bound session even with self-chat mode on — the agent could reply on the user's behalf in non-self-chats. Inbound from non-self-chats is now dropped when selfChatMode is on, with a new non_self_chat_inbound skip reason for log clarity. Back-compat preserved: with selfChatMode=false, inbound from contacts still emits as before.
  • call_llm recovers partial output instead of failing on SDK max turns — queryLlm's SDK maxTurns was 1, so reasoning-model outputs that naturally span multiple SDK turns (even without tools) failed outright with "Reached maximum number of turns (1)". Bumped to 10, and the consumer is now defensive: captures partial assistant text + warning when the SDK yields an error-result or throws mid-stream instead of returning a bare failure. call_llm renders [Partial result — …] so callers see the signal. Added [queryLlm] debug logs (subtype, num_turns, stop_reason, errors) to close the observability gap. (Fixes lukilabs/craft-agents-oss#544)
  • Pi backend respects includeCoAuthoredBy: false — Pi sessions called getSystemPrompt() without the includeCoAuthoredBy argument, so it silently defaulted to true and rendered the Git Conventions block even when the user had disabled co-author attribution. Mirrors the earlier fix that only landed on the Claude path. getSystemPrompt also gained a defensive fallback: if a caller omits the arg, it resolves to the persisted getCoAuthorPreference() value instead of hard-coded true. (Fixes lukilabs/craft-agents-oss#576)
  • Follow-up quote sent to the agent in full — Large follow-up annotation quotes were silently truncated at ~280 characters before reaching the agent, because the shared normalizeExcerptForMessage helper's tooltip-tuned default cap applied on the agent-facing path. Agent path now uses normalizeFollowUpText (whitespace-collapse only, no length cap); the chip helper was renamed to truncateForChipTooltip and requires an explicit maxLength. A dead 140-char noteLabel pre-truncation was also removed. (Fixes lukilabs/craft-agents-oss#580)
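The split of responsibilities between the two helpers named above can be sketched as follows. Only the names, the whitespace-collapse-only behavior, and the explicit maxLength requirement come from the notes; the bodies are assumptions.

```typescript
// Agent-facing path: collapse whitespace only, never cap length.
export function normalizeFollowUpText(text: string): string {
  return text.replace(/\s+/g, " ").trim();
}

// UI chip path: an explicit cap is now required at every call site.
export function truncateForChipTooltip(text: string, maxLength: number): string {
  const normalized = normalizeFollowUpText(text);
  return normalized.length <= maxLength
    ? normalized
    : normalized.slice(0, maxLength - 1) + "…";
}
```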

Improvements

  • Chat follow-up helpers extracted and unit-tested — formatFollowUpSection, truncateForChipTooltip, and normalizeFollowUpsMarkdown moved out of ChatDisplay.tsx into a sibling ChatDisplay.follow-ups.ts module with 13 targeted tests covering the >1000-char quote regression, canonical message shape, multi-follow-up numbering, the round-trip parser, and chip-tooltip behavior. Added a narrow @craft-agent/ui/annotations/follow-up-state subpath export so tests can import pure helpers without dragging in pdfjs + Vite-specific imports via the UI barrel.
  • queryLlm SDK-stream consumer extracted for testability — Moved into claude-llm-query.ts with 12 new tests covering thrown/yielded error paths, structured output, and the call_llm render block.
  • Pre-commit typecheck now auto-discovers TS workspaces — scripts/typecheck-staged.sh previously hardcoded an 8-workspace allowlist, so staged TS changes in newer or renamed workspaces (messaging-*, apps/cli, apps/webui, etc.) silently skipped the hook. It now walks apps/<X>/ and packages/<X>/ and typechecks any workspace with a tsconfig.json, preferring bun run typecheck when defined. Surfaced and fixed four pre-existing TS errors that the old allowlist was hiding (apps/cli webhook typing, apps/marketing lib bump to ES2023 + card.codeHtml rename + indexed-access guards, apps/webui tsconfig alignment with Electron).
  • electron:dev now builds the WhatsApp worker on startup — scripts/electron-dev.ts built the MCP servers and Pi agent server but not the WhatsApp worker, so fresh checkouts hit MODULE_NOT_FOUND on dist/worker.cjs the first time the user tried to connect WhatsApp — confusing because the Electron adapter reports it as a worker exit, not a missing-build error. Dev setup now shells out to the canonical scripts/build-wa-worker.ts (~70 ms) so the worker bundle stays in sync on every start.
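The auto-discovery rule (any apps/<X>/ or packages/<X>/ directory with a tsconfig.json is a typecheck target) can be sketched in TypeScript. The real logic lives in scripts/typecheck-staged.sh; this helper name is hypothetical.

```typescript
import * as fs from "node:fs";
import * as path from "node:path";

// Hypothetical helper mirroring the shell script's discovery rule: a
// workspace qualifies if apps/<X>/ or packages/<X>/ contains a tsconfig.json.
export function discoverTypecheckWorkspaces(root: string): string[] {
  const found: string[] = [];
  for (const group of ["apps", "packages"]) {
    const dir = path.join(root, group);
    if (!fs.existsSync(dir)) continue;
    for (const entry of fs.readdirSync(dir)) {
      if (fs.existsSync(path.join(dir, entry, "tsconfig.json"))) {
        found.push(`${group}/${entry}`);
      }
    }
  }
  return found.sort();
}
```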

Breaking Changes

  • None. No schema, IPC, or wire changes.

v0.8.10

21 Apr 13:13


v0.8.10 — Messaging Gateway, Opus 4.6 Restored & Extended-Context Fix

Features

  • Messaging Gateway — Telegram & WhatsApp — New workspace-scoped gateway lets you bind external chats (Telegram DMs, WhatsApp contacts) to Craft Agent sessions. Inbound messages drive the agent, and agent output is rendered back into the chat. Three response modes: progress (default — one evolving bubble), streaming (live edits), final_only (silent until complete). Telegram supports photo/document/voice/video/audio attachments (20 MB cap). WhatsApp runs in a subprocess worker built on Baileys, keeping its global state isolated from the main process. Includes a new Settings → Messaging V2 page with per-platform tiles, pairing dialogs, QR/code connect UI, and status atoms.
  • Opus 4.6 restored as a selectable model — Claude Opus 4.6 is back in the model picker alongside 4.7. Users on Tier 1–3 who hit "Invalid Request" on large projects with 4.7 can fall back to 4.6 without editing configs manually.
  • New xhigh thinking effort level — Adds an intermediate thinking effort between high and max.
  • Pi-agent stderr surfaced on LLM connection-test failures — Failed LLM connection tests now expose the underlying pi-agent stderr so misconfigurations (missing binary, auth issues) are debuggable without digging through logs.

Improvements

  • Thinking level enum derived from a single tuple — ThinkingLevel and its Zod schema now derive from one source-of-truth tuple, preventing drift between the type and the runtime validator.
  • Messaging UI polish — Toned-down dialog hint styling and primary connect buttons; renamed the WhatsApp "Forget Device" menu item to "Disconnect" for clarity.
  • WhatsApp self-chat polish — Self-chat replies are prefixed with 🤖 so the bot's messages are visually distinct from your own; LID (long-ID) contacts are now supported; build provenance is embedded so we can verify the bundled worker at runtime.
  • Docker build robustness — Raised Node heap to 4 GB for the webui Vite build to avoid OOM on slim CI images.
  • WhatsApp worker CI verification — The WhatsApp worker bundle is now built in CI and verified in release artifacts so broken bundles can't ship.
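The single-tuple derivation can be sketched as below. The level names come from elsewhere in these notes; deriving both the compile-time type and a runtime check from one as const tuple is what prevents drift. The real code derives a Zod schema the same way; a plain type guard stands in for it here.

```typescript
// One source-of-truth tuple; the type and validator both derive from it.
export const THINKING_LEVELS = ["off", "low", "medium", "high", "xhigh", "max"] as const;
export type ThinkingLevel = (typeof THINKING_LEVELS)[number];

// Runtime validator derived from the same tuple, so the two cannot diverge.
export function isThinkingLevel(v: unknown): v is ThinkingLevel {
  return typeof v === "string" && (THINKING_LEVELS as readonly string[]).includes(v);
}
```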

Bug Fixes

  • Extended Context (1M) is now opt-in — enable1MContext previously defaulted to true, causing 400 "Invalid Request" errors on large projects for direct-API users on Tier 1–3 (the context-1m-2025-08-07 beta requires Tier 4+). New installs default to off; users opt in via AI Settings → Performance → Extended Context (1M). Existing persisted values are preserved. The invalid_request error path now specializes on 1M-context hints and shows a "1M Context Not Available" message with a Settings action instead of the misleading "remove attachments" fallback. (Fixes #567)
  • spawn_session now expands ~ in workingDirectory — Previously the tilde was passed through literally to the child SDK, producing a misleading "cli.js not found" error. (Fixes #575)
  • set_session_labels now accepts valued labels (id::value) — The resolver was comparing each input as an atomic ID, so "priority::3" or "parent-task::TASK-123" always fell through to "unknown label" even when the base label was configured with a valueType. Valued inputs are now parsed before matching; values are also checked against the declared type (number, date, string), and the rejection message explains the exact reason per-entry. (Fixes #566)
  • Google default models reordered — Connection test now picks a stable default model instead of one that may be rate-limited or deprecated.
  • Pi-agent codex-mini refusal recovery — pi-agent-server now recovers gracefully when codex-mini refuses a request on ChatGPT-auth Codex, instead of leaving the session in a broken state.
  • Annotations: follow-up island close deferred past enter-animation grace — Fixes a race where the follow-up island closed before its enter animation completed, causing visual jank.
  • Opus 4.6 restore wire format — Opus 4.6 is now pushed as a ModelDefinition object (not a bare string) so the connection-list consumers type-check correctly.
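The valued-label resolution above can be sketched as two small helpers: split on :: before matching, then check the value against the label's declared type. Names and bodies are assumptions; the input shapes (priority::3, value types number/date/string) come from the entry itself.

```typescript
// Assumed sketch: parse a possibly-valued label input into base id + value.
export function parseValuedLabel(input: string): { id: string; value?: string } {
  const sep = input.indexOf("::");
  if (sep === -1) return { id: input };
  return { id: input.slice(0, sep), value: input.slice(sep + 2) };
}

// Check a parsed value against the label's declared valueType.
export function valueMatchesType(
  value: string,
  valueType: "number" | "date" | "string",
): boolean {
  switch (valueType) {
    case "number":
      return value.trim() !== "" && !Number.isNaN(Number(value));
    case "date":
      return !Number.isNaN(Date.parse(value));
    case "string":
      return true;
  }
}
```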

Breaking Changes

  • None.

v0.8.9

16 Apr 17:33


v0.8.9 — Opus 4.7 Default

Features

  • Claude Opus 4.7 as the new default — Anthropic released Claude Opus 4.7 on April 16, 2026, and it is now the default Opus model across the app. Existing user configs are automatically migrated from 4.6 to 4.7 on startup. Opus 4.6 remains available, and Bedrock entries for it are retained for backward compatibility.

Improvements

  • Claude Agent SDK upgraded to 0.2.111 — Brings native Opus 4.7 support plus new SDK capabilities: mcp_set_servers with per-tool permission_policy, the public startup() / WarmQuery API, and a new system/memory_recall event. Includes upstream security updates (@anthropic-ai/sdk bumped to ^0.81.0).

Bug Fixes

  • None.

Breaking Changes

  • None.

v0.8.8

16 Apr 12:34


v0.8.8 — Local Model Fixes, Retry Button & Inter-Session Messaging

Features

  • Inter-session messaging — New send_agent_message tool enables sessions to send messages to other active sessions, allowing cross-session coordination and agent-to-agent workflows.

Improvements

  • Chinese (zh-Hans) translations — Updated and corrected Simplified Chinese translations across the UI.
  • Local model documentation — Clarified the /v1 path requirement for local model endpoints.

Bug Fixes

  • Local model setup simplified — Local model setup no longer asks for an API key when one isn't needed; a placeholder key is passed automatically for endpoints that don't require authentication. Model display in the selector is also fixed.
  • Retry button — The retry button now correctly resends the last user message and properly manages chat input focus.
  • Duplicate ConfigWatcher on headless server — Fixed duplicate recursive ConfigWatchers on headless Bun/Linux servers that could cause intermittent hangs after session activity. (Fixes #538)
  • Thinking level handler — The SET_DEFAULT_THINKING_LEVEL IPC handler now correctly returns a success response.

Breaking Changes

  • None.

v0.8.7

14 Apr 14:36


v0.8.7 — Localization, Bedrock Fixes & API Token Refresh

Features

  • Hungarian, German, and Polish translations — Three new languages added with full coverage of UI strings across the shell, session info popover, files section, rich text input, and island components. Polish includes proper plural form support.
  • API token refresh endpoint — API-type sources can now specify an optional renewEndpoint for automatic token refresh without requiring full OAuth. Useful for APIs that issue short-lived tokens with a dedicated renewal mechanism.

Improvements

  • Raw body support for API sources — API source proxy now correctly handles _rawBody and _contentType parameters, enabling sources that need to send non-JSON payloads (e.g., form-encoded, XML).

Bug Fixes

  • Bedrock model defaults and region awareness — Corrected the default model list for Bedrock connections and added region-aware inference profile resolution, fixing issues where initial model selections failed with "model identifier is invalid" errors. Bare Claude model IDs without the required us. inference profile prefix are now filtered out automatically. (Fixes #536, partially addresses #528)
  • Model dropdown scrolling — Dropdown sub-menus (e.g., model selector) are now scrollable when the list exceeds the viewport height. (Fixes #527)
  • Workspace file access — Files under the workspace working directory can now be opened correctly, fixing "cannot open code file" errors. (Fixes #526)
  • Server lock released on quit — The server lock file is now properly released when the app quits, with hardened stale lock detection to prevent "another instance running" false positives on crash recovery.
  • Stale reconnect session refresh — Moved the stale reconnect session refresh into a transactional atom action, preventing race conditions that could cause message loss after sleep/wake reconnection.
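The bare-id filtering from the Bedrock fix above can be sketched as a one-line predicate. The helper name is hypothetical; only the requirement for a regional inference-profile prefix such as us. comes from the notes.

```typescript
// Hypothetical helper: drop bare Claude model ids that lack a regional
// inference-profile prefix (e.g. "us."), since Bedrock rejects them with
// "model identifier is invalid".
export function filterBedrockModelIds(ids: string[]): string[] {
  return ids.filter((id) => !id.startsWith("anthropic.claude"));
}
```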

Breaking Changes

  • None.

v0.8.6

11 Apr 07:10


v0.8.6 — Chunked Session Transfers & Custom Endpoint Image Support

Features

  • Chunked session transfers — Large sessions can now be transferred between workspaces via chunked WebSocket RPC with base64 encoding, SHA-256 checksums, and per-chunk retry. Avoids WebSocket message size limits that previously caused transfers to fail silently. Transfer progress is shown as a purple LED border animation on the Send button, with normalized progress tracking across multi-session batch operations.
  • Image input for custom endpoints — Custom endpoint models (e.g. Gemma 4 via Ollama) can now receive image attachments. Previously, images were silently discarded because custom models hardcoded text-only input. Important: custom endpoints remain text-only by default — you must set supportsImages: true in the connection config to enable image input. See the custom endpoint documentation for per-model and whole-endpoint examples. (Fixes #525)
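A hypothetical custom-endpoint connection fragment opting a model into image input might look like this. Only supportsImages: true is named in the notes (and the /v1 path requirement appears in the v0.8.8 notes above); the remaining keys and values are illustrative.

```json
{
  "name": "local-ollama",
  "baseUrl": "http://localhost:11434/v1",
  "models": [
    { "id": "gemma-4", "supportsImages": true }
  ]
}
```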

Bug Fixes

  • Session transfer reconnect hardened — Improved error recovery during workspace transfers with independent base64 chunk decoding, larger chunk sizes (2MB), and robust reconnect handling.
  • Transfer used wrong workspace ID — Session export now correctly uses the source workspace ID instead of the target, fixing incorrect routing during transfers.
  • Credential prompts in inline chat — Credential and permission prompts now work correctly inside the EditPopover inline chat.
  • Message loss after sleep/wake — Additional fix for messages disappearing during stale WebSocket reconnect after PC sleep/wake.
  • Send button layout — Restored proper spacing between Cancel and Send buttons to match dialog footer patterns.

Breaking Changes

  • None.

v0.8.5

09 Apr 22:53


v0.8.5 — Multi-Language Support & Pi SDK Upgrade

Features

  • Multi-language support (i18n) — The entire UI is now localized. Ships with English, Spanish (es), Simplified Chinese (zh-Hans), and Japanese (ja) — over 1,050 translated strings covering every page, menu, toast, tooltip, and dialog. Switch languages in Settings > Appearance. Session titles and AI responses also follow the selected language.
  • Pi SDK upgrade (0.56.2 → 0.66.1) — Major upgrade spanning multiple upstream releases. Models like GLM 5, GLM 5.1, and Minimax 2.7 should now work thanks to upstream fixes. Also picks up: Bedrock throttling no longer misidentified as context overflow, Anthropic HTTP 413 detection for compaction/retry, Z.ai tool streaming support, OpenAI streaming fixes, and bash output truncation fix. (Fixes #503, addresses #513)

Improvements

  • Canonical locale registry — Adding a new language is now a single-file change. The registry auto-derives all language codes, display names, i18n resources, and date-fns locales.
  • Unified language setting — Removed the duplicate free-text "Language" field from Preferences. The Appearance language dropdown is now the single source of truth for both UI language and AI response language.
  • i18n developer tooling — Pre-commit hook catches hardcoded English strings in staged .tsx files. Locale parity test ensures all translations stay in sync. localize-agents skill automates adding new languages.

Known Limitations

  • Headless server responses are in English — When using the WebUI connected to a remote headless server, the UI is fully localized (based on browser language), but the AI agent still responds in English. Session titles are also generated in English. Per-client language support for the headless server will be added in a future release.

Breaking Changes

  • The free-text "Language" field in Preferences has been removed. Use the language dropdown in Settings > Appearance instead. Existing preferences.json files with a language field are safely ignored.

v0.8.4

08 Apr 21:48


v0.8.4 — Generic OAuth for Sources, Send to Workspace & DevTools

Features

  • Generic OAuth for API sources — API-based sources can now use standard OAuth authentication. Includes automatic endpoint discovery via RFC 8414 (OAuth 2.0 Authorization Server Metadata), so you only need to provide the issuer URL and the rest is resolved automatically.
  • Send to Workspace — New action to share sources, skills, and automations across workspaces with a single click. Supports both individual and batch operations for moving multiple resources at once.
  • DevTools in packaged app — Developer Tools are now accessible in production builds via View > Toggle Developer Tools, making it easier to debug issues without a dev build.
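Endpoint auto-discovery from an issuer URL typically means fetching a well-known metadata document. Below is a sketch of the URL derivation only, assuming RFC 8414-style authorization-server metadata with its path-insertion rule for issuers that carry a path component; the helper name is hypothetical.

```typescript
// Hypothetical helper: derive the well-known metadata URL from an issuer.
// Per RFC 8414, an issuer path component is appended after the well-known
// segment rather than before it.
export function authServerMetadataUrl(issuer: string): string {
  const u = new URL(issuer);
  const issuerPath = u.pathname.replace(/\/$/, "");
  return `${u.origin}/.well-known/oauth-authorization-server${issuerPath}`;
}
```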

Improvements

  • OAuth documentation — Added guides for generic OAuth setup and auto-discovery to online docs and the GitHub source guide.

Bug Fixes

  • Session tools unavailable via Pi — Session self-management tools (set_session_labels, set_session_status, etc.) now work correctly when routed through the Pi agent path. (Fixes #511)
  • Screen stays awake permanently — Fixed the screen staying awake even after disabling the Keep Screen Awake setting. (Partially addresses #415)
  • Remote routing payload translation — workspaceId inside nested payload objects is now correctly translated for remote workspace routing.
  • Automations loading on WebUI — Automations are now loaded via server-side RPC instead of direct file reads, fixing failures on remote/Docker setups.
  • Error card buttons broken — Settings and Retry buttons in error cards are now properly wired up and functional.
  • Model tier hints hardcoded to Anthropic — The model selector popover now uses provider-aware tier hints instead of Anthropic-specific aliases, fixing display issues for non-Anthropic providers.
  • OAuth token retrieval for API sources — Fixed getToken not being wired for generic OAuth API sources in SessionManager.
  • OAuth error messages — Improved error message when the api.oauth config block is missing from a source.
  • Linux resource import — ConfigWatcher is now notified after resource imports, fixing Linux fs.watch compatibility where imported resources wouldn't appear until restart.
  • Release notes on Docker/remote — Release notes now fall back to ~/.craft-agent/release-notes/ when the Electron resources path is unavailable.
  • WebUI About page — About dialog now shows the server version, hides the irrelevant "Check Now" button, and "What's New" works correctly on WebUI.

Breaking Changes

  • None.