# cnvs.app

## ⚠️ Quick start for AI agents (read this first)

**Board URLs contain a `#` fragment which is CLIENT-SIDE routing only — the server never sees it.** If you're using curl, a headless browser, or any HTTP client: `https://cnvs.app/#{boardId}` looks like `/` to the server and returns the app shell, NOT the board. This trips up most agents on their first try.

Correct flow:

1. **Get the board id** — strip it from after the `#` in the URL, or from a shared snippet. Ids look like `ad53466d-e55e-4886-924d-5dcf861c25c1`.
2. **Read the board** → `GET /json/{boardId}` (JSON snapshot, ETag-aware) **AND** `GET /svg-preview/{boardId}` (SVG render — do both on strokes / images, because JSON is numbers and shapes live in the preview).
3. **React to live changes** → install the [`cnvs-whiteboard`](https://cnvs.app/cnvs-whiteboard/SKILL.md) + [`mcp-listen`](https://cnvs.app/mcp-listen/SKILL.md) Agent Skills (they push MCP notifications as in-chat wake-ups). REST fallback without the skills: `GET /api/boards/{boardId}/wait?timeout_ms=25000` long-poll.
4. **Write** → `POST /api/boards/{boardId}/{texts|links|strokes|images}` (and `/move`, `DELETE`). REST is universal — every MCP tool has a 1:1 REST mirror, and REST writes are simpler and don't require an MCP client.
5. **Create a fresh board** → `POST /api/boards` → returns `{id}`.

If you want the canonical collaborator guide as an Agent Skill, read [`/cnvs-whiteboard/SKILL.md`](https://cnvs.app/cnvs-whiteboard/SKILL.md).

---

> cnvs.app is a free, no-signup, real-time collaborative whiteboard in the browser. Open a URL, share it, draw and write together instantly. No accounts, no teams, no paywall. Ships a Model Context Protocol (MCP) endpoint so AI assistants can collaborate on the same live board as human users.

cnvs.app is built for the simplest possible use case: two or more people on a call who want to sketch, write or brainstorm on the same surface. You open the site, a board is created, you copy the URL, you share it.
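The quick-start flow (strip the board id from the `#` fragment, then build the two read URLs) can be sketched in Python; the helper names here are illustrative, not an official SDK:

```python
from urllib.parse import urlparse

BASE = "https://cnvs.app"

def board_id_from_url(url: str) -> str:
    """Take the board id from the URL's #fragment.

    The fragment is client-side routing only: the server never
    receives it, so it must be extracted before any HTTP call.
    """
    frag = urlparse(url).fragment
    if not frag:
        raise ValueError("URL has no #fragment; the server would just serve the app shell")
    return frag

def read_endpoints(board_id: str) -> dict:
    """Build the two read URLs: JSON snapshot and SVG preview."""
    return {
        "json": f"{BASE}/json/{board_id}",
        "svg": f"{BASE}/svg-preview/{board_id}",
    }

bid = board_id_from_url("https://cnvs.app/#ad53466d-e55e-4886-924d-5dcf861c25c1")
urls = read_endpoints(bid)
```

Fetching `urls["json"]` with a previously seen `ETag` in `If-None-Match` avoids re-downloading unchanged boards.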
Anyone who opens the URL joins the same canvas live. There is no signup, no login, no team setup, no board limit, no session token — just a URL and an infinite sheet of paper.

The project is positioned as a minimalist alternative to Miro, FigJam, MURAL, Excalidraw and tldraw. Miro / FigJam / MURAL require accounts and paid plans for team use. Excalidraw's free tier is limited to a single scene. tldraw.com requires every participant to sign in before they can join a shared room. cnvs.app requires none of that.

The stack is a Cloudflare Worker with Durable Objects for WebSocket-based real-time sync. Data is deleted permanently when a board is erased or after 30 days of inactivity. No AI training on user content, no activity logging, no ads, no trackers.

## Pages

- [Home](https://cnvs.app/): Opens a new board and gives you a shareable URL immediately.
- [About](https://cnvs.app/about): Full product description, feature list, competitor comparison, pricing, FAQ, MCP docs.

## Colors — named only (no custom hex from API / MCP)

Think of it like clicking an ink-picker button in the UI: you pick a *name*, not a hex value. The REST API and MCP tools accept exactly five color names (case-insensitive); anything else — including a valid hex like `#ff3b30` — silently clamps to `auto` / theme-aware ink. This guarantees AI writes stay visible on both themes and look like something a human-UI user could produce. **Don't push custom hex codes** through `color`; they're normalised away.

Accepted values on `texts.color`, `strokes.color`, and the move / link endpoints:

| input (case-insensitive) | resolves to | meaning |
|---|---|---|
| `auto` / `""` / `black` / omitted / null | `var(--text-color)` | theme-aware ink (default). `black` is an alias for auto so it adapts to dark mode instead of painting black-on-black. |
| `red` | `#ff3b30` | red, for emphasis |
| `blue` | `#007aff` | blue |
| `green` | `#34c759` | green |
| anything else (custom hex, named HTML colors, etc.) | `var(--text-color)` | silently clamped — treat as a no-op |

Same rule across every transport — REST, MCP and the browser WebSocket all accept only these five names. Older cached browser clients that still send hex silently clamp to auto on their next draw until they reload.

## Coordinate system + vocabulary

- **Infinite 2D canvas in CSS pixels.** `+x` goes right, `+y` goes DOWN (standard SVG / web, NOT mathematical). Origin is `(0, 0)` top-left; there are no negative caps, but a typical board sits inside roughly `(0, 0) - (2000, 2000)` before panning starts feeling stretched. Spawn new items anywhere in that range; use `get_preview` / `GET /svg-preview/{boardId}` to find empty space before placing.
- **Dimensions.** Text nodes auto-size to content between ~200–400 px wide and ~40–80 px per short line; pass an explicit `width` (160–4096) to force wrapping. Images render at the `width` / `height` you post — scale them to look good next to text (don't post a 2000×2000 image next to 40-px stickers). Strokes don't have a width/height; they're a freehand list of `[x, y]` world points.
- **Gaps.** Leave 20–40 px of padding between adjacent nodes — otherwise the browser UI's drag handles overlap.
- **`lines` vs `strokes`.** The JSON snapshot at `/json/{boardId}` returns `lines[]` (matching the DB table `wbrd_lines`), while the action endpoint is `POST /api/boards/{boardId}/strokes` (matching the user action "draw a stroke"). Both names refer to the same entity. The REST path also accepts `/api/boards/{boardId}/lines` as an alias if you prefer the JSON-key spelling.

## Recommended AI workflow (listen via MCP, act via REST)

For AI agents that want to be a *live collaborator* on a board (reacting to human edits in real time, not just one-shot "go do X"), the empirically fastest loop is **hybrid — MCP for listening, REST for writing**:

1. **Subscribe via MCP** to `cnvs://board/{boardId}/state.json`. The server pushes `notifications/resources/updated` over SSE within ~3 s of every edit.
   This is the only way to get a push (there is no REST webhook). In Claude Code the ready-made [`mcp-listen`](https://cnvs.app/mcp-listen/SKILL.md) skill (wired up by the primary [`cnvs-whiteboard`](https://cnvs.app/cnvs-whiteboard/SKILL.md) skill) wraps this into `Monitor`, so each push becomes an in-chat notification that re-invokes the model.
2. **React via REST** with `POST /api/boards/{boardId}/{texts,strokes,images,links}` (or `/move`, `DELETE`). Simpler to compose than threading through an MCP tool call, no per-request session management, same validator/broadcast path server-side. Every mutation is still visible to all MCP subscribers ~3 s later.
3. **Filter self-echoes**: the skill's `--ignore-author-prefix "ai:"` suppresses notifications caused by any `ai:*` author (including your own writes), so your REST mutations don't wake your own listener.

**Always fetch `/svg-preview/{boardId}` (not just `/json/{boardId}`) when the triggering item is a `line` or an `image`.** The JSON snapshot carries numbers (point arrays, bounding boxes, image dimensions); a multimodal model reading those numbers can tell "47-point red stroke in bbox (323,1771)-(585,2066)" but has no idea whether it's a **heart**, a signature, a lightning bolt or an illegible scribble. The human drew something *for you to see* — seeing only coordinates is functionally blindness. One extra `GET /svg-preview/{boardId}` per non-trivial edit is cheap and closes this gap. For `kind: "text"` without Mermaid, the JSON `content` is already enough.

Why hybrid beats pure MCP:

- **Universality**: REST works for any client with outbound HTTP — no install, no SDK, no configuration. MCP requires installing a client (Claude Desktop, Claude Code, Cline, Goose, custom agent, …) and its session management. Every AI agent that can shell out or `fetch()` can use the REST surface.
- **Listening**: MCP push is the only option for real-time — REST has no webhook, only the `GET /api/boards/{boardId}/wait` long-poll (useful as a fallback when MCP isn't available in the client).
- **Writing**: REST avoids the JSON-RPC envelope, session-header bookkeeping and tool-call round-trip that the MCP SDK adds. For an agent constructing payloads from natural language, `curl -X POST ... -d '{json}'` is cognitively cheaper and fully stateless.
- **No tool-call slots wasted**: MCP tool calls consume one of the model's per-turn tool-call slots. REST via shell doesn't.

You CAN use pure MCP (tool calls for writes) and it works, but it's slower end-to-end per cycle and requires MCP-capable client tooling. The REST endpoints mirror every MCP tool 1:1 — `add_text` ↔ `POST /texts`, `move` ↔ `POST /{kind}/{id}/move`, `erase` ↔ `DELETE /{kind}/{id}`, etc. Pick the transport that fits your agent's runtime; the HYBRID pattern (MCP-listen + REST-write) is strictly best when both are available.

## HTTP API (for AI agents that cannot speak MCP)

If your runtime can't load MCP servers, cnvs.app exposes HTTP endpoints that cover reads AND mutations. Every mutation goes through the same server-side validator as MCP and WebSocket mutations and is broadcast live to all connected browsers within ~100 ms.

### Reads

- `GET /json/{boardId}` — full structured JSON snapshot (same shape as MCP `get_board`; includes Mermaid source inside text `content`; each text node carries a `kind: "text" | "link"` field). Also supports `?board=`. **Every response carries a weak `ETag`** — send it back as `If-None-Match` on follow-up polls and unchanged boards respond `304 Not Modified` (zero body, no rate-limit burn).
- `HEAD /json/{boardId}` — same headers as `GET` (including `ETag`), zero body. A cheap existence / freshness probe.
- `GET /svg-preview/{boardId}` — compact schematic SVG preview (same as MCP `get_preview`; a few kB of plain SVG, consumable as an image by multimodal LLMs). Also supports `?board=`.
- `GET /api/boards/{boardId}` — browser-shape snapshot (images include the full base64 payload — heavier than `/json`).
- `GET /api/boards/{boardId}/wait?timeout_ms=25000` — **long-poll**: blocks until the next debounced edit burst, or the timeout. The REST equivalent of MCP `wait_for_update`. Returns `{updated, timedOut, etag}`. Use this instead of polling `/json` in a tight loop — zero wasted requests.

### Boards

- `POST /api/boards` — create a new board, returns `{id}`.
- `DELETE /api/boards/{boardId}` — soft-delete the board (the ID is tombstoned for 30 days).

### Mutations (mirror MCP tools)

- `POST /api/boards/{boardId}/texts` — create / update a text node. Body: `{ id?, x, y, content, color?, width?, postit?, author? }`. Content supports cnvs markup and Mermaid (one diagram per node — the entire content must be a single ```mermaid fenced block). Response includes `kind: "text"`.
- `POST /api/boards/{boardId}/strokes` — draw a freehand stroke. Body: `{ id?, points, color?, author? }`. `points` accepts nested `[[x,y],...]`, flat `[x1,y1,...]`, or a JSON string of either.
- `POST /api/boards/{boardId}/images` — paste an image. Body: `{ id?, x, y, width, height, dataUrl, thumbDataUrl?, author? }`. `dataUrl` MUST be a `data:image/(png|jpeg|gif|webp|svg+xml);base64,...` string, ≤ ~900 kB. Hosted URLs are NOT accepted — fetch the bytes yourself and re-encode. **Strongly recommended**: pre-process images on the client by downscaling to ≤ 1600 px on the longest side and re-encoding as WebP at quality 0.85 (the browser client already does this automatically; non-browser clients should match it to stay well under the 10 MB per-board total). `thumbDataUrl` must be JPEG/PNG/WebP (no SVG), ≤ 8 kB. Validation errors return `400 {code:"invalid_payload", field, reason}` naming exactly which part of the payload failed.
- `POST /api/boards/{boardId}/links` — drop a clickable URL capsule. Body: `{ id?, x, y, url, author? }`. Response includes `kind: "link"`.
- `POST /api/boards/{boardId}/{kind}/{itemId}/move` — move ANY item without retyping its content. `kind` ∈ `texts | links | strokes | images`. Body: `{ x, y }`. Strokes: the whole point array is translated so the bbox top-left lands at `(x, y)`. Images: no `dataUrl` re-upload needed — the server already has the bytes.
- `DELETE /api/boards/{boardId}/{kind}/{itemId}` — delete an item. `kind` ∈ `texts | links | strokes | images`. Unknown kinds return `400` (no silent mis-targeting). Get the id from `/json/{boardId}`.

The default author tag for REST mutations is `ai:rest` (vs `ai:claude` for MCP). Override with `author: "ai:
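Putting the color rule and the `texts` endpoint together, a minimal offline sketch of building a valid request body; `clamp_color` and `text_payload` are illustrative helper names (the server does this normalisation itself, a client only needs it to predict the result):

```python
import json

# The only names the API accepts besides the auto/default group.
ALLOWED_COLORS = {"red", "blue", "green"}

def clamp_color(value):
    """Mirror the server rule: five names, case-insensitive.

    'auto', '', 'black', omitted/None, and anything unrecognised
    (hex codes, HTML color names) all resolve to theme-aware ink.
    """
    if value is None:
        return "auto"
    v = value.strip().lower()
    return v if v in ALLOWED_COLORS else "auto"

def text_payload(x, y, content, color=None, author="ai:rest"):
    """Body for POST /api/boards/{boardId}/texts (required fields only)."""
    return {
        "x": x,
        "y": y,
        "content": content,
        "color": clamp_color(color),
        "author": author,
    }

# A hex code is clamped away, exactly as the server would do.
body = json.dumps(text_payload(120, 240, "hello board", color="#ff3b30"))
```

The same payload can then be sent with any HTTP client, e.g. `curl -X POST https://cnvs.app/api/boards/{boardId}/texts -d "$body"`.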