$ pretable — vol. 2 · no. 1

The fastest data grid for React.
Built for the AI era.

60fps under streaming load. Zero row drift. A deterministic engine designed for live data, agent output, and real-time telemetry — not retrofitted from a batch-era grid.

Read the docs →

MIT licensed · open source

02 · how it works

DOM measuring sucks. We do the math instead.

Wrapped row heights are computed with character-width tables and font metrics — pure arithmetic. No getBoundingClientRect, no forced reflow, no measure-on-mount. The DOM is touched exactly once per frame, at commit. The five-stage pipeline below is what enforces that discipline.
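The off-DOM claim reduces to simple arithmetic. A minimal sketch of greedy-wrap height planning (hypothetical names and a toy monospace width table; layout-core's real metrics also cover kerning and font fallback):

```typescript
// Hypothetical sketch: estimate a wrapped cell's height without touching the DOM.
// `widths` maps characters to advance widths (px); anything not in the table
// falls back to `default`. Pure arithmetic, so no getBoundingClientRect and
// no forced reflow.

type CharWidths = { default: number; [ch: string]: number };

function estimateRowHeight(
  text: string,
  columnWidth: number, // available width in px
  widths: CharWidths,
  lineHeight: number   // px per wrapped line
): number {
  let lines = 1;
  let x = 0;
  for (const ch of text) {
    const w = widths[ch] ?? widths.default;
    if (x + w > columnWidth) {
      lines += 1; // greedy wrap: glyph starts a new line
      x = w;
    } else {
      x += w;
    }
  }
  return lines * lineHeight;
}

// 10px-wide glyphs in an 80px column: 20 chars wrap onto 3 lines of 16px.
const mono: CharWidths = { default: 10 };
estimateRowHeight("a".repeat(20), 80, mono, 16); // → 48
```

Because nothing here reads layout, thousands of rows can be planned per frame without a single measurement pass.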

  1. Source (stream-adapter) → Row[] | Patch
  2. Engine (grid-core) → Snapshot
  3. Viewport (layout-core + text-core) → RenderPlan
  4. Renderer (renderer-dom) → Element[]
  5. Frame (browser) → 60fps
  1. Source

    Streaming patches and static rows treated identically.

    • Token-by-token patches via SSE, WebSocket, or any async iterable
    • Static Row[] arrays use the same input shape
    • No "streaming mode" toggle — adapters convert both to engine input
    → emits Row[] | Patch (stream-adapter)
  2. Engine

    Pure reducer. Sort, filter, selection, row-id stability.

    • (rows, columns, sort, filter, selection) → Snapshot
    • Deterministic — same inputs always produce the same output, every frame
    • Row-id keys are first-class — selection survives filters, sorts, and live patches
    • Under 3,000 lines. Read it end-to-end in one sitting.
    → emits Snapshot (grid-core)
  3. Viewport

    Row-height plan + virtualization range. Off-DOM measurement.

    • Wrapped row heights computed with character-width tables and font metrics — pure arithmetic
    • No getBoundingClientRect, no forced reflow, no measure-on-mount
    • Virtualization range derived from scroll position + total planned height
    • Off-screen rows excluded from the plan — no phantom DOM
    → emits RenderPlan (layout-core + text-core)
  4. Renderer

    The only stage that touches the DOM.

    • Diffs the previous RenderPlan against the new one
    • Patches affected rows; reuses unchanged DOM nodes
    • Selection, sort indicators, filter chips all data-driven from the snapshot — no imperative state
    → emits Element[] (renderer-dom)
  5. Frame

    RAF coalesces patches per animation frame.

    • 100 to 25,000 patches/sec all collapse to one snapshot per frame
    • Long tasks: zero across the operating envelope
    • Selection, cursor, scroll position never lost mid-frame
    → 60fps (browser)
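The handoffs above can be sketched as types plus the viewport stage's core arithmetic (illustrative names only, not pretable's actual exports; fixed row height for brevity):

```typescript
// Illustrative shapes for the five-stage handoffs; not pretable's real API.

type Row = { id: string; [field: string]: unknown };
type Patch = { add?: Row[]; update?: Row[]; remove?: string[] }; // 01 Source output
type Snapshot = { rows: Row[]; selection: Set<string> };         // 02 Engine output
type RenderPlan = {                                              // 03 Viewport output
  range: { start: number; end: number }; // indices of rows worth rendering
  totalHeight: number;                   // planned height drives the scrollbar
};

// Derive the virtualization range from scroll position + planned height.
function visibleRange(
  scrollTop: number,
  viewportHeight: number,
  rowHeight: number,
  rowCount: number
): { start: number; end: number } {
  const start = Math.max(0, Math.floor(scrollTop / rowHeight));
  const end = Math.min(rowCount, Math.ceil((scrollTop + viewportHeight) / rowHeight));
  return { start, end }; // off-screen rows never enter the plan
}
```

Everything before stage 04 is plain data, which is what lets the renderer touch the DOM exactly once per frame.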
  • Built for agentic apps.

    LLM streams, partial JSON, tool-call traces — bursts of 100 to 25,000 patches/sec all collapse to one snapshot per animation frame. No per-token reflow. Selection survives every patch.

  • Engine is a pure function.

    (rows, columns, sort, filter, selection) → Snapshot. No imperative DOM. Streaming patches and batch arrays hit the same reducer — that's why selection survives every update.

  • RAF batches the stream.

    100 to 25,000 patches/sec all collapse to one snapshot per animation frame. Long tasks: zero across the operating envelope.

  • Telemetry stays off-DOM.

    Render counts, viewport range, planned height — all data emitted by the engine, never read from the DOM. Zero measurement-induced thrash.
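In code, the "pure function" and "selection survives" claims are two sides of the same design: selection is a set of row ids, and sort or filter only reorder or hide rows, never rewrite ids. A toy version (hypothetical signatures, not the grid-core API):

```typescript
// Toy pure reducer: same inputs always produce the same snapshot, and
// id-keyed selection survives sorting and filtering with no index bookkeeping.

type Row = { id: string; value: number };

type EngineInput = {
  rows: Row[];
  sort: "asc" | "desc" | null;
  filter: ((r: Row) => boolean) | null;
  selection: ReadonlySet<string>; // row ids, not row indices
};

function reduce(input: EngineInput) {
  let rows = input.filter ? input.rows.filter(input.filter) : [...input.rows];
  if (input.sort) {
    rows.sort((a, b) =>
      input.sort === "asc" ? a.value - b.value : b.value - a.value
    );
  }
  // Selection is carried through by id lookup, so it cannot drift.
  const selected = rows.filter((r) => input.selection.has(r.id));
  return { rows, selected };
}

const input: EngineInput = {
  rows: [{ id: "a", value: 3 }, { id: "b", value: 1 }, { id: "c", value: 2 }],
  sort: "asc",
  filter: (r) => r.value > 1,
  selection: new Set(["a", "c"]),
};
// "b" is filtered out; "a" and "c" stay selected through sort + filter.
```

Determinism also makes the engine trivially testable: assert on snapshots, not on DOM state.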

Read the source: packages/grid-core, layout-core, text-core, renderer-dom — under 3,000 lines combined.

Receipts, not claims.

  • 0
    blank gaps under scroll
  • 9ms
    frame p95 / wrapped scroll
  • ≤1px
    row-height fidelity
  • OpenAI · Anthropic · SSE
    streaming sources, one import

See them re-run in the bench →

03 · how we compare

How we compare.

Wrapped-text scroll at 3,000 rows on Chromium, against the three most-cited React grids. Pretable matches MUI X at the front, ~1.7× ahead of AG Grid Community and TanStack on raw frame p95 — and is the only adapter here that clears every quality threshold (zero blank gaps, zero anchor shift, ≤1 px row-height drift) at full-grid feature weight. Interactive sort and filter run 2–3.5× faster than nearly every measured comparator on the same dataset. See the methodology →

Columns: pretable (recommended path) · AG Grid (1.7× slower scroll, 3× slower interaction; row-height drift) · TanStack (headless; ~2× slower interaction) · MUI X (scroll-p95 parity; 2× slower interaction)

metric                                                               | pretable | AG Grid | TanStack    | MUI X | budget
frame p95 (ms) — wrapped scroll                                      | 9.07     | 16.7    | 16.7        | 9.14  | ≤ 16
row-height fidelity (px error)                                       | 1        | 2       | 0           | 1     | ≤ 1
blank gaps under scroll                                              | 0        | 1       | 1           | 0     | 0
scroll anchor shift (px)                                             | 0        | 0       | 0           | 0     | ≤ 16
sort latency p95 (ms) — interaction                                  | 17.1     | 58.3    | 34.4        | 35.0  |
filter-metadata latency p95 (ms)                                     | 17.5     | 49.9    | 15.7        | 33.4  |
filter-text latency p95 (ms)                                         | 16.8     | 50.0    | 40.2        | 33.3  |
headless engine + React surface                                      | yes      | n/a     | engine only | n/a   |
streaming pipeline (SSE → partial JSON → batcher → applyTransaction) | yes      | n/a     | n/a         | n/a   |

Re-run the comparison → /bench

05 · streaming, by design

Built streaming-first. Not bolted-on.

Most grids accept streaming through an adapter layered onto a batch-era data model. Pretable's engine treats a 1,000-patch/sec stream and a static 3,000-row array as the same input shape — one reducer, one render path, one selection model. There's no "streaming mode" toggle.

  • One shape, one path

    applyTransaction({ add, update, remove }) is the only entry point into the engine. Static rows hit it via add(); SSE tokens hit it via the same method per chunk. The streaming adapter is a thin batcher around that interface — not a separate code path.

  • Selection survives every patch

    Row-id keys are first-class in the engine. Sort, filter, scroll position, focused row — none of it loses state mid-stream. Drag-select 200 rows during a 25k/sec patch storm and they stay selected.

API reference → /docs/streaming
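The "thin batcher" in the first bullet can be sketched in a few lines. Assumptions: applyTransaction takes the { add, update, remove } shape quoted above, and schedule is injected (in the browser it would be requestAnimationFrame):

```typescript
// Sketch of a frame-budget batcher: coalesce streamed patches and apply them
// as one transaction per frame. `schedule` is injected so this runs anywhere;
// in the browser it would be requestAnimationFrame.

type Patch = { add?: unknown[]; update?: unknown[]; remove?: string[] };

function createBatcher(
  applyTransaction: (tx: Patch) => void,
  schedule: (flush: () => void) => void
) {
  let pending: Patch | null = null;
  return (patch: Patch) => {
    if (!pending) {
      pending = { add: [], update: [], remove: [] };
      schedule(() => {
        const tx = pending!;
        pending = null;
        applyTransaction(tx); // one engine call per frame, however many patches arrived
      });
    }
    pending!.add!.push(...(patch.add ?? []));
    pending!.update!.push(...(patch.update ?? []));
    pending!.remove!.push(...(patch.remove ?? []));
  };
}
```

However many per-token patches land within a frame, they collapse into the single applyTransaction call that a static Row[] load also goes through.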

06 · for engineers

For engineers: how it looks in your codebase.

Connect any token-streaming source — OpenAI Responses, Anthropic, or your own SSE — to a pretable grid. Selection survives every chunk.

Streaming chat grid — live
Role · Content · Tokens · Latency
// @ts-nocheck — sample source for docs; not compiled as app code.
import { ChatGrid } from "./ChatGrid";

export default function Page() {
  return <ChatGrid prompt="Summarize the last 10 incidents" />;
}
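Under the hood, a ChatGrid-style wrapper folds a token stream into row updates. A hypothetical sketch (the real adapter also parses partial JSON and batches per frame; applyTransaction here stands in for the grid's entry point, and the token source is any async iterable: SSE, WebSocket, an OpenAI or Anthropic stream):

```typescript
// Hypothetical: consume an async iterable of tokens and fold them into
// row updates through the grid's single entry point.

type Row = { id: string; role: string; content: string; tokens: number };

async function streamIntoGrid(
  tokens: AsyncIterable<string>,
  applyTransaction: (tx: { add?: Row[]; update?: Row[] }) => void
) {
  const row: Row = { id: "msg-1", role: "assistant", content: "", tokens: 0 };
  applyTransaction({ add: [row] }); // row appears immediately, then fills in
  for await (const t of tokens) {
    row.content += t;
    row.tokens += 1;
    applyTransaction({ update: [{ ...row }] }); // a frame batcher coalesces these
  }
}
```

Because every update is keyed by the same row id, selection and focus on that row survive every chunk.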

Full reference: /docs/streaming

07 · what's in the box

Engineering credibility points.

Each feature backed by a bench scenario or demo. No claim without a click-to-prove.

  • Beginner-friendly

    60fps performance

    Sub-frame scroll p95 on wrapped text — at parity with the best full-grid comparator (MUI X) and ~1.7× ahead of AG Grid Community and TanStack. Zero blank gaps, zero anchor shift, ≤1 px row-height drift.

  • Intermediate setup

    Wrapped text, no jank

    Multi-line cells, variable row heights, smooth scrolling. Multilingual content tested.

  • Advanced — bring your own SSE

    Stream-aware

    Token-by-token rendering for OpenAI, Anthropic, and your own SSE. The full pipeline — partial-JSON parser, frame-budget batcher, applyTransaction wiring — ships as one import.

  • Intermediate — interaction state

    Selection survives filters

    Filter, sort, and reorder without losing your selection. Stable focus across mutations.

Get started

One import. Drop it in.

GitHub →