Bench results
These numbers anchor every claim on the homepage. They were generated by running the same dataset through `apps/bench`, pitting pretable against three real third-party grids: AG Grid Community, TanStack Table v8 with TanStack Virtual, and MUI X DataGrid Community. We measure scroll frame timings, blank gaps under scroll, row-height fidelity for wrapped text, and anchor stability across rebuilds. All raw data is in the repo; this page renders one milestone JSON file.
Runset 2026-05-09T00-49-34-903Z · Chromium · S2 (3,000 rows, wrapped multilingual messages) · hypothesis scale · 3 repeats per adapter. Pretable / MUI scroll p95 also re-measured at 20 repeats — parity confirmed (see below).
H1 — wrapped-text scroll
Scenario S2 (3,000 rows, wrapped multilingual messages, scroll script). Frame p95 measured via `PerformanceObserver`; row-height error measured by comparing each adapter’s rendered DOM row height against the engine’s planned height. Lower is better for every column.
| Adapter | Frame p95 (ms) | Row height error (px) | Blank gaps | Verdict |
|---|---|---|---|---|
| pretable | 9.7 | 1 | 0 | parity at n=20 (full quality pass) |
| AG Grid Community | 16.7 | 2 | 1 | 1.9× slower; 1 blank gap, row height drift 2px |
| TanStack Table | 16.7 | 0 | 1 | 1.9× slower; 1 blank gap |
| MUI X DataGrid Community | 8.7 | 1 | 0 | parity at n=20 (full quality pass) |
On this dataset, pretable and MUI X DataGrid Community sit at parity on scroll frame p95: a 20-repeat re-measurement gives pretable 9.07ms ± 0.20 and MUI 9.13ms ± 0.19. The single-repeat snapshot above (pretable 9.7ms vs 8.7ms) read as a small MUI lead; under tighter sampling that gap is well inside the 2σ noise floor (≈ 0.40ms). Both adapters clear the single 60Hz frame budget with zero blank gaps and ≤ 1px row-height drift.
AG Grid Community and TanStack Table land at roughly twice that frame p95 (~16.7ms) and both drop a blank gap during the scripted scroll; AG Grid additionally drifts 2px on row height, a sign that wrapped-cell layout doesn’t round-trip through its line-height pipeline as cleanly as pretable’s text-core does.
The honest read: pretable’s wedge on this script isn’t a raw frame-speed lead over the best comparator — it’s parity on raw frame timing, with the combination of zero blank gaps, zero anchor shift, and ≤ 1px row-height fidelity at full-grid feature weight. MUI matches pretable on quality but ships a fundamentally different feature surface (no headless engine, no streaming primitives, no theming-as-data). The H1 evaluator marks this run satisfied after the high-repeat correction; the original failing verdict from the n=3 snapshot was a low-sample artifact, not a real regression.
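The scroll metrics above reduce to a small amount of post-processing over frame timestamps collected during the scripted scroll. A minimal sketch, with hypothetical function names (this is not `@pretable-internal/bench-runner`’s actual code): frame deltas come from consecutive timestamps, p95 is a nearest-rank percentile, and a "blank gap" is modeled here as any frame longer than two 60 Hz budgets.

```typescript
// Hypothetical post-processing of frame timestamps collected (e.g. via
// requestAnimationFrame or a PerformanceObserver) during the scroll script.
function frameDeltas(timestamps: number[]): number[] {
  const deltas: number[] = [];
  for (let i = 1; i < timestamps.length; i++) {
    deltas.push(timestamps[i] - timestamps[i - 1]);
  }
  return deltas;
}

function p95(samples: number[]): number {
  const sorted = [...samples].sort((a, b) => a - b);
  // Nearest-rank percentile: the sample at the 95th-percentile position.
  const idx = Math.min(sorted.length - 1, Math.ceil(0.95 * sorted.length) - 1);
  return sorted[idx];
}

// Assumption for illustration: a "blank gap" is any frame that overran two
// 60 Hz budgets (~33.3 ms), i.e. rows failed to paint before the next vsync.
function blankGaps(deltas: number[], budgetMs = 1000 / 60): number {
  return deltas.filter((d) => d > 2 * budgetMs).length;
}
```

Reading the table through this lens: a 9.7 ms frame p95 sits well under the 16.7 ms budget, while a single 50 ms frame during the script would register as one blank gap.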
Interactions (sort, filter)
Scenario S2 (3,000 rows, wrapped multilingual messages). Sort applies a column-state change; filter-metadata applies an equals filter on a metadata column; filter-text applies a contains filter on the wrapped-text primary column. Latency measured from trigger to first changed frame; lower is better.
| Adapter | sort p95 (ms) | filter-metadata p95 (ms) | filter-text p95 (ms) | Verdict |
|---|---|---|---|---|
| pretable | 16.5 | 16.0 | 17.7 | fastest tied; full quality pass |
| AG Grid Community | 58.3 | 49.9 | 50.0 | 2.8–3.5× slower |
| TanStack Table | 34.4 | 15.7 | 40.2 | 1.0–2.3× slower |
| MUI X DataGrid Community | 35.0 | 33.4 | 33.3 | 1.9–2.1× slower |
Pretable sorts and filters 3,000 wrapped-text rows in 16–18 ms across all three scripts — hovering right at the single 60Hz frame budget (16.7 ms) on every interaction script, yet well ahead of the slower comparators. AG Grid Community runs sort and filter 2.8–3.5× slower; MUI X DataGrid Community lands at roughly 2× slower across all three scripts. TanStack Table v8 + TanStack Virtual averages ~2× slower on sort and filter-text but ties pretable on filter-metadata, where its variance is high (samples span 8–42 ms at n=8).
High-repeat (n=20) follow-up confirms pretable is reliably at or just over budget on all three scripts (sort 17.10 ± 1.83 ms; filter-metadata 17.51 ± 2.44 ms; filter-text 16.79 ± 0.31 ms). The comparative wedge is the story, not absolute single-frame compliance — pretable’s wrapped-text filter pipeline is on the roadmap for a perf-fix pass. As with the scroll story, the H6/H7/H8 evaluators check pretable’s absolute thresholds (≤ 32 ms interaction p95) rather than comparator parity; all three hypotheses stay satisfied.
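The ± figures above read as mean ± sample standard deviation over the repeat samples, and the H6/H7/H8 check is a plain absolute-threshold comparison. A minimal sketch of that aggregation (function names are illustrative, not the evaluator’s actual API):

```typescript
// Mean and sample (n-1) standard deviation over repeated latency samples,
// each measured from trigger to first changed frame.
function meanStd(samples: number[]): { mean: number; std: number } {
  const n = samples.length;
  const mean = samples.reduce((a, b) => a + b, 0) / n;
  const variance =
    samples.reduce((acc, x) => acc + (x - mean) ** 2, 0) / (n - 1);
  return { mean, std: Math.sqrt(variance) };
}

// The hypothesis evaluators test an absolute budget, not comparator parity.
function withinBudget(p95Ms: number, thresholdMs = 32): boolean {
  return p95Ms <= thresholdMs;
}
```

Under this check, pretable’s 16–18 ms interaction p95s clear the 32 ms threshold comfortably even though they brush the single 60Hz frame budget.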
Streaming
H13–H15 (streaming update frame budget, operating envelope, and row stability under stream) require S5 and rate-tagged update runs; this matrix run is S2 only, so those hypotheses are reported as insufficient rather than re-evaluated. The last full streaming evidence remains in the milestone archive.
Methodology
- Same dataset, same script. Each adapter runs the canonical S2 scenario from `@pretable-internal/scenario-data` through identical interaction plans. The harness is in `@pretable-internal/bench-runner`.
- Repeated-run medians. 3 samples per (adapter, scenario, script) tuple in this runset. Reported value is the median; full per-sample data is in the runset JSON.
- Idiomatic out-of-the-box config. AG Grid uses `themeQuartz` with Community modules registered, sortable + filterable + resizable defaults, and `applyTransaction` updates. TanStack uses `useReactTable` with `getCoreRowModel`/`getSortedRowModel`/`getFilteredRowModel` plus TanStack Virtual for row virtualization. MUI X uses `DataGrid` from `@mui/x-data-grid` with default sort/filter/resize enabled. Pretable uses RAF-batched `grid.applyTransaction` via `@pretable/stream-adapter`. None of the adapters carry per-bench tuning — what we measure is what each library ships out of the box.
- Wedge integrity: the pretable adapter renders the same `<PretableSurface>` the website’s homepage hero uses. What we measure is what you ship.
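The RAF-batched `grid.applyTransaction` point can be sketched as a small coalescing batcher: updates arriving within one frame are merged and applied in a single call. This is a hedged sketch, not `@pretable/stream-adapter`’s internals; the scheduler is injectable so the sketch runs outside a browser (in the real adapter it would be `requestAnimationFrame`).

```typescript
type Tx<Row> = { add?: Row[]; update?: Row[]; remove?: Row[] };

function createRafBatcher<Row>(
  apply: (tx: Tx<Row>) => void,
  schedule: (flush: () => void) => void,
) {
  let pending: Tx<Row> | null = null;
  return (tx: Tx<Row>): void => {
    if (pending === null) {
      // First update this frame: open a batch and schedule one flush.
      pending = { add: [], update: [], remove: [] };
      schedule(() => {
        const batch = pending!;
        pending = null; // the next update opens a fresh batch
        apply(batch);
      });
    }
    pending!.add!.push(...(tx.add ?? []));
    pending!.update!.push(...(tx.update ?? []));
    pending!.remove!.push(...(tx.remove ?? []));
  };
}
```

The design choice this illustrates: however many row updates a stream delivers between two vsyncs, the grid sees exactly one transaction per frame, which is what keeps streaming cost bounded by frame rate rather than message rate.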
Run it yourself
The bench is part of the open-source repo. Clone, install, run:

```shell
git clone https://github.com/cacheplane/pretable.git
cd pretable
pnpm install
pnpm bench:matrix \
  --project=chromium \
  --adapters=pretable,ag-grid,tanstack,mui \
  --scenarios=S2 --scripts=initial,scroll \
  --scale=hypothesis --repeats=3
# Output: status/runsets/<timestamp>.hypotheses.json
```

For the full contributor walkthrough, see `apps/bench/README.md`.
Raw data
- `status/milestones/2026-05-08-b2-scroll-summary.json` — per-adapter scroll medians (this page’s table)
- `status/milestones/2026-05-08-b2-comparative-bench.hypotheses.json` — full H1–H21 evaluator report for this runset
- `status/milestones/` — milestone archive (H1 baseline, S3 column virtualization, streaming, selection/nav)