rendering 1000+ conversation nodes without melting the browser

Updated February 6, 2026

ECHO collects hundreds of conversations per project. Town halls, consultations, interviews. Each conversation produces arguments, themes, and semantic connections. The visualizer needs to show all of this live, as conversations happen, without the browser catching fire.

Four Zustand stores, each with a specific job.

  • useArgumentStore - semantic arguments and summaries with their embeddings
  • useVisualizerRuntimeStore - active recording state, chunk processing progress
  • useVisualizerSelectionStore - which nodes are highlighted, selected, focused
  • useVisualizerSettingsStore - panel visibility toggles and display preferences

Splitting state this way is a performance strategy. When a new argument comes in from a live conversation, only ArgumentStore updates; components subscribed to the selection and settings stores don’t re-render. With 1000+ nodes, avoiding unnecessary re-renders is the difference between 60fps and 6fps.
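
A minimal sketch of the split, with simplified store shapes (the real stores carry more fields; the Argument type, action names, and SelectionBadge component here are illustrative):

```tsx
import { create } from "zustand";

// Simplified shapes; the real stores carry more fields.
interface Argument {
  id: string;
  summary: string;
  embedding: number[];
}

interface ArgumentState {
  arguments: Argument[];
  addArgument: (arg: Argument) => void;
}

interface SelectionState {
  selectedIds: Set<string>;
  select: (id: string) => void;
}

export const useArgumentStore = create<ArgumentState>((set) => ({
  arguments: [],
  addArgument: (arg) =>
    set((state) => ({ arguments: [...state.arguments, arg] })),
}));

export const useVisualizerSelectionStore = create<SelectionState>((set) => ({
  selectedIds: new Set<string>(),
  select: (id) =>
    set((state) => ({ selectedIds: new Set(state.selectedIds).add(id) })),
}));

// A component that reads only selection state never re-renders when a new
// argument lands in useArgumentStore.
function SelectionBadge() {
  const count = useVisualizerSelectionStore((s) => s.selectedIds.size);
  return <span>{count} selected</span>;
}
```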

Data flow:

  1. Real-time conversation events arrive via useDbrEvents (our Directus-backed event system); see the sketch after this list
  2. Audio chunks get transcribed, producing conversation segments
  3. Segments are processed into arguments with embeddings
  4. Embeddings feed into HDBSCAN clustering and minimum spanning tree generation
  5. LocalMapContainer renders the embedding-space clustering visualization
  6. ArgumentTree renders the MST as an interactive graph
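
A rough sketch of how an incoming event might land in the argument store. The useDbrEvents signature, event shape, and import paths below are assumptions, not the real API:

```tsx
import { useDbrEvents } from "@/lib/dbr-events"; // hypothetical path and signature
import { useArgumentStore } from "@/stores/argument-store"; // hypothetical path

// Headless component: wires real-time conversation events into the store.
function ArgumentIngest({ projectId }: { projectId: string }) {
  const addArgument = useArgumentStore((s) => s.addArgument);

  // Assumed callback-style API; the real hook may differ.
  useDbrEvents(projectId, (event) => {
    // By this point the backend has transcribed the chunk and produced
    // an argument with its embedding attached.
    if (event.type === "argument.created") {
      addArgument(event.payload);
    }
  });

  return null;
}
```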

Performance patterns that actually mattered:

Memoized node lists. The argument list is derived from the store, but recomputing it on every render (especially with filtering and sorting) was killing performance. useMemo with proper dependency arrays brought frame times down significantly.
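
Roughly what that looks like, as a hypothetical hook; the filter and sort criteria and the createdAt field are illustrative:

```tsx
import { useMemo } from "react";
import { useArgumentStore } from "@/stores/argument-store"; // hypothetical path

// Derive the visible node list once per relevant change, not on every render.
function useVisibleArguments(query: string) {
  const args = useArgumentStore((s) => s.arguments);

  return useMemo(() => {
    const q = query.toLowerCase();
    const filtered = q
      ? args.filter((a) => a.summary.toLowerCase().includes(q))
      : args;
    // Sort newest first without mutating the store's array
    return [...filtered].sort((a, b) => b.createdAt - a.createdAt);
  }, [args, query]);
}
```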

Refs for timers. The explore panel auto-generates titles from selected nodes after a 1.5-second debounce. Using useState for the timer caused infinite render loops because the state update triggered a re-render which reset the timer. useRef fixed it. Timer lives outside the render cycle.
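
A sketch of the fix, as a hypothetical hook around the explore panel's title generation:

```tsx
import { useEffect, useRef } from "react";

// The timer handle lives in a ref, so scheduling or clearing it never
// touches React state and never triggers a re-render.
function useDebouncedTitleGeneration(selectedIds: string[], generate: () => void) {
  const timerRef = useRef<ReturnType<typeof setTimeout> | null>(null);

  useEffect(() => {
    if (timerRef.current) clearTimeout(timerRef.current);
    timerRef.current = setTimeout(generate, 1500);
    return () => {
      if (timerRef.current) clearTimeout(timerRef.current);
    };
  }, [selectedIds, generate]);
}
```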

requestAnimationFrame for progress tracking. Recording progress updates happen every frame. Doing this through state would mean a re-render per frame. Instead we write directly to a ref and use requestAnimationFrame to update the DOM.
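
A sketch of the pattern; getProgress stands in for however the runtime store exposes the current value:

```tsx
import { useEffect, useRef } from "react";

// Per-frame progress written straight to the DOM, bypassing React state.
function RecordingProgressBar({ getProgress }: { getProgress: () => number }) {
  const fillRef = useRef<HTMLDivElement>(null);

  useEffect(() => {
    let frame: number;
    const tick = () => {
      if (fillRef.current) {
        fillRef.current.style.width = `${Math.round(getProgress() * 100)}%`;
      }
      frame = requestAnimationFrame(tick);
    };
    frame = requestAnimationFrame(tick);
    return () => cancelAnimationFrame(frame);
  }, [getProgress]);

  return (
    <div className="progress-track">
      <div ref={fillRef} className="progress-fill" />
    </div>
  );
}
```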

Debounced subscription updates. When the argument store fires rapidly (new arguments from multiple active conversations), the visualizer debounces its subscription so it batches updates instead of processing each one individually.
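
In outline, using Zustand's subscribe API outside React; the 100ms window and the recompute function are placeholders:

```ts
import { useArgumentStore } from "@/stores/argument-store"; // hypothetical path

// Placeholder for the real downstream work: clustering + MST layout.
function recomputeClustersAndLayout(args: unknown[]) {
  /* ... */
}

let pending: ReturnType<typeof setTimeout> | null = null;

// Call unsubscribe() on teardown.
const unsubscribe = useArgumentStore.subscribe(() => {
  if (pending) clearTimeout(pending);
  // Collapse a burst of store updates into one recompute.
  pending = setTimeout(() => {
    pending = null;
    recomputeClustersAndLayout(useArgumentStore.getState().arguments);
  }, 100);
});
```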

Panels:

  • ExplorePanel - generates titles from contributing nodes with a timer
  • SpotlightPanel - detail view for the active node
  • RecentArgumentsPanel - last 12 arguments with relative timestamps
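
As an example of how a panel stays cheap, RecentArgumentsPanel might derive its slice like this (the relative-time formatting and createdAt field are stand-ins for whatever the app actually uses):

```tsx
import { useMemo } from "react";
import { useArgumentStore } from "@/stores/argument-store"; // hypothetical path

function RecentArgumentsPanel() {
  const args = useArgumentStore((s) => s.arguments);
  // Last 12 arguments, newest first; recomputed only when the list changes.
  const recent = useMemo(() => args.slice(-12).reverse(), [args]);

  return (
    <ul>
      {recent.map((a) => (
        <li key={a.id}>
          {a.summary}{" "}
          <time>{Math.round((Date.now() - a.createdAt) / 60000)} min ago</time>
        </li>
      ))}
    </ul>
  );
}
```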

What I’d change: the Zustand store split works, but the event flow between stores is implicit. Arguments arrive, clustering updates, visualization re-renders, but there’s no explicit pipeline. A lightweight event bus between stores would make the data flow more traceable when debugging.
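
For concreteness, something like the bus below, which does not exist in the codebase today; event names and shapes are invented:

```ts
// A minimal typed event bus between stores; purely a sketch of the idea.
type VisualizerEvent =
  | { type: "arguments/added"; ids: string[] }
  | { type: "clusters/updated" };

type Handler = (event: VisualizerEvent) => void;

const handlers = new Set<Handler>();

export const visualizerBus = {
  emit(event: VisualizerEvent) {
    handlers.forEach((h) => h(event));
  },
  on(handler: Handler) {
    handlers.add(handler);
    return () => handlers.delete(handler); // unsubscribe
  },
};
```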

At 500+ concurrent participants this setup holds. Bottleneck isn’t rendering; it’s embedding computation on the backend. Frontend just needs to not drop frames while data streams in.