rendering 1000+ conversation nodes without melting the browser
ECHO collects hundreds of conversations per project. Town halls, consultations, interviews. Each conversation produces arguments, themes, and semantic connections. The visualizer needs to show all of this live, as conversations happen, without the browser catching fire.
Four Zustand stores, each with a specific job:

- `useArgumentStore`: semantic arguments and summaries with their embeddings
- `useVisualizerRuntimeStore`: active recording state, chunk processing progress
- `useVisualizerSelectionStore`: which nodes are highlighted, selected, focused
- `useVisualizerSettingsStore`: panel visibility toggles and display preferences
Splitting state this way is a performance strategy. When a new argument comes in from a live conversation, only `useArgumentStore` updates; components subscribed to the selection and settings stores don't re-render. With 1000+ nodes, avoiding unnecessary re-renders is the difference between 60fps and 6fps.
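A framework-free sketch of why the split helps (hypothetical shapes; the real stores use Zustand's hooks): each store keeps its own listener set, so publishing to one store never wakes subscribers of another.

```typescript
// Minimal store in the spirit of Zustand's vanilla API (illustrative, not the real code).
type Listener = () => void;

function createStore<S>(initial: S) {
  let state = initial;
  const listeners = new Set<Listener>();
  return {
    getState: () => state,
    setState: (partial: Partial<S>) => {
      state = { ...state, ...partial };
      listeners.forEach((l) => l()); // only THIS store's subscribers run
    },
    subscribe: (l: Listener) => {
      listeners.add(l);
      return () => listeners.delete(l);
    },
  };
}

// Two of the four stores, with hypothetical state shapes.
const argumentStore = createStore({ args: [] as { id: string }[] });
const selectionStore = createStore({ selectedId: null as string | null });

let argumentRenders = 0;
let selectionRenders = 0;
argumentStore.subscribe(() => argumentRenders++);
selectionStore.subscribe(() => selectionRenders++);

// A new argument from a live conversation touches only the argument store,
// so selection subscribers never fire.
argumentStore.setState({ args: [{ id: "a1" }] });
```

In React the same isolation falls out of each hook subscribing to its own store: components reading selection state simply aren't in the argument store's listener set.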
Data flow:
- Real-time conversation events arrive via `useDbrEvents` (our Directus-backed event system)
- Audio chunks get transcribed, producing conversation segments
- Segments are processed into arguments with embeddings
- Embeddings feed into HDBSCAN clustering and minimum spanning tree generation
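The segment-to-argument step above, as a typed sketch. All names and shapes here are illustrative, and the embedding is a toy stand-in; the real embeddings, HDBSCAN clustering, and MST generation happen on the backend.

```typescript
interface Segment {
  conversationId: string;
  text: string;
}

interface ArgumentNode {
  id: string;
  text: string;
  embedding: number[];
}

// Toy 2-d "embedding" so the sketch runs locally; in production this is a model call.
const embed = (text: string): number[] => [text.length % 7, text.split(" ").length];

// Turn transcribed segments into argument nodes the visualizer can consume.
function segmentsToArguments(segments: Segment[]): ArgumentNode[] {
  return segments.map((s, i) => ({
    id: `${s.conversationId}:${i}`,
    text: s.text,
    embedding: embed(s.text),
  }));
}
```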
Two components render the results:

- `LocalMapContainer` renders the embedding-space clustering visualization
- `ArgumentTree` renders the MST as an interactive graph
Performance patterns that actually mattered:
Memoized node lists. The argument list is derived from the store, but recomputing it on every render (especially with filtering and sorting) was killing performance. useMemo with proper dependency arrays brought frame times down significantly.
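The same idea outside React, as a hypothetical helper: cache the derived list and recompute only when an input's identity changes, mirroring what `useMemo`'s dependency array does.

```typescript
type VizNode = { id: string; score: number };

// Memoize a filtered + sorted node list on reference equality of its inputs,
// the way a useMemo dependency array would.
function makeDeriveVisibleNodes() {
  let lastNodes: VizNode[] | null = null;
  let lastMin: number | null = null;
  let cached: VizNode[] = [];
  let computeCount = 0;
  return {
    derive(nodes: VizNode[], minScore: number): VizNode[] {
      if (nodes === lastNodes && minScore === lastMin) return cached; // cache hit
      computeCount++;
      lastNodes = nodes;
      lastMin = minScore;
      cached = nodes
        .filter((n) => n.score >= minScore)
        .sort((a, b) => b.score - a.score);
      return cached;
    },
    getComputeCount: () => computeCount,
  };
}
```

The cache hit also preserves referential equality of the derived array, which matters downstream: memoized child components see the same reference and skip their own renders.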
Refs for timers. The explore panel auto-generates titles from selected nodes after a 1.5-second debounce. Using useState for the timer caused infinite render loops because the state update triggered a re-render which reset the timer. useRef fixed it. Timer lives outside the render cycle.
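A hypothetical reconstruction of the fix: the pending timer id lives in a plain mutable ref object outside the render cycle, so rescheduling it can't trigger a render. The scheduler is injected here so the sketch runs deterministically without real timers.

```typescript
type Scheduler = {
  set: (fn: () => void, ms: number) => number;
  clear: (id: number) => void;
};

// Debounced title generation: each selection change cancels the pending timer
// and schedules a fresh 1.5s one. Mutating the ref never touches React state.
function makeDebouncedTitleGen(
  generate: () => void,
  scheduler: Scheduler,
  delayMs = 1500,
) {
  const timerRef: { current: number | null } = { current: null }; // stands in for useRef(null)
  return function onSelectionChange() {
    if (timerRef.current !== null) scheduler.clear(timerRef.current);
    timerRef.current = scheduler.set(() => {
      timerRef.current = null;
      generate();
    }, delayMs);
  };
}
```

With `useState` this same shape loops forever: storing the timer id re-renders the component, the effect reschedules, and the cycle repeats. The ref version has no state write anywhere in the path.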
requestAnimationFrame for progress tracking. Recording progress updates happen every frame. Doing this through state would mean a re-render per frame. Instead we write directly to a ref and use requestAnimationFrame to update the DOM.
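A sketch of the pattern with the rAF loop and DOM element abstracted out (names hypothetical): chunk handlers mutate a ref as often as they like, and a per-frame callback mirrors the latest value into the DOM exactly once per frame.

```typescript
// Progress lives in a ref, not state: writing it causes no re-render.
const progressRef = { current: 0 };

// Called for every processed audio chunk, possibly several times per frame.
function onChunkProcessed(fractionDone: number) {
  progressRef.current = fractionDone;
}

// In the browser this runs inside a requestAnimationFrame loop; it takes the
// target element as a parameter so the sketch runs anywhere.
function paintProgress(el: { textContent: string }) {
  el.textContent = `${Math.round(progressRef.current * 100)}%`;
}
```

In the component the loop would look like `requestAnimationFrame(function tick() { paintProgress(node); requestAnimationFrame(tick); })`, torn down when recording stops.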
Debounced subscription updates. When the argument store fires rapidly (new arguments from multiple active conversations), the visualizer debounces its subscription so it batches updates instead of processing each one individually.
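The batching half of that, as an illustrative sketch (hypothetical names): rapid store events accumulate in a queue, and a single flush hands the whole batch to the processing step.

```typescript
// Collect rapid-fire store events and process them as one batch.
function makeBatchedSubscriber<T>(process: (batch: T[]) => void) {
  let queue: T[] = [];
  return {
    // Called by the store subscription for every incoming argument.
    enqueue(item: T) {
      queue.push(item);
    },
    // In the visualizer this fires once per debounce window (e.g. via
    // setTimeout); it's manual here so the sketch stays deterministic.
    flush() {
      if (queue.length === 0) return;
      const batch = queue;
      queue = [];
      process(batch);
    },
  };
}
```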
Panels:
- `ExplorePanel`: generates titles from contributing nodes with a timer
- `SpotlightPanel`: detail view for the active node
- `RecentArgumentsPanel`: last 12 arguments with relative timestamps
What I’d change: the Zustand store split works, but the event flow between stores is implicit. Arguments arrive, clustering updates, visualization re-renders, but there’s no explicit pipeline. A lightweight event bus between stores would make the data flow more traceable when debugging.
At 500+ concurrent participants this setup holds. The bottleneck isn't rendering; it's embedding computation on the backend. The frontend just needs to not drop frames while data streams in.