What kind of startup times are you seeing with/without checkpoints?

We are running Postgres as the document and transaction store, and on separate instances an XTDB/RocksDB node embedded within the backend process. The backend disk (in fact, the whole node) is ephemeral. At this point we have some tens of thousands of documents in the stores, approaching 100k, and startups (to the point where the XTDB node is in sync with the latest transactions) take about three minutes, with what looks like a roughly linear increase with the amount of data (though we haven’t really plotted this, just looked at the times after inserting new datasets in bulk).

We are expecting production datasets to be thousands, if not tens of thousands of times larger than this. At this point it’s clear that we have to do checkpointing sooner rather than later, since it doesn’t seem like the startup time can be improved by allocating more CPU cores or RAM to the node. Postgres doesn’t seem to be loaded at all, with CPU and RAM use below 10%, and its outgoing network traffic peaks at only about 100 kB/s while a node is starting. So it would seem that the bottleneck is something single-threaded on the XTDB node.
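For context, the kind of checkpointing we mean is the XTDB 1.x index checkpointer. A minimal sketch of what enabling it looks like, assuming the stock RocksDB and filesystem checkpoint modules (the paths and frequency below are placeholders, not our actual values):

```clojure
;; Minimal sketch of enabling index checkpointing on an XTDB 1.x node,
;; following the documented RocksDB + checkpoint modules. Paths and the
;; checkpoint frequency are placeholders.
(ns example.node
  (:require [clojure.java.io :as io]
            [xtdb.api :as xt])
  (:import (java.time Duration)))

(defn start-node []
  (xt/start-node
   {:xtdb/index-store
    {:kv-store
     {:xtdb/module 'xtdb.rocksdb/->kv-store
      :db-dir (io/file "/tmp/xtdb/rocksdb")   ; ephemeral local disk
      :checkpointer
      {:xtdb/module 'xtdb.checkpoint/->checkpointer
       ;; filesystem store shown here; S3/GCS-backed stores come from the
       ;; corresponding xtdb modules
       :store {:xtdb/module 'xtdb.checkpoint/->filesystem-checkpoint-store
               :path "/mnt/shared/xtdb-checkpoints"}
       :approx-frequency (Duration/ofHours 6)}}}
    ;; :xtdb/tx-log and :xtdb/document-store (Postgres via xtdb-jdbc in our
    ;; case) are configured the same way as without checkpointing
    }))
```

On startup a node then restores the latest checkpoint into its local RocksDB directory and only replays transactions submitted after that checkpoint, rather than reindexing the whole history.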

That said, I would be interested in hearing what kind of startup times others are seeing, with how many documents and what kind of setup. For example, does using Kafka give markedly different speeds, and how large a dataset can checkpoints handle? Based on Slack discussions, even checkpoints are not trivial: for example, S3 fetches can time out when not using a proper S3 client.


Copying some of the follow-up Clojurians Slack discussion here for posterity (which followed the 1.23.0 release that fairly dramatically improved indexing times for various users):

Hukka 11 days ago
Hmm. Testing index building time: on my local machine it goes from 230 s on 1.20.0 to 70 s on 1.23.0, but on Cloud Run only from 210 s to 180 s. The local machine has far more RAM and CPU, so I tested more cores on Cloud Run: 2 CPUs lowers it to 90 s and 4 to 70 s. Very interesting that it can now use at least two cores with almost perfect efficiency, with four helping a bit more.

wotbrew 11 days ago
We’ve permitted more Rocks compaction jobs on multi-core machines (2 or ncpus-1, whichever is higher)

wotbrew 11 days ago
Indexing is pipelined across multiple threads too, which you want cores available for. (edited)

Hukka 11 days ago
ncpu-1 means it wasn’t as bad at scaling as I thought
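As an aside, the “2 or ncpus-1, whichever is higher” rule mentioned above corresponds roughly to the sketch below, expressed against RocksDB’s Java options API; this is just an illustration of the formula, not XTDB’s actual code:

```clojure
;; Illustration only: the max(2, ncpus - 1) compaction-concurrency rule
;; written with RocksDB's Java options API (not XTDB's internal code).
(import '(org.rocksdb Options))

(let [ncpus (.availableProcessors (Runtime/getRuntime))]
  (doto (Options.)
    (.setMaxBackgroundJobs (max 2 (dec ncpus)))))
```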

Our indices are currently about 300 MB, and it takes about 17 seconds for a new Cloud Run node to download and import the checkpoint from Google Cloud Storage, which is surprisingly slow. We’ll have to see how it keeps scaling.
