General query performance. What should I expect from an in-memory DB?

I have a question about query resource consumption.
Let’s say I have a bunch of queries and my database is stored in RAM. What should I expect if I run, say, 10k queries against random locations in the database, hitting random entities with random path-lengths between them?
Will it blow up my CPU or will it blow up my RAM?
And what if I run them from multiple threads? Let’s say I create a bunch of db instances and start running the queries across all available threads, or via a thread pool. What should I expect?
What are the trade-offs, and when do they apply?

Hey @invertisment :wave:

Usual caveat: ‘it depends’. Our in-memory KV store isn’t massively optimised (it’s pretty much just a Clojure sorted-map), on the assumption that it’s mostly used for non-production use-cases - you might find that a throwaway RocksDB/LMDB instance is faster if it can cache your entire database in memory.
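
For reference, spinning up the two kinds of throwaway node looks roughly like this - a sketch assuming the XTDB 1.x Clojure API and the standalone RocksDB module (so you’d need xtdb-rocksdb on the classpath, and the exact config keys may differ on your version):

```clojure
(require '[clojure.java.io :as io]
         '[xtdb.api :as xt])

;; Throwaway in-memory node - no config needed; the indices live in
;; Clojure sorted-maps on the heap.
(def mem-node (xt/start-node {}))

;; Throwaway RocksDB-backed node - Rocks keeps its own block cache, so if the
;; whole DB fits in memory it can end up faster than the sorted-map store.
;; (Module keys follow the XTDB 1.x standalone config - check your version's docs.)
(def rocks-node
  (xt/start-node
   {:xtdb/index-store    {:kv-store {:xtdb/module 'xtdb.rocksdb/->kv-store
                                     :db-dir      (io/file "/tmp/xtdb-scratch/index")}}
    :xtdb/document-store {:kv-store {:xtdb/module 'xtdb.rocksdb/->kv-store
                                     :db-dir      (io/file "/tmp/xtdb-scratch/docs")}}
    :xtdb/tx-log         {:kv-store {:xtdb/module 'xtdb.rocksdb/->kv-store
                                     :db-dir      (io/file "/tmp/xtdb-scratch/tx-log")}}}))
```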

My guess (although it’s best to measure with real-life scenarios) would be that CPU will be the bottleneck in most cases - XT queries run lazily with a relatively small working set, so the RAM usage of each query tends to stay quite low.
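
If you want to get a feel for it, a rough benchmark along these lines would do - the {:xt/id n :ref m} document shape, the :ref attribute and the entity-id range are all made up for illustration, so swap in your own data and query:

```clojure
;; Seed some throwaway docs, then fire n queries at random entities and time them.
(defn seed! [node n-entities]
  (xt/submit-tx node (vec (for [i (range n-entities)]
                            [::xt/put {:xt/id i :ref (rand-int n-entities)}])))
  ;; block until the node has indexed everything before we start querying
  (xt/sync node))

(defn run-random-queries [node n-queries n-entities]
  (let [db (xt/db node)]
    (time
     (dotimes [_ n-queries]
       ;; count the results so the query work isn't skipped
       (count (xt/q db
                    '{:find  [?ref]
                      :in    [?e]
                      :where [[?e :ref ?ref]]}
                    (rand-int n-entities)))))))

(comment
  (seed! mem-node 10000)
  (run-random-queries mem-node 10000 10000))
```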

Regarding multi-threading - each db instance should only be used from one thread at a time. The query indices are stored in immutable data structures (in the in-memory KV store), which means that, once you’ve taken a snapshot (using db), multi-threaded access is cheap. (In Rocks and LMDB this uses their MVCC snapshots, so a similar principle applies.)
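
Concretely, the pattern would look something like this sketch (again with placeholder attributes and ids) - each worker grabs its own snapshot and stays on one thread:

```clojure
;; Each worker takes its own db snapshot (cheap - the indices are immutable /
;; MVCC snapshots) and runs its share of the queries on a single thread.
(defn parallel-random-queries [node n-threads queries-per-thread n-entities]
  (let [worker (fn []
                 (let [db (xt/db node)]   ; one snapshot per thread
                   (dotimes [_ queries-per-thread]
                     (count (xt/q db
                                  '{:find  [?ref]
                                    :in    [?e]
                                    :where [[?e :ref ?ref]]}
                                  (rand-int n-entities))))))]
    (->> (repeatedly n-threads #(future (worker)))
         doall              ; kick off all the futures
         (run! deref))))    ; then wait for them all to finish

(comment
  (time (parallel-random-queries mem-node 8 1250 10000)))
```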

Hope this helps, and let us know if you have further questions :slightly_smiling_face: