V2 start-node listens on port 8080?

If I paste the provided deps.edn file from the "Getting started (Clojure) | XTDB" page into my local directory and then run the following:

$ clj
Clojure 1.11.1
user=> (require 'xtdb.node)
nil
user=> (def node (xtdb.node/start-node))
SLF4J: No SLF4J providers were found.
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See https://www.slf4j.org/codes.html#noProviders for further details.
#'user/node

Then in another terminal I can get a response from localhost:8080:

$ curl http://localhost:8080
<h1>404 Not Found</h1>No context found for request

(If I call (.close node), then curl gives me curl: (7) Failed to connect to localhost port 8080: Connection refused.)

I’m curious what’s running on that port, whether it’s supposed to be running, and whether there’s a way to disable it? I typically run my webserver on port 8080, though for now I can of course just use a different port.

Hey @jacobobryant, how’re you doing? :wave:

That’s our Prometheus metrics server – we should make that opt-in and more configurable. I’ve raised #3323

How’re you finding v2 btw?

Cheers,

James

Thanks!

I’ve finally started playing around with v2 and integrating it into Biff. The whole having-a-job thing is impacting my open-source work a bit, ha ha. I’m enjoying v2; it’s pretty ergonomic. Being able to mix and match SQL and XTQL queries is awesome. For simple queries I sometimes find it easier/shorter to just use SQL; I think that’ll be a huge plus for Biff users too (start with SQL, gradually experiment with XTQL). XTQL is of course also much better for constructing stuff programmatically; e.g. I coded up a little “upsert” helper function last night:

(require '[clojure.string :as str])

;; Turn a seq of keys into an XTQL bind map,
;; e.g. (bind-template [:email]) => {:email $email}
(defn bind-template [ks]
  (into {}
        (map (fn [k]
               [k (symbol (str/replace (str k) #"^:" "\\$"))]))
        ks))

;; Returns a vector of tx ops: insert `document` into `table` if no existing
;; doc matches the `on` keys, otherwise update the matching doc.
(defn upsert [node table document {:keys [on defaults]}]
  (let [new-doc (merge {:xt/id (random-uuid)} document defaults)
        _ (when-not (malli/validate table new-doc)
            (throw (ex-info "Document doesn't match table schema"
                            {:table table
                             :document document
                             :errors (:errors (malli/explain table new-doc))})))
        query (xt/template (from ~table [~(bind-template on)]))
        on-doc (select-keys document on)
        docs (xt/q node query {:args on-doc})]
    (if (empty? docs)
      [[:assert-not-exists query on-doc]
       [:put-docs table new-doc]]
      [[:update {:table table
                 :bind [(bind-template on)]
                 :set (apply dissoc document on)}
        on-doc]])))

(xt/submit-tx node
  (concat (upsert node :users
                  {:color "brown" :email "alice@example.com"}
                  {:on [:email] :defaults {:joined-at (Instant/now)}})
          ...))
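(As a quick sanity check on the bind-template helper, here’s a REPL sketch – the helper is repeated so the snippet runs on its own:)

```clojure
(require '[clojure.string :as str])

;; same helper as above: :email -> $email
(defn bind-template [ks]
  (into {}
        (map (fn [k]
               [k (symbol (str/replace (str k) #"^:" "\\$"))]))
        ks))

;; the bind map used in both the lookup query and the :update op
(bind-template [:email :color])
;; => {:email $email, :color $color}
```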

For malli schema enforcement stuff, I think I can just validate the input args to :put-docs and :update (as in upsert above) without bothering too much about what the exact state of the documents in the DB currently is.
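For what it’s worth, that write-path validation can stay pretty small. A minimal sketch – the schema and the checked-put name are made up, and I’m calling malli.core directly with an explicit schema rather than going through a per-table registry:

```clojure
(require '[malli.core :as malli])

;; hypothetical per-table schema
(def user-schema
  [:map
   [:xt/id :uuid]
   [:email :string]
   [:color {:optional true} :string]])

;; validate the doc at the point we build the tx op,
;; rather than inspecting current DB state
(defn checked-put [table schema doc]
  (when-not (malli/validate schema doc)
    (throw (ex-info "Document doesn't match table schema"
                    {:table table
                     :errors (:errors (malli/explain schema doc))})))
  [:put-docs table doc])
```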

Not sure if I’ll end up with more helper functions besides upsert above. I like the idea of not writing a custom biff/submit-tx wrapper and instead just providing some helper functions that return data. Especially since plain :update takes care of the most common custom operation that v1 biff/submit-tx provides.

Anyway, planning to finish rewriting Biff’s starter app with v2 this week (just on a separate experimental branch until v2 is stable) and then might have a bunch more questions :slight_smile: . Some stuff that would be nice for Biff that’s come to mind so far:

  • xt/listen + xt/open-tx-log
  • Postgres transaction log
  • DigitalOcean S3 for storage – in general it’s supposed to be AWS-S3-compatible, but I see the S3 docs mention setting up other AWS-specific stuff (SNS). Would DigitalOcean S3 for storage even be workable? If not, maybe Postgres for both storage and tx log would be an option for deploying on DigitalOcean?
  • custom indexers – I haven’t done anything with this on v1 yet, but it’s next on my list after v2 stuff. I was planning to look at the lucene module source and look into setting up custom index(es) for derived application/domain data (“you have N unread messages” etc etc).

Actually, maybe that upsert function ought to be a transaction function…
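Roughly something like this, I think – going from memory of the v2 transaction docs, so treat the :put-fn/:call ops and the in-function q as assumptions rather than a working example:

```clojure
;; register once; the quoted fn body runs inside the transaction
(xt/submit-tx node
  [[:put-fn :upsert-user
    '(fn [{:keys [email] :as doc}]
       (let [matches (q '(from :users [{:email $email}])
                        {:args {:email email}})]
         (if (empty? matches)
           [[:put-docs :users (merge {:xt/id (random-uuid)} doc)]]
           [[:update {:table :users
                      :bind [{:email $email}]
                      :set (dissoc doc :email)}
             {:email email}]])))]])

;; then an upsert is a single op, serialized inside the tx:
(xt/submit-tx node
  [[:call :upsert-user {:email "alice@example.com" :color "brown"}]])
```

That would avoid the read-then-write race the node-side version has to paper over with :assert-not-exists.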

This is great feedback, thanks :smiling_face:

Being able to mix and match SQL and XTQL queries is awesome. For simple queries I sometimes find it easier/shorter to just use SQL; I think that’ll be a huge plus for Biff users too (start with SQL, gradually experiment with XTQL).

Openly, this is something we’re currently experimenting with taking a lot further – having the two worlds is quite confusing, especially for people who haven’t used XT previously or don’t have any experience with the Clojure world. If we can get the same benefits (i.e. data-orientation, composability, debugging, unify etc.) but bring the two worlds closer together, I think this’d be a big win.

For malli schema enforcement stuff, I think I can just validate the input args to :put-docs and :update (as in upsert above) without bothering too much about what the exact state of the documents in the DB currently is.

Yep, that’s the route we tend to go too – catch as many errors as we can at that stage, on the assumption that every mutation to the DB goes through that route. The checks we can’t do there (e.g. serializability) go in the DB itself, either via DML (if it can express them – hopefully a lot more can now, especially with :assert-exists) or tx-fns, as you say.

  • xt/listen + xt/open-tx-log

noted, will add Biff as a :heavy_plus_sign: on these :slight_smile:

  • Postgres transaction log

I’m not so sure this one will happen - if anything it’s looking like we may bring the tx-log in-node instead (but no promises there). In any event, we’re certainly aware of and agree with the desire for a non-Kafka tx-log.

  • DigitalOcean S3 for storage – in general it’s supposed to be AWS-S3-compatible, but I see the S3 docs mention setting up other AWS-specific stuff (SNS). Would DigitalOcean S3 for storage even be workable? If not, maybe Postgres for both storage and tx log would be an option for deploying on DigitalOcean?

Yeah – the need for SNS here is to get notifications of new files, essentially so that we don’t have to list-files more often than we need to in order to find out what’s new. I don’t know if DO supports that – tbh, we’ve been focusing more on the Big Three, but we can probably figure out an equivalent on DO.

  • custom indexers

This’ll hopefully be something we can do better when we have an in-house tx-log :slight_smile:

Thanks as always @jacobobryant :pray:

James


Interesting stuff, thanks!