On "The DIY Bitemporality Challenge" blog post

Hello, I found the recent XTDB blog article "The DIY Bitemporality Challenge" an interesting read.

I work with stock-level data myself in a legacy (MySQL) database application that was not designed with temporality in mind, and I have been working through past grief caused by a lack of auditability and recoverability when software-level logging fails.

While learning from the XTDB-style architecture as a good example and building more robust custom audit logging within the application in Lisp, I found a couple of methods to achieve some similar benefits natively within the MySQL database.

One option is to introduce WITH SYSTEM VERSIONING in a table definition. This is a SQL:2011 feature that MariaDB supports natively; in PostgreSQL something comparable is also available but more involved (e.g. via the temporal_tables extension). It is not fully bitemporal as I understand it, since only system time is tracked rather than valid time as well, but historic data is kept available and is queryable against snapshot dates. I am still assessing how well this could be applied to the original legacy database tables.
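As a rough illustration, here is a minimal sketch using MariaDB's system-versioning syntax (10.3+); the `stock_level` table and its columns are made-up names matching my use case, not our actual schema:

```sql
-- Minimal sketch of a system-versioned table (MariaDB 10.3+ syntax);
-- table and column names are illustrative, not the real legacy schema.
CREATE TABLE stock_level (
    product_id   INT UNSIGNED NOT NULL,
    warehouse_id INT UNSIGNED NOT NULL,
    quantity     INT NOT NULL,
    PRIMARY KEY (product_id, warehouse_id)
) WITH SYSTEM VERSIONING;

-- Updates transparently archive the previous row version.
UPDATE stock_level
   SET quantity = 42
 WHERE product_id = 1 AND warehouse_id = 1;

-- Query the table as it looked at a past point in system time.
SELECT *
  FROM stock_level
   FOR SYSTEM_TIME AS OF TIMESTAMP '2024-01-01 00:00:00';
```

For retrofitting, MariaDB also allows versioning to be switched on for an existing table with `ALTER TABLE stock_level ADD SYSTEM VERSIONING;`, which is the part I still need to assess against the legacy tables.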

Another database-level trick might be to record table transaction information via MySQL triggers, producing a separate audit log table by running an audit-log insert on every change. The performance implications of this would be interesting to see in practice, but it could be less intrusive to adopt on top of an existing legacy DB model, since it avoids touching the original table schema definitions that upstream development keeps evolving.
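To sketch the idea (again with illustrative, hypothetical names), an AFTER UPDATE trigger could copy the old and new values into an audit table; separate AFTER INSERT and AFTER DELETE triggers would be needed for full coverage:

```sql
-- Illustrative audit table for stock_level changes (hypothetical names).
CREATE TABLE stock_level_audit (
    audit_id     BIGINT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
    product_id   INT UNSIGNED NOT NULL,
    warehouse_id INT UNSIGNED NOT NULL,
    old_quantity INT,
    new_quantity INT,
    changed_at   TIMESTAMP(6) NOT NULL DEFAULT CURRENT_TIMESTAMP(6)
);

DELIMITER //

-- Record every stock-quantity change alongside the original statement.
CREATE TRIGGER stock_level_audit_upd
AFTER UPDATE ON stock_level
FOR EACH ROW
BEGIN
    INSERT INTO stock_level_audit
        (product_id, warehouse_id, old_quantity, new_quantity)
    VALUES
        (NEW.product_id, NEW.warehouse_id, OLD.quantity, NEW.quantity);
END//

DELIMITER ;
```

Since the trigger runs inside the same transaction as the triggering statement, the audit insert commits or rolls back together with the original change, but that is also exactly where the write-amplification cost would show up.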


Hey @jasalt thanks for the comment! It’s always useful to hear experiences of working around these mutability/reporting issues in traditional SQL databases. How did you handle the migration of record history as the schema evolves? Did you have audit triggers configured for every table by default from Day 1, or did you only set them up when you realised the requirements demanded it?

> How did you handle the migration of record history as the schema evolves?

@refset Good question. This could pose issues if the schema were evolving, since manual migration scripting would be required. In my case I'm mainly interested in product stock levels across warehouses (unique product identifier, warehouse identifier, stock quantity), and knowing that the schema around this information won't suddenly change, it should be manageable.

> Did you have audit triggers configured for every table by default from Day 1, or did you only set them up when you realised the requirements demanded it?

I haven’t experimented with triggers yet, and the system has been running for some years. Initially, when the need for better data auditability arose, a daily scheduled stock-levels data export script was set up, and some historical data was recovered from DB backups for analysis, to some effect.

Looking at the bigger picture, however, I realize that database-level logging alone is not very effective at recording the sources that trigger changes in the data, so it still needs to complement application-level audit logging.
