Thomas,

My apologies for not replying sooner. Oddly, Google Groups did not flag your response as new, and I missed it.
Here is the scenario: we have a network of machines connected on a private LAN/WAN -- upwards of 500 machines -- all of which need a synchronized copy of an embedded H2 database that is (of course) subject to change. We have a system in place that allows us to easily send around synchronization messages, and since we are read-mostly, we accept a single primary with many secondaries. This architecture was chosen because each system must be able to run standalone for extended periods in the case where the network becomes unavailable.

We also have the requirement -- under normal circumstances -- to be able to make changes to the database (add some new shared data, for example) and have it on the clients within a few seconds. We were hoping to use some sort of transaction log to play back transactions on each of the secondary databases without having to build up the SQL from the trigger change objects.

One other note: we are bandwidth-limited on the WAN, so we are very conscious of being as efficient as possible on the wire. Hence serializing heavy objects is generally avoided, and compressed strings and byte arrays are used instead.

Regards,
Neal

On Mar 22, 1:10 pm, Thomas Mueller <[email protected]> wrote:
> Hi,
>
> > The specific need is: to capture the complete and ordered list of
> > transactions (DML & DDL) that a frontend db has committed between two
> > specific points in time (checkpoints) in a portable format of retained
> > transaction log fragments. That can be an SQL batch script or any other
> > replayable form.
>
> I would like to understand what problem you want to solve. It's
> always interesting to know how you would solve the problem, but for me
> it's more important to know the problem itself first. Maybe there are
> other ways to solve it.
>
> Example (I just made that up): "I write a web-based backup application
> that backs up all files to the internet. A competitor to 'Dropbox'. To
> do that, I want to use Amazon EC2 instances, and plan to use H2 to
> store the backup metadata. The metadata needs to be 100% fail-safe,
> so I plan to implement some sort of clustering so data is not stored
> in just one place but in at least two places, not only on Amazon but
> also on my own server."
>
> > The use case is any scenario where it's needed to efficiently capture
> > some workload with the purpose of:
> > - propagating it to other databases
> > - implementing incremental backup (checkpoint with full backup +
> >   transaction log retention // restore full backup + transaction log
> >   replay)
> > - auditing or gathering stats on the workload
>
> That's still quite abstract.
>
> H2 already supports "clustering". From your description I don't know
> why you can't use that, or what features are missing. An alternative
> is a replicating file system implementation that stores all changes
> not just in the local files but also sends the changes to a remote
> system.
>
> But first I would like to know the concrete use case.
>
> Regards,
> Thomas

--
You received this message because you are subscribed to the Google Groups "H2 Database" group.
To post to this group, send email to [email protected].
To unsubscribe from this group, send email to [email protected].
For more options, visit this group at http://groups.google.com/group/h2-database?hl=en.
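To make the wire-efficiency point concrete, here is a minimal sketch of how a replayable SQL log fragment could be compressed into a byte array before broadcast and restored on a secondary. This is an illustration only, not part of either poster's actual system: the class name, methods, and sample SQL are made up, and it uses nothing beyond `java.util.zip` from the JDK.

```java
import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;
import java.util.zip.Deflater;
import java.util.zip.Inflater;

// Hypothetical codec for a transaction-log fragment: the primary
// compresses the SQL text to a byte array for the bandwidth-limited
// WAN; each secondary inflates it and replays the statements.
public class LogFragmentCodec {

    static byte[] compress(String sql) {
        Deflater deflater = new Deflater(Deflater.BEST_COMPRESSION);
        deflater.setInput(sql.getBytes(StandardCharsets.UTF_8));
        deflater.finish();
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buf = new byte[4096];
        while (!deflater.finished()) {
            out.write(buf, 0, deflater.deflate(buf));
        }
        deflater.end();
        return out.toByteArray();
    }

    static String decompress(byte[] wire) throws Exception {
        Inflater inflater = new Inflater();
        inflater.setInput(wire);
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buf = new byte[4096];
        while (!inflater.finished()) {
            out.write(buf, 0, inflater.inflate(buf));
        }
        inflater.end();
        return new String(out.toByteArray(), StandardCharsets.UTF_8);
    }

    public static void main(String[] args) throws Exception {
        // Made-up fragment standing in for a captured batch of DML.
        String fragment =
            "INSERT INTO SHARED_DATA(ID, VALUE) VALUES(1, 'a');\n" +
            "UPDATE SHARED_DATA SET VALUE = 'b' WHERE ID = 1;\n";
        byte[] wire = compress(fragment);
        System.out.println(decompress(wire).equals(fragment)); // prints "true"
    }
}
```

SQL text compresses well because of its repetitive keywords and identifiers, so for a read-mostly workload like the one described, shipping deflated statement batches is typically far cheaper on the wire than serializing per-row change objects.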
