This is something I hadn't thought of - and I think this is how the fossil source itself is propagated to its official mirrors. I don't know why it didn't occur to me, unless it is simply an instance of "when you are a hammer, everything is a nail" - I've been looking at container-based replication (e.g. docker swarm mode).
At any rate, my comment regarding fossil not being the only service in the category was related to apps which are difficult to separate from their data. The fossil front-end can't be separated from the data because the front-end and back-end are the same. A mysql database server cannot be separated from its data, but any of its front-ends could be, because they are built to obtain their data over the network. In the end, I hate that I spent two days on something I should have logically mapped out before I got started - but I've learned some things, so it wasn't all wasted time. I'll look at using fossil to provide the replication for itself and move on to the next service. Thanks again for your insight.

----- On Dec 21, 2017, at 4:10 PM, Warren Young war...@etr-usa.com wrote:

> On Dec 21, 2017, at 1:00 PM, dewey.hyl...@gmail.com wrote:
>>
>> That's where the NAS and sshfs came into play.
>
> You seem to be trying to use containers and such to provide distributed
> service, but Fossil already does that: it's a DVCS. There's no one telling
> you it must live in only one place.
>
> Therefore:
>
> Option 1: Run the container anywhere you like, but with its internal Fossil
> storing to the container's view of the host OS, not to some other machine
> over a network file system. Then from another computer, clone that
> repository onto the SAN or NAS. Periodically, run a sync. Now your repo is
> both in the container and on the NAS/SAN.
>
> Option 2: If the NAS permits, run a Fossil instance there. Clone it into the
> container for actual use. Whether syncs mostly go first to the container or
> to the NAS and are then pushed to the other doesn't much matter. Again,
> think distributed.
>
> Either way, Fossil gets a local, real POSIX-compliant file system for
> SQLite, and uses its own sync protocol for inter-host operations, which
> means that SQLite transactions end up avoiding the need to worry about
> network unreliability. The clone/push will either complete successfully or
> it will be wholly rolled back to the prior safe state.
>
>> I'm sure fossil won't be the only service in that category - I just
>> started with fossil because I *thought* it would be an easy win.
>
> Any DBMS is going to have problems with sshfs. It's not something special
> to SQLite.
>
> If you mean to reference VCSes competing with Fossil, it's at best a "push"
> (in the poker sense) when it comes to networked data reliability with your
> current storage design, simply because reliable storage in the face of
> multiple writers requires correct locking.
>
> Switching to another VCS may even make things worse. Fossil looks like it
> is causing problems here, but only because it's trying to do things in an
> ACID-compliant fashion, where other systems might not even try, and so
> *appear* to have fewer problems.
>
> I've tried researching Git ACID compliance and concurrency, and all I see
> is raw speculation, no hard claims from people who actually know what
> they're talking about, and have battle-tested it.
>
> SQLite, by contrast, is very well known to be a durable data store...*if*
> you put it on a filesystem with correct locking semantics!
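
P.S. For my own notes, here is roughly what I think Option 1 looks like from
the command line. The repository path, hostname, and port below are just
placeholders, and I'm assuming the machine doing the NAS-side clone sees the
NAS as a normal local filesystem rather than over sshfs:

    # Inside the container: keep the repository on the container's own
    # filesystem and serve it over HTTP (path and port are placeholders).
    fossil server /data/project.fossil --port 8080

    # From a machine that mounts the NAS as a real local filesystem:
    # take the initial clone of the container's repository onto the NAS.
    fossil clone http://container-host:8080/ /mnt/nas/project.fossil

    # Then periodically (cron, etc.) pull new commits into the NAS copy.
    # Fossil's sync protocol handles the network hop; SQLite only ever
    # touches local storage on either end.
    fossil pull http://container-host:8080/ -R /mnt/nas/project.fossil

If I go with Option 2 instead, the direction just reverses: run "fossil
server" against the repository on the NAS and "fossil clone" it into the
container, then push/pull between the two.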