Ugh, that is bad news. I was interested in the ssh method for ease of use
as well as for its encrypted communications ... I suppose NFS is another
possibility, but I'm not a fan of it for several reasons. And maybe iSCSI,
if ...

Here is what I'm currently attempting to accomplish:

We have redundant storage options (SAN, NAS) to leverage. These are largely
static services whose hardware rarely changes. They get backed up, and some
of them are also replicated off-site for DR purposes. I am told to trust the
storage and to find ways to make our apps and services more redundant.

We are using containers in docker swarm mode for other apps, so I was trying
to find a way to fit fossil into that picture. The load balancer points to all
backend swarm nodes. One instance of the fossil service can be fired up on any
node, and all nodes redirect traffic to the correct location. If the node
running fossil goes away, an instance is started elsewhere. For this to work,
of course, I need shared storage. That's where the NAS and sshfs came into
play.
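If it helps, here is roughly how I would express that setup as a swarm service
(the image name, volume, and port below are hypothetical placeholders; the
"fossil-data" volume would have to be backed by the shared storage):

```shell
# Hypothetical sketch of the swarm service described above. Image name,
# volume, and port are placeholders, not a working configuration.
docker service create \
  --name fossil \
  --replicas 1 \
  --publish published=8080,target=8080 \
  --mount type=volume,source=fossil-data,target=/fossils \
  example/fossil \
  fossil server --port 8080 --repolist /fossils

# Swarm's routing mesh answers on port 8080 on every node and forwards
# to whichever node currently runs the single replica; if that node
# dies, swarm restarts the task elsewhere.
```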

I've been using fossil in a container for quite a long while, just not in
swarm mode - so the app container is not decoupled from the data, which makes
failing over to another node quite a bit more difficult. I do understand that
fossil uses SQLite, which of course is *not* a network-friendly database like
MySQL. Its simplicity is why I love it, and why I will stick with it. I'm
just hoping to find a solution to my need that is equally simple - sshfs
would have been great in that regard.
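To make concrete why the missing lock support matters: SQLite uses file locks
to keep a second writer out, and the danger on sshfs is that those locks never
reach the remote host, so writers on two machines could both proceed. A small
local demonstration of the locking itself:

```python
import os
import sqlite3
import tempfile

# Demonstrate the write lock SQLite takes; on sshfs this lock would be
# local-only, so a writer on another host would not see it.
path = os.path.join(tempfile.mkdtemp(), "demo.fossil")

a = sqlite3.connect(path, timeout=0, isolation_level=None)  # fail fast, manual txns
b = sqlite3.connect(path, timeout=0, isolation_level=None)

a.execute("CREATE TABLE t (x)")

a.execute("BEGIN IMMEDIATE")         # connection a takes the write lock
a.execute("INSERT INTO t VALUES (1)")

try:
    b.execute("BEGIN IMMEDIATE")     # refused while a holds the lock
except sqlite3.OperationalError as e:
    print("second writer refused:", e)   # "database is locked"

a.execute("COMMIT")                  # lock released; b may write now
```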

Any other simple solution would be welcomed. Otherwise, I'll just have to
stick with the tried-and-true backup-and-restore method and accept not being
able to move the fossil service around very easily. But I'm sure fossil won't
be the only service in that category - I just started with fossil because I
*thought* it would be an easy win.
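In the meantime, one low-tech hedge that fossil itself makes easy: since
repositories sync cheaply over HTTP, a cron job on a standby node can keep a
warm copy pulled from the live instance, so fail-over becomes "repoint the
load balancer" rather than a full restore. A sketch, with a hypothetical host
name and paths:

```shell
# Hypothetical crontab entry for a standby node. Assumes the standby
# copy was seeded once with "fossil clone"; each pull then fetches only
# new artifacts, so fail-over loses at most one cron interval of work.
*/5 * * * * /usr/bin/fossil pull https://fossil.example.com/repo -R /fossils/repo.fossil
```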

Thanks for your insight.

----- On Dec 21, 2017, at 2:02 PM, Warren Young [email protected] wrote:

> On Dec 21, 2017, at 11:40 AM, [email protected] wrote:
>> 
>>  [email protected]:fossils on /fossils type fuse.sshfs
>>  (rw,nosuid,nodev,relatime,user_id=0,group_id=0)
> 
> Running SQLite — upon which Fossil is based — over sshfs is a bad idea.  The
> current implementation doesn’t even try to implement the file locking
> operation:
> 
>    https://github.com/libfuse/sshfs/blob/master/sshfs.c#L3304
> 
> That structure definition would have to include the “.lock” member for the
> sort of locking SQLite does for this to be safe.
> 
> See also:
> 
>    https://www.sqlite.org/howtocorrupt.html#_filesystems_with_broken_or_missing_lock_implementations
> 
> There are other network file systems with locking semantics, but it’s often
> optional, and when present and enabled, it is often buggy.
> 
> Here’s hoping you have some other option.  Best would be to store the
> *.fossils inside the container.  Second best would be to map a [virtual]
> block device into the container and put a normal Linux file system on it.
> _______________________________________________
> fossil-users mailing list
> [email protected]
> http://lists.fossil-scm.org:8080/cgi-bin/mailman/listinfo/fossil-users