On Sun, 11 Nov 2012 10:39:27 +0100, Matt Welland <estifo...@gmail.com> wrote:

> sshfs is cool but in a corporate environment it can't always be used. For
> example fuse is not installed for end users on the servers I have access
> to.

> I would also be very wary of sshfs and multi-user access. Sqlite3 locking
> on NFS doesn't always work well, I imagine that locking issues on sshfs

it doesn't? in which way? and are the mentioned problems restricted to NFS, or do they affect other file systems (zfs, qfs, ...) as well? do you mean that a 'central' repository could be harmed if two users try to push at the same time (and would corruption propagate to the users' "local" repositories later on)? I do hope not...


> could well be worse.

> sshfs is an excellent work-around for an expert user but not a replacement
> for the feature of ssh transport.

yes, I would love to see a stable solution that does not suffer from interference from terminal output (there are people out there who love the good old `fortune' as part of their login script...).

btw: why could fossil not simply(?) filter a reasonable amount of terminal output for the occurrence of a sufficiently strong magic pattern indicating that the "noise" has passed by and fossil can go to work? right now a mere `echo " "' (sending a single blank) in the login script suffices to make the transfer fail. my understanding is that fossil _does_ send something like `echo test' (is this true?). all unexpected tty output from the login scripts would come _before_ that, so why not test for the expected text ('test' just not being unique/strong enough) at the end of whatever is sent (up to a reasonable length)? is this a stupid idea?
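
just to illustrate what I have in mind (only a rough sketch of the idea, not what fossil actually does; the marker string, user@host and the remote command are arbitrary placeholders):

    # sketch only, not fossil's actual protocol; MARKER, user@host and
    # the remote command are arbitrary placeholders
    MARKER='FOSSIL-XFER-BEGIN-5c1e2a'
    # the remote side prints the marker first, then the real payload;
    # the local side discards everything up to and including the marker,
    # so any login-script chatter (fortune, echo " ", ...) is ignored
    ssh user@host "echo $MARKER; cat remote-payload" \
      | awk -v m="$MARKER" 'seen { print } $0 == m { seen = 1 }'

something along these lines built into the transport would make it immune to whatever the login scripts print before the marker.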





On Sun, Nov 11, 2012 at 2:01 AM, Ramon Ribó <ram...@compassis.com> wrote:


> Sshfs didn't fix the problems that I was having with fossil+ssh, or at
> least only did so partially.

Why not? In what way did sshfs fail to give you functionality equivalent
to remote access to a fossil database through ssh?



2012/11/11 Timothy Beyer <bey...@fastmail.net>

At Sat, 10 Nov 2012 22:31:57 +0100,
j. van den hoff wrote:
>
> thanks for responding.
> I managed to solve my problem in the meantime (see my previous mail in
> this thread), but I'll make a memo of sshfs and have a look at it.
>
> joerg
>

Sshfs didn't fix the problems that I was having with fossil+ssh, or at
least only did so partially. The problems that I was having with ssh were
different, though.

What I'd recommend doing is tunneling http or https through ssh, and
hosting all of your fossil repositories on the host computer on your web
server of choice via cgi.  I do that with lighttpd, and it works flawlessly.
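
The setup is roughly as follows (user, host, port numbers and all paths/names below are only placeholders; adapt them to your own setup):

    # all user/host names, ports and paths here are placeholders
    #
    # on the server: a small CGI script, e.g. /var/www/cgi-bin/myrepo.cgi
    #     #!/usr/bin/fossil
    #     repository: /home/user/repos/myrepo.fossil
    #
    # in lighttpd.conf: enable CGI so the script runs via its shebang line
    #     server.modules += ( "mod_cgi" )
    #     cgi.assign     = ( ".cgi" => "" )
    #
    # on the client: forward a local port to the server's web server ...
    ssh -N -L 8080:localhost:80 user@host &
    # ... and clone/sync over the tunnel as plain http
    fossil clone http://localhost:8080/cgi-bin/myrepo.cgi myrepo.fossil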

Tim




--
Using Opera's revolutionary email client: http://www.opera.com/mail/
_______________________________________________
fossil-users mailing list
fossil-users@lists.fossil-scm.org
http://lists.fossil-scm.org:8080/cgi-bin/mailman/listinfo/fossil-users
