Re: [fossil-users] fossil 1.24 not working via ssh

2012-11-12 Thread j. van den hoff
I've tested this here (with Fossil-4473a27f3b6e049e) but can only report  
partial success:


* it works using bash/ksh as login shell on the remote machine _if_ there  
is not too much text (the allocated buffer size (5) is still rather  
modest but usually sufficient) coming in from `.profile' over the ssh  
connection. so that's a clear step forward. however:


* it does _not_ work if the default verbose ubuntu motd (multi-line,  
blank-line separated, multi-paragraph, but much shorter than 5 bytes)  
comes in. the (visible) offending text looks something like this:


8<------------------------------------------
Welcome to Ubuntu 12.04.1 LTS (GNU/Linux 3.2.0-32-generic x86_64)

 * Documentation:  https://help.ubuntu.com/

  System information as of Mon Nov 12 13:20:41 CET 2012

  System load:  0.04  Processes:   114
  Usage of /:   72.3% of 9.96GB   Users logged in: 2
  Memory usage: 22%   IP address for eth0: 123.456.78.90
  Swap usage:   0%

  Graph this data and manage this system at  
https://landscape.canonical.com/


0 packages can be updated.
0 updates are security updates.
8<------------------------------------------

  - what's strange is: if I copy this text into an `echo' command within  
`.profile' and then deactivate the MOTD (so seemingly getting the same  
stuff sent over the ssh connection during login), it works flawlessly!?  
my guess would be that there are some unprintable characters/escapes sent  
as well which I do not see, so that copying the MOTD to `.profile' is not  
really the same thing as what happens when ubuntu sends the stuff.


* it also does _not_ work (with bash, that is; ksh keeps working) if I  
myself send some escape sequences from my login scripts (as mentioned in a  
previous mail, intended to dynamically adjust my xterm titlebars). what's  
happening here is completely unclear to me, since it seems bash specific.  
what's worse: issuing the respective `echo' directly in the script instead  
of within a shell function (as is usually done in my setup) does not lead  
to a failure. my setup might be somewhat esoteric here, so maybe it's not  
too important, but it of course indicates that there still is something  
fundamentally not OK.


* and it does not at all work with tcsh as login shell on the remote  
machine (even if login is completely silent). in this case I get the error  
message

   tput: No value for $TERM and no -T specified
   TERM: Undefined variable.
   Fossil-4473a27f3b6e049e/fossil: ssh connection failed: [test1
   probe-4f5d9ab4]
  so, seemingly `tcsh' users are out of luck anyway.

questions:

* maybe the (echo/flush) process has to be iterated one further time to  
make fossil happy with ubuntu's motd (after all it's not the least  
frequent linux distro)?


* could fossil (or a debug version) not provide an (additional) hexdump (a  
la `hexdump -C' on linux) of the content of `zIn' instead of just using  
`fossil_fatal("ssh connection failed: [%s]", zIn);'? in this way one  
might at least be able to recognize what exactly is coming in, which  
might help in tracking down the source of the trouble: it need not be  
printable characters coming over the ssh connection after all.
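  just to illustrate what I mean, a rough standalone sketch (plain stdio,
  writing to stderr; `zIn' is the buffer named above, everything else is
  made up and would of course have to be adapted to the actual fossil code):

  #include <stdio.h>

  /* sketch: dump the raw bytes received over the ssh connection so that
  ** unprintable characters and escape sequences become visible
  ** (output format roughly like `hexdump -C'). */
  static void dump_ssh_reply(const char *zIn, int nIn){
    int i, j;
    for(i=0; i<nIn; i+=16){
      fprintf(stderr, "%08x  ", i);
      for(j=i; j<i+16; j++){
        if( j<nIn ) fprintf(stderr, "%02x ", (unsigned char)zIn[j]);
        else        fprintf(stderr, "   ");
      }
      fprintf(stderr, " |");
      for(j=i; j<i+16 && j<nIn; j++){
        unsigned char c = (unsigned char)zIn[j];
        fprintf(stderr, "%c", (c>=0x20 && c<0x7f) ? c : '.');
      }
      fprintf(stderr, "|\n");
    }
  }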



j.


On Sun, 11 Nov 2012 23:44:31 +0100, Richard Hipp d...@sqlite.org wrote:

On Sun, Nov 11, 2012 at 5:11 PM, Matt Welland estifo...@gmail.com  
wrote:


I'll top-post an answer to this one as this thread has wandered and gotten  
very long, so who knows who is still following :)

I made a simple tweak to the ssh code that gets ssh working for me on
Ubuntu and may solve some of the login shell related problems that have
been reported with respect to ssh:


http://www.kiatoa.com/cgi-bin/fossils/fossil/fdiff?v1=935bc0a983135b26&v2=61f9ddf1e2c8bbb0



Not exactly the same patch, but something quite similar has been checked in  
at http://www.fossil-scm.org/fossil/info/4473a27f3b - please try it out and  
let me know if it clears any outstanding problems, or if I missed some  
obvious benefit of Matt's patch in my refactoring.





Joerg asked if this will make it into a future release. Can Richard or
one of the developers take a look at the change and comment?

Note that unfortunately this does not fix the issues I'm having with
fsecure ssh but I hope it gets us one step closer.

Thanks,

Matt
-=-



On Sun, Nov 11, 2012 at 1:53 PM, j. v. d. hoff  
veedeeh...@googlemail.comwrote:



On Sun, 11 Nov 2012 19:35:25 +0100, Matt Welland estifo...@gmail.com
wrote:

 On Sun, Nov 11, 2012 at 2:56 AM, j. van den hoff

veedeeh...@googlemail.com wrote:

 On Sun, 11 Nov 2012 10:39:27 +0100, Matt Welland  
estifo...@gmail.com

wrote:

 sshfs is cool but in a corporate environment it can't always be  
used.

For


example fuse is not installed for end users on the servers I have
access
to.

I would also be very wary of sshfs and multi-user access. Sqlite3
locking
on NFS doesn't always 

Re: [fossil-users] fossil 1.24 not working via ssh

2012-11-12 Thread Richard Hipp
On Mon, Nov 12, 2012 at 8:36 AM, j. van den hoff
veedeeh...@googlemail.comwrote:

 I've tested this here (with Fossil-4473a27f3b6e049e) but can only report
 partial success:

 * it works using bash/ksh as login shell on the remote machine _if_ there
 is not too much text (the allocated buffer size (5) is still rather
 modest but usually sure sufficient) coming in from `.profile' over the ssh
 connection. so that's a clear step forward. however:

 * it does _not_ work if the default verbose (multi-line/blank line
 separated multi-paragraph, but much shorter than 5 bytes) ubuntu motd
 stuff comes in. the (visible) offending text looks something like this:


Please try again using http://www.fossil-scm.org/fossil/info/00cf858afe and
let me know if the situation improves.  If it still is not working, please
run with the --sshtrace command-line option and send me the diagnostic
output.  Thanks.




 8<------------------------------------------
 Welcome to Ubuntu 12.04.1 LTS (GNU/Linux 3.2.0-32-generic x86_64)

  * Documentation:  https://help.ubuntu.com/

   System information as of Mon Nov 12 13:20:41 CET 2012

   System load:  0.04  Processes:   114
   Usage of /:   72.3% of 9.96GB   Users logged in: 2
   Memory usage: 22%   IP address for eth0: 123.456.78.90
   Swap usage:   0%

   Graph this data and manage this system at https://landscape.canonical.com/


 0 packages can be updated.
 0 updates are security updates.
 8<------------------------------------------

   - what's strange is: if I copy this text into an `echo' command within
 `.profile' and then deactivate the MOTD (so seeminly getting the same stuff
 send over the
 ssh connection during login), it works flawlessly!?. my guess would be
 that there are some unprintable characters/escapes sent as well which I do
 not see
 so that copying the MOTD to `.profile' is not really the same thing as
 what is happening when ubuntu sends the stuff.

 * it also does _not_ work (with bash that is: ksh keeps working) if I
 myself send some escape sequences from my login scripts (as mentioned in a
   previous mail intended to dynamically adjust my xterm-titlebars). what's
 happening here is completely unclear to me, since it seems bash specific.
 what's worse:
   issuing the respective `echo' directly in the script instead of within a
 shell-function (as is usually done in my setup) does not lead to a failure.
   my setup might be somewhat esoteric here, so maybe it's not too
 important, but it indicates of course that there still is something
 fundamentally not OK.

 * and it does not at all work with tcsh as login shell on the remote
 machine (even if login is completely silent). in this case I get the error
 message
tput: No value for $TERM and no -T specified
TERM: Undefined variable.
Fossil-4473a27f3b6e049e/fossil: ssh connection failed: [test1
probe-4f5d9ab4]
   so, seemingly `tcsh' users are out of luck anyway.

 questions:

 * maybe the (echo/flush) process has to be iterated one further time to
 make fossil happy with ubuntu's motd (after all it's not the least frequent
 linux distro)?

 * could fossil (or a debug version) not provide a (additional) hexdump (a
 la `hexdump -C' on linux) of the content of `zIn' instead of using
  `fossil_fatal(ssh connection failed: [%s], zIn);'? in this way one
 might be able to at least to recognize what exactly is coming in which
 might help in tracking
  down the source of the trouble: it need not be printable characters
 coming over the ssh connection after all.


 j.



 On Sun, 11 Nov 2012 23:44:31 +0100, Richard Hipp d...@sqlite.org wrote:

  On Sun, Nov 11, 2012 at 5:11 PM, Matt Welland estifo...@gmail.com
 wrote:

  I'll top-post an answer to this one as this thread has wandered and
 gotten
 very long, so who knows who is still following :)

 I made a simple tweak to the ssh code that gets ssh working for me on
 Ubuntu and may solve some of the login shell related problems that have
 been reported with respect to ssh:


 http://www.kiatoa.com/cgi-bin/fossils/fossil/fdiff?v1=935bc0a983135b26&v2=61f9ddf1e2c8bbb0


 Not exactly the same patch, but something quite similar has been checked
 in
 at http://www.fossil-scm.org/fossil/info/4473a27f3b - please try it out and
 let me know if it clears any outstanding problems, or if I missed some
 obvious benefit of Matt's patch in my refactoring.




 Joerg asked if this will make it into a future release. Can Richard or
 one of the developers take a look at the change and comment?

 Note that unfortunately this does not fix the issues I'm having with
 fsecure ssh but I hope it gets us one step closer.

 Thanks,

 Matt
 -=-



 On Sun, Nov 11, 2012 at 1:53 PM, j. v. d. hoff 

Re: [fossil-users] fossil 1.24 not working via ssh

2012-11-12 Thread Richard Hipp
On Mon, Nov 12, 2012 at 9:51 AM, Richard Hipp d...@sqlite.org wrote:


 Please try again using http://www.fossil-scm.org/fossil/info/00cf858afe and 
 let me know if the situation improves.  If it still is not working,
 please run with the --sshtrace command-line option and send me the
 diagnostic output.  Thanks.


Further improvements.  Please try:
http://www.fossil-scm.org/fossil/info/5776dfad81


-- 
D. Richard Hipp
d...@sqlite.org
___
fossil-users mailing list
fossil-users@lists.fossil-scm.org
http://lists.fossil-scm.org:8080/cgi-bin/mailman/listinfo/fossil-users


Re: [fossil-users] fossil 1.24 not working via ssh

2012-11-12 Thread j. van den hoff

On Mon, 12 Nov 2012 15:51:04 +0100, Richard Hipp d...@sqlite.org wrote:


On Mon, Nov 12, 2012 at 8:36 AM, j. van den hoff
veedeeh...@googlemail.comwrote:


I've tested this here (with Fossil-4473a27f3b6e049e) but can only report
partial success:

* it works using bash/ksh as login shell on the remote machine _if_  
there

is not too much text (the allocated buffer size (5) is still rather
modest but usually sure sufficient) coming in from `.profile' over the  
ssh

connection. so that's a clear step forward. however:

* it does _not_ work if the default verbose (multi-line/blank line
separated multi-paragraph, but much shorter than 5 bytes) ubuntu  
motd

stuff comes in. the (visible) offending text looks something like this:



Please try again using http://www.fossil-scm.org/fossil/info/00cf858afe  
and

let me know if the situation improves.  If it still is not working,


indeed it does: congratulations!

* it now works without problems both with bash and ksh and no longer  
chokes on ubuntu's motd stuff (nor on 'my' escape sequences).
  putting an excessive amount of text in the way still leads to a failure,  
but that's probably rather a feature...


* with tcsh, I most of the time get
8<------------------------------------------
tput: No value for $TERM and no -T specified
TERM: Undefined variable.
Bytes  Cards  Artifacts Deltas
Sent:  53  1  0  0
waiting for server...
8<------------------------------------------
where it hangs indefinitely (an extrapolation from the ~30 seconds I  
actually waited...)


sporadically, however, it does not issue 'waiting for server' (or it's  
overwritten too quickly) and actually completes successfully.
this might very well be a real tcsh issue (in which case one might contact  
the maintainers), but if this could be handled reliably as well, I presume  
the problem would be solved completely.
here, several colleagues (not me any more) still stick with tcsh for  
interactive work, and it quite probably still is the default shell  
on most BSD descendants (not on Mac OS X, though).

thanks a lot.


please
run with the --sshtrace command-line option and send me the diagnostic
output.  Thanks.


I see the option in the source but it seems not to be recognized. how is  
it supposed to be invoked?








8<------------------------------------------
Welcome to Ubuntu 12.04.1 LTS (GNU/Linux 3.2.0-32-generic x86_64)

 * Documentation:  https://help.ubuntu.com/

  System information as of Mon Nov 12 13:20:41 CET 2012

  System load:  0.04  Processes:   114
  Usage of /:   72.3% of 9.96GB   Users logged in: 2
  Memory usage: 22%   IP address for eth0: 123.456.78.90
  Swap usage:   0%

  Graph this data and manage this system at https://landscape.canonical.com/


0 packages can be updated.
0 updates are security updates.
8<------------------------------------------

  - what's strange is: if I copy this text into an `echo' command within
`.profile' and then deactivate the MOTD (so seeminly getting the same  
stuff

send over the
ssh connection during login), it works flawlessly!?. my guess would  
be
that there are some unprintable characters/escapes sent as well which I  
do

not see
so that copying the MOTD to `.profile' is not really the same thing  
as

what is happening when ubuntu sends the stuff.

* it also does _not_ work (with bash that is: ksh keeps working) if I
myself send some escape sequences from my login scripts (as mentioned  
in a
  previous mail intended to dynamically adjust my xterm-titlebars).  
what's
happening here is completely unclear to me, since it seems bash  
specific.

what's worse:
  issuing the respective `echo' directly in the script instead of  
within a
shell-function (as is usually done in my setup) does not lead to a  
failure.

  my setup might be somewhat esoteric here, so maybe it's not too
important, but it indicates of course that there still is something
fundamentally not OK.

* and it does not at all work with tcsh as login shell on the remote
machine (even if login is completely silent). in this case I get the  
error

message
   tput: No value for $TERM and no -T specified
   TERM: Undefined variable.
   Fossil-4473a27f3b6e049e/fossil: ssh connection failed: [test1
   probe-4f5d9ab4]
  so, seemingly `tcsh' users are out of luck anyway.

questions:

* maybe the (echo/flush) process has to be iterated one further time to
make fossil happy with ubuntu's motd (after all it's not the least  
frequent

linux distro)?

* could fossil (or a debug version) not provide a (additional) hexdump  
(a

la `hexdump -C' on linux) of the content of `zIn' instead of using
 `fossil_fatal(ssh connection failed: [%s], zIn);'? in this way one
might be able to at least to recognize what exactly is coming in which
might help in tracking
 down the source of the trouble: it need not be printable characters

Re: [fossil-users] fossil 1.24 not working via ssh

2012-11-12 Thread Matt Welland
This works for me also but I had to comment out the adding of -p, as the
port was coming through as 0 (this is using fsecure ssh, tcsh, SLES9).

#ifdef __MINGW32__
  blob_appendf(&zCmd, " -P %d", g.urlPort);
#else
  /* blob_appendf(&zCmd, " -p %d", g.urlPort); */
#endif
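
A guard along these lines might be cleaner than commenting the line out;
this is only a sketch, and g.urlDfltPort is my guess at the name of the
default-port field (the real field may well be called something else):

/* sketch: only pass a port option to ssh when the URL actually carried
** an explicit, non-zero port; otherwise let ssh use its own default.
** g.urlDfltPort is an assumed name, check against the real source. */
if( g.urlPort>0 && g.urlPort!=g.urlDfltPort ){
#ifdef __MINGW32__
  blob_appendf(&zCmd, " -P %d", g.urlPort);  /* plink wants -P */
#else
  blob_appendf(&zCmd, " -p %d", g.urlPort);  /* OpenSSH wants -p */
#endif
}

That way a port of 0 (as I'm seeing with fsecure ssh) would simply be
ignored instead of producing "ssh -t -p 0".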

I also haven't made sense of the URL syntax. In rsync I would specify rsync
-avz mrwellan@localhost:~/fossils/fossil.fossil but that doesn't seem to
work. The full path with two leading slashes seems to work.
==
 fsl set | grep ssh
ssh-command  (global) ssh -t
chlr11723 rm mt.fossil* ; ( cd .. ; make ) && ../fossil clone
ssh://localhost://nfs/ch/disks/ch_home_disk002/mrwellan/fossils/megatest.fossil
mt.fossil
make: Nothing to be done for `all'.
ssh -t -p 0 localhost
Error: in option 'port': invalid option value
Broken pipe



On Mon, Nov 12, 2012 at 8:40 AM, j. van den hoff
veedeeh...@googlemail.comwrote:

 On Mon, 12 Nov 2012 15:51:04 +0100, Richard Hipp d...@sqlite.org wrote:

  On Mon, Nov 12, 2012 at 8:36 AM, j. van den hoff
 veedeeh...@googlemail.com wrote:

  I've tested this here (with Fossil-4473a27f3b6e049e) but can only report
 partial success:

 * it works using bash/ksh as login shell on the remote machine _if_ there
 is not too much text (the allocated buffer size (5) is still rather
 modest but usually sure sufficient) coming in from `.profile' over the
 ssh
 connection. so that's a clear step forward. however:

 * it does _not_ work if the default verbose (multi-line/blank line
 separated multi-paragraph, but much shorter than 5 bytes) ubuntu motd
 stuff comes in. the (visible) offending text looks something like this:


 Please try again using 
  http://www.fossil-scm.org/fossil/info/00cf858afe and
 let me know if the situation improves.  If it still is not working,


 indeed it does: congratulation!

 * it now works without problems both with bash and ksh and does no longer
 choke on ubuntu's motd stuff (nor on 'my' escape sequences).
   putting excessively much text in the way still leads to a failure but
 that's probably rather a feature...

 * with tcsh, I most of the time get
 8---

 tput: No value for $TERM and no -T specified
 TERM: Undefined variable.
 Bytes  Cards  Artifacts Deltas
 Sent:  53  1  0  0
 waiting for server...
 8---
 where it hangs infinitely (this is an extrapolation from the actually
 waited ~ 30 sec...)

 sporadically, however, it does not issue 'waiting for server' (or it's
 overwritten to quickly) and actually completes successfully.
 this might very well be a real tcsh issue (in which case one might contact
 the maintainers) but if this could reliably be handled as well I presume
 the problem is solved completely.
 here, several colleagues (not me any more) still stick with tcsh for
 interactive work and it quite probably still is the default shell
 on most BSD descendants (not on macosX, though).

 thanks a lot.


  please
 run with the --sshtrace command-line option and send me the diagnostic
 output.  Thanks.


 I see the command in the source but it seems not to be recognized. how is
 it supposed to be called?





 8<------------------------------------------

 Welcome to Ubuntu 12.04.1 LTS (GNU/Linux 3.2.0-32-generic x86_64)

  * Documentation:  https://help.ubuntu.com/

   System information as of Mon Nov 12 13:20:41 CET 2012

   System load:  0.04  Processes:   114
   Usage of /:   72.3% of 9.96GB   Users logged in: 2
   Memory usage: 22%   IP address for eth0: 123.456.78.90
   Swap usage:   0%

   Graph this data and manage this system at https://landscape.canonical.com/



 0 packages can be updated.
 0 updates are security updates.
 8<------------------------------------------


   - what's strange is: if I copy this text into an `echo' command within
 `.profile' and then deactivate the MOTD (so seeminly getting the same
 stuff
 send over the
 ssh connection during login), it works flawlessly!?. my guess would
 be
 that there are some unprintable characters/escapes sent as well which I
 do
 not see
 so that copying the MOTD to `.profile' is not really the same thing
 as
 what is happening when ubuntu sends the stuff.

 * it also does _not_ work (with bash that is: ksh keeps working) if I
 myself send some escape sequences from my login scripts (as mentioned in
 a
   previous mail intended to dynamically adjust my xterm-titlebars).
 what's
 happening here is completely unclear to me, since it seems bash specific.
 what's worse:
   issuing the respective `echo' directly in the script instead of within
 a
 shell-function (as is usually done in my setup) does not lead to a
 failure.
   my setup might be somewhat esoteric 

Re: [fossil-users] fossil 1.24 not working via ssh

2012-11-12 Thread j. v. d. hoff

On Mon, 12 Nov 2012 16:23:22 +0100, Richard Hipp d...@sqlite.org wrote:


On Mon, Nov 12, 2012 at 9:51 AM, Richard Hipp d...@sqlite.org wrote:



Please try again using  
http://www.fossil-scm.org/fossil/info/00cf858afe and let me know if the  
situation improves.  If it still is not working,

please run with the --sshtrace command-line option and send me the
diagnostic output.  Thanks.



Further improvements.  Please try:
http://www.fossil-scm.org/fossil/info/5776dfad81


still works nicely with bash and ksh ;-). I think that is good news for  
fossil: the ssh protocol should just work with any DVCS, and running into  
trouble in this area might turn people off quickly, I'd say. --


regarding behaviour under tcsh, this remains somewhat erratic for me:  
mostly I get the infinite 'waiting for server...' message, but if I try  
repeatedly, one of the trials (about every third or so) succeeds. so for  
tcsh users fossil might still make a mixed impression.


I, for one, am happy now (and have, as an aside, confirmed my impression  
that ksh93 really is a saner shell than bash).


j.







--
Using Opera's revolutionary email client: http://www.opera.com/mail/
___
fossil-users mailing list
fossil-users@lists.fossil-scm.org
http://lists.fossil-scm.org:8080/cgi-bin/mailman/listinfo/fossil-users


Re: [fossil-users] fossil 1.24 not working via ssh

2012-11-11 Thread Ramon Ribó
 Sshfs didn't fix the problems that I was having with fossil+ssh, or at
least
 only did so partially.

Why not? In what way did sshfs fail to give you functionality equivalent to
accessing a fossil database remotely through ssh?


2012/11/11 Timothy Beyer bey...@fastmail.net

 At Sat, 10 Nov 2012 22:31:57 +0100,
 j. van den hoff wrote:
 
  thanks for responding.
  I managed to solve my problem in the meantime (see my previous mail in
  this thread), but I'll make a memo of sshfs and have a look at it.
 
  joerg
 

 Sshfs didn't fix the problems that I was having with fossil+ssh, or at
 least only did so partially.  Though, the problems that I was having with
 ssh were different.

 What I'd recommend doing is tunneling http or https through ssh, and host
 all of your fossil repositories on the host computer on your web server of
 choice via cgi.  I do that with lighttpd, and it works flawlessly.

 Tim
 ___
 fossil-users mailing list
 fossil-users@lists.fossil-scm.org
 http://lists.fossil-scm.org:8080/cgi-bin/mailman/listinfo/fossil-users

___
fossil-users mailing list
fossil-users@lists.fossil-scm.org
http://lists.fossil-scm.org:8080/cgi-bin/mailman/listinfo/fossil-users


Re: [fossil-users] fossil 1.24 not working via ssh

2012-11-11 Thread Matt Welland
sshfs is cool but in a corporate environment it can't always be used. For
example fuse is not installed for end users on the servers I have access
to.

I would also be very wary of sshfs and multi-user access. Sqlite3 locking
on NFS doesn't always work well; I imagine that locking issues on sshfs
could well be worse.

sshfs is an excellent work-around for an expert user but not a replacement
for the feature of ssh transport.




On Sun, Nov 11, 2012 at 2:01 AM, Ramon Ribó ram...@compassis.com wrote:


  Sshfs didn't fix the problems that I was having with fossil+ssh, or at
 least
  only did so partially.

 Why not? In what sshfs failed to give you the equivalent functionality
 than a remote access to a fossil database through ssh?



 2012/11/11 Timothy Beyer bey...@fastmail.net

 At Sat, 10 Nov 2012 22:31:57 +0100,
 j. van den hoff wrote:
 
  thanks for responding.
  I managed to solve my problem in the meantime (see my previous mail in
  this thread), but I'll make a memo of sshfs and have a look at it.
 
  joerg
 

 Sshfs didn't fix the problems that I was having with fossil+ssh, or at
 least only did so partially.  Though, the problems that I was having with
 ssh were different.

 What I'd recommend doing is tunneling http or https through ssh, and host
 all of your fossil repositories on the host computer on your web server of
 choice via cgi.  I do that with lighttpd, and it works flawlessly.

 Tim
 ___
 fossil-users mailing list
 fossil-users@lists.fossil-scm.org
 http://lists.fossil-scm.org:8080/cgi-bin/mailman/listinfo/fossil-users



 ___
 fossil-users mailing list
 fossil-users@lists.fossil-scm.org
 http://lists.fossil-scm.org:8080/cgi-bin/mailman/listinfo/fossil-users


___
fossil-users mailing list
fossil-users@lists.fossil-scm.org
http://lists.fossil-scm.org:8080/cgi-bin/mailman/listinfo/fossil-users


Re: [fossil-users] fossil 1.24 not working via ssh

2012-11-11 Thread j. van den hoff
On Sun, 11 Nov 2012 10:39:27 +0100, Matt Welland estifo...@gmail.com  
wrote:



sshfs is cool but in a corporate environment it can't always be used. For
example fuse is not installed for end users on the servers I have access
to.

I would also be very wary of sshfs and multi-user access. Sqlite3 locking
on NFS doesn't always work well, I imagine that locking issues on sshfs


it doesn't? in which way? and are the mentioned problems restricted to NFS,  
or do they affect other file systems (zfs, qfs, ...) as well?
do you mean that a 'central' repository could be harmed if two users try  
to push at the same time (and would the corruption propagate to the users'  
local repositories later on)? I do hope not...




could well be worse.

sshfs is an excellent work-around for an expert user but not a  
replacement

for the feature of ssh transport.


yes, I would love to see a stable solution that does not suffer from  
interference from terminal output (there are people out there who love the  
good old `fortune' as part of their login script...).


btw: why could fossil not simply(?) filter a reasonable amount of terminal  
output for the occurrence of a sufficiently strong magic pattern  
indicating that the noise has passed by and fossil can go to work? right  
now, an `echo' of a single blank in the login script suffices to make the  
transfer fail. my understanding is that fossil _does_ send something like  
`echo test' (is this true?). all unexpected output to the tty from the  
login scripts would come _before_ that, so why not test for receiving the  
expected text ('test' just not being unique/strong enough) at the end of  
whatever is sent (up to a reasonable length)? is this a stupid idea?
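
roughly what I have in mind, as a standalone sketch only (the real thing  
would of course go through fossil's own I/O layer rather than stdio, and  
the marker would have to be something far more unique than 'test'):

#include <stdio.h>
#include <string.h>

/* sketch: read and discard whatever the login scripts print until a
** sufficiently unique marker shows up, giving up after a reasonable
** amount of data. */
static int skip_login_noise(FILE *fromRemote, const char *zMarker){
  char zLine[2000];
  long nSeen = 0;
  while( fgets(zLine, sizeof(zLine), fromRemote) ){
    if( strstr(zLine, zMarker) ) return 0;   /* noise has passed by */
    nSeen += (long)strlen(zLine);
    if( nSeen>100000 ) break;                /* reasonable length exceeded */
  }
  return 1;                                  /* marker never arrived */
}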







On Sun, Nov 11, 2012 at 2:01 AM, Ramon Ribó ram...@compassis.com wrote:



 Sshfs didn't fix the problems that I was having with fossil+ssh, or at
least
 only did so partially.

Why not? In what sshfs failed to give you the equivalent functionality
than a remote access to a fossil database through ssh?



2012/11/11 Timothy Beyer bey...@fastmail.net


At Sat, 10 Nov 2012 22:31:57 +0100,
j. van den hoff wrote:

 thanks for responding.
 I managed to solve my problem in the meantime (see my previous mail  
in

 this thread), but I'll make a memo of sshfs and have a look at it.

 joerg


Sshfs didn't fix the problems that I was having with fossil+ssh, or at
least only did so partially.  Though, the problems that I was having  
with

ssh were different.

What I'd recommend doing is tunneling http or https through ssh, and  
host
all of your fossil repositories on the host computer on your web  
server of

choice via cgi.  I do that with lighttpd, and it works flawlessly.

Tim
___
fossil-users mailing list
fossil-users@lists.fossil-scm.org
http://lists.fossil-scm.org:8080/cgi-bin/mailman/listinfo/fossil-users




___
fossil-users mailing list
fossil-users@lists.fossil-scm.org
http://lists.fossil-scm.org:8080/cgi-bin/mailman/listinfo/fossil-users





--
Using Opera's revolutionary email client: http://www.opera.com/mail/
___
fossil-users mailing list
fossil-users@lists.fossil-scm.org
http://lists.fossil-scm.org:8080/cgi-bin/mailman/listinfo/fossil-users


Re: [fossil-users] fossil 1.24 not working via ssh

2012-11-11 Thread j. v. d. hoff
On Sun, 11 Nov 2012 19:35:25 +0100, Matt Welland estifo...@gmail.com  
wrote:



On Sun, Nov 11, 2012 at 2:56 AM, j. van den hoff
veedeeh...@googlemail.comwrote:


On Sun, 11 Nov 2012 10:39:27 +0100, Matt Welland estifo...@gmail.com
wrote:

 sshfs is cool but in a corporate environment it can't always be used.  
For
example fuse is not installed for end users on the servers I have  
access

to.

I would also be very wary of sshfs and multi-user access. Sqlite3  
locking

on NFS doesn't always work well, I imagine that locking issues on sshfs



it doesn't? in which way? and are the mentioned problems restricted to  
NFS

or other file systems (zfs, qfs, ...) as well?
do you mean that a 'central' repository could be harmed if two users try
to push at the same time (and would corruption propagate to the users'
local repositories later on)? I do hope not so...



I should have qualified that with the detail that historically NFS  
locking
has been reported as an issue by others but I myself have not seen it.  
What
I have seen in using sqlite3 and fossil very heavily on NFS is users  
using
kill -9 right off the bat rather than first trying with just kill. The  
lock

gets stuck set and only dumping the sqlite db to text and recreating it
seems to clear the lock (not sure but maybe sometimes copying to a new  
file

and moving back will clear the lock).

I've seen a corrupted db once or maybe twice but never been clear that it
was caused by concurrent access on NFS or not. Thankfully it is fossil  
and

recovery is a cp away.

Quite some time ago I did limited testing of concurrent access to an
sqlite3 db on AFS and GFS and it seemed to work fine. The AFS test was  
very

slow but that could well be due to my being clueless on how to correctly
tune AFS itself.

When you say zfs do you mean using the NFS export functionality of zfs?

yes
I've never tested that and it would be very interesting to know how well  
it

works.


not yet possible here, but we'll probably migrate to zfs in the not too  
far future.




My personal opinion is that fossil works great over NFS but would caution
anyone trying it to test thoroughly before trusting it.




 could well be worse.


sshfs is an excellent work-around for an expert user but not a  
replacement

for the feature of ssh transport.



yes I would love to see a stable solution not suffering from  
interference

of terminal output (there are people out there loving the good old
`fortune' as part of their login script...).

btw: why could fossil not simply(?) filter a reasonable amount of  
terminal
output for the occurrence of a sufficiently strong magic pattern  
indicating
that the noise has passed by and fossil can go to work? right now  
putting
`echo  ' (sending a single blank) suffices to let the transfer fail.  
my

understanding is that fossil _does_ send something like `echo test' (is
this true). all unexpected output to tty from the login scripts  would  
come
_before_ that so why not test for receiving the expected text ('test'  
just

being not unique/strong enough) at the end of whatever is send (up to a
reasonable length)? is this a stupid idea?



I thought of trying that some time ago but never got around to it.  
Inspired

by your comment I gave a similar approach a quick try and for the first
time I saw ssh work on my home linux box!!!

All I did was read and discard any junk on the line before sending the  
echo

test:

http://www.kiatoa.com/cgi-bin/fossils/fossil/fdiff?v1=935bc0a983135b26&v2=61f9ddf1e2c8bbb0
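
The idea, reduced to a standalone sketch (the actual patch works on
fossil's transport structures, not on a raw file descriptor, so take the
names here with a grain of salt):

#include <sys/select.h>
#include <unistd.h>

/* sketch: before sending the "echo test" probe, poll the pipe coming
** back from ssh and throw away anything the remote login scripts have
** already printed (MOTD, fortune output, escape sequences, ...). */
static void drain_pending_junk(int fd){
  char zJunk[1024];
  for(;;){
    fd_set rd;
    struct timeval tv = {0, 200000};               /* allow 0.2s for more noise */
    FD_ZERO(&rd);
    FD_SET(fd, &rd);
    if( select(fd+1, &rd, 0, 0, &tv)<=0 ) break;   /* nothing pending */
    if( read(fd, zJunk, sizeof(zJunk))<=0 ) break; /* EOF or error */
  }
}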

===without==
rm: cannot remove `*': No such file or directory
make: Nothing to be done for `all'.
ssh matt@xena
Pseudo-terminal will not be allocated because stdin is not a terminal.
../fossil: ssh connection failed: [Welcome to Ubuntu 12.04.1 LTS  
(GNU/Linux

3.2.0-32-generic-pae i686)

 * Documentation:  https://help.ubuntu.com/

0 packages can be updated.
0 updates are security updates.

test]

==with===
fossil/junk$ rm *;(cd ..;make) && ../fossil clone
ssh://matt@xena//home/matt/fossils/fossil.fossil
fossil.fossil
make: Nothing to be done for `all'.
ssh matt@xena
Pseudo-terminal will not be allocated because stdin is not a terminal.
Bytes  Cards  Artifacts Deltas
Sent:  53  1  0  0
Received: 5004225  13950   1751   5238
Sent:  71  2  0  0
Received: 5032480   9827   1742   3132
Sent:  57 93  0  0
Received: 5012028   9872   1137   3806
Sent:  57  1  0  0
Received: 4388872   3053360   1168
Total network traffic: 1037 bytes sent, 19438477 bytes received
Rebuilding repository meta-data...
  100.0% complete...
project-id: CE59BB9F186226D80E49D1FA2DB29F935CCA0333
server-id:  3029a8494152737798f2768c7991921f2342a84b
admin-user: matt (password is 7db8e5)




great. that's 

Re: [fossil-users] fossil 1.24 not working via ssh

2012-11-11 Thread Matt Welland
I'll top-post an answer to this one as this thread has wandered and gotten
very long, so who knows who is still following :)

I made a simple tweak to the ssh code that gets ssh working for me on
Ubuntu and may solve some of the login shell related problems that have
been reported with respect to ssh:

http://www.kiatoa.com/cgi-bin/fossils/fossil/fdiff?v1=935bc0a983135b26&v2=61f9ddf1e2c8bbb0

Joerg asked if this will make it into a future release. Can Richard or one
of the developers take a look at the change and comment?

Note that unfortunately this does not fix the issues I'm having with
fsecure ssh but I hope it gets us one step closer.

Thanks,

Matt
-=-


On Sun, Nov 11, 2012 at 1:53 PM, j. v. d. hoff veedeeh...@googlemail.comwrote:

 On Sun, 11 Nov 2012 19:35:25 +0100, Matt Welland estifo...@gmail.com
 wrote:

  On Sun, Nov 11, 2012 at 2:56 AM, j. van den hoff
 veedeeh...@googlemail.com wrote:

  On Sun, 11 Nov 2012 10:39:27 +0100, Matt Welland estifo...@gmail.com
 wrote:

  sshfs is cool but in a corporate environment it can't always be used.
 For

 example fuse is not installed for end users on the servers I have access
 to.

 I would also be very wary of sshfs and multi-user access. Sqlite3
 locking
 on NFS doesn't always work well, I imagine that locking issues on sshfs


 it doesn't? in which way? and are the mentioned problems restricted to
 NFS
 or other file systems (zfs, qfs, ...) as well?
 do you mean that a 'central' repository could be harmed if two users try
 to push at the same time (and would corruption propagate to the users'
 local repositories later on)? I do hope not so...



 I should have qualified that with the detail that historically NFS locking
 has been reported as an issue by others but I myself have not seen it.
 What
 I have seen in using sqlite3 and fossil very heavily on NFS is users using
 kill -9 right off the bat rather than first trying with just kill. The
 lock
 gets stuck set and only dumping the sqlite db to text and recreating it
 seems to clear the lock (not sure but maybe sometimes copying to a new
 file
 and moving back will clear the lock).

 I've seen a corrupted db once or maybe twice but never been clear that it
 was caused by concurrent access on NFS or not. Thankfully it is fossil and
 recovery is a cp away.

 Quite some time ago I did limited testing of concurrent access to an
 sqlite3 db on AFS and GFS and it seemed to work fine. The AFS test was
 very
 slow but that could well be due to my being clueless on how to correctly
 tune AFS itself.

 When you say zfs do you mean using the NFS export functionality of zfs?

 yes

  I've never tested that and it would be very interesting to know how well
 it
 works.


 not yet possible here, but we'll probably migrate to zfs in the not too
 far future.



 My personal opinion is that fossil works great over NFS but would caution
 anyone trying it to test thoroughly before trusting it.



  could well be worse.


 sshfs is an excellent work-around for an expert user but not a
 replacement
 for the feature of ssh transport.


 yes I would love to see a stable solution not suffering from interference
 of terminal output (there are people out there loving the good old
 `fortune' as part of their login script...).

 btw: why could fossil not simply(?) filter a reasonable amount of
 terminal
 output for the occurrence of a sufficiently strong magic pattern
 indicating
 that the noise has passed by and fossil can go to work? right now
 putting
 `echo  ' (sending a single blank) suffices to let the transfer fail. my
 understanding is that fossil _does_ send something like `echo test' (is
 this true). all unexpected output to tty from the login scripts  would
 come
 _before_ that so why not test for receiving the expected text ('test'
 just
 being not unique/strong enough) at the end of whatever is send (up to a
 reasonable length)? is this a stupid idea?



 I thought of trying that some time ago but never got around to it.
 Inspired
 by your comment I gave a similar approach a quick try and for the first
 time I saw ssh work on my home linux box!!!

 All I did was read and discard any junk on the line before sending the
 echo
 test:

 http://www.kiatoa.com/cgi-bin/fossils/fossil/fdiff?v1=935bc0a983135b26&v2=61f9ddf1e2c8bbb0

 ===without==
 rm: cannot remove `*': No such file or directory
 make: Nothing to be done for `all'.
 ssh matt@xena
 Pseudo-terminal will not be allocated because stdin is not a terminal.
 ../fossil: ssh connection failed: [Welcome to Ubuntu 12.04.1 LTS
 (GNU/Linux
 3.2.0-32-generic-pae i686)

  * Documentation:  https://help.ubuntu.com/

 0 packages can be updated.
 0 updates are security updates.

 test]

 ==with===
 fossil/junk$ rm *;(cd ..;make)  ../fossil clone
 ssh://matt@xena//home/matt/fossils/fossil.fossil
 fossil.fossil
 make: Nothing to be done 

Re: [fossil-users] fossil 1.24 not working via ssh

2012-11-11 Thread Richard Hipp
On Sun, Nov 11, 2012 at 5:11 PM, Matt Welland estifo...@gmail.com wrote:

 I'll top-post an answer to this one as this thread has wandered and gotten
 very long, so who knows who is still following :)

 I made a simple tweak to the ssh code that gets ssh working for me on
 Ubuntu and may solve some of the login shell related problems that have
 been reported with respect to ssh:


 http://www.kiatoa.com/cgi-bin/fossils/fossil/fdiff?v1=935bc0a983135b26&v2=61f9ddf1e2c8bbb0


Not exactly the same patch, but something quite similar has been checked in
at http://www.fossil-scm.org/fossil/info/4473a27f3b - please try it out and
let me know if it clears any outstanding problems, or if I missed some
obvious benefit of Matt's patch in my refactoring.




 Joerg asked if this will make it into a future release. Can Richard or
 one of the developers take a look at the change and comment?

 Note that unfortunately this does not fix the issues I'm having with
 fsecure ssh but I hope it gets us one step closer.

 Thanks,

 Matt
 -=-



 On Sun, Nov 11, 2012 at 1:53 PM, j. v. d. hoff 
 veedeeh...@googlemail.comwrote:

 On Sun, 11 Nov 2012 19:35:25 +0100, Matt Welland estifo...@gmail.com
 wrote:

  On Sun, Nov 11, 2012 at 2:56 AM, j. van den hoff
 veedeeh...@googlemail.com wrote:

  On Sun, 11 Nov 2012 10:39:27 +0100, Matt Welland estifo...@gmail.com
 wrote:

  sshfs is cool but in a corporate environment it can't always be used.
 For

 example fuse is not installed for end users on the servers I have
 access
 to.

 I would also be very wary of sshfs and multi-user access. Sqlite3
 locking
 on NFS doesn't always work well, I imagine that locking issues on sshfs


 it doesn't? in which way? and are the mentioned problems restricted to
 NFS
 or other file systems (zfs, qfs, ...) as well?
 do you mean that a 'central' repository could be harmed if two users try
 to push at the same time (and would corruption propagate to the users'
 local repositories later on)? I do hope not so...



 I should have qualified that with the detail that historically NFS
 locking
 has been reported as an issue by others but I myself have not seen it.
 What
 I have seen in using sqlite3 and fossil very heavily on NFS is users
 using
 kill -9 right off the bat rather than first trying with just kill. The
 lock
 gets stuck set and only dumping the sqlite db to text and recreating it
 seems to clear the lock (not sure but maybe sometimes copying to a new
 file
 and moving back will clear the lock).

 I've seen a corrupted db once or maybe twice but never been clear that it
 was caused by concurrent access on NFS or not. Thankfully it is fossil
 and
 recovery is a cp away.

 Quite some time ago I did limited testing of concurrent access to an
 sqlite3 db on AFS and GFS and it seemed to work fine. The AFS test was
 very
 slow but that could well be due to my being clueless on how to correctly
 tune AFS itself.

 When you say zfs do you mean using the NFS export functionality of zfs?

 yes

  I've never tested that and it would be very interesting to know how well
 it
 works.


 not yet possible here, but we'll probably migrate to zfs in the not too
 far future.



 My personal opinion is that fossil works great over NFS but would caution
 anyone trying it to test thoroughly before trusting it.



  could well be worse.


 sshfs is an excellent work-around for an expert user but not a
 replacement
 for the feature of ssh transport.


 yes I would love to see a stable solution not suffering from
 interference
 of terminal output (there are people out there loving the good old
 `fortune' as part of their login script...).

 btw: why could fossil not simply(?) filter a reasonable amount of
 terminal
 output for the occurrence of a sufficiently strong magic pattern
 indicating
 that the noise has passed by and fossil can go to work? right now
 putting
 `echo  ' (sending a single blank) suffices to let the transfer fail.
 my
 understanding is that fossil _does_ send something like `echo test' (is
 this true). all unexpected output to tty from the login scripts  would
 come
 _before_ that so why not test for receiving the expected text ('test'
 just
 being not unique/strong enough) at the end of whatever is send (up to a
 reasonable length)? is this a stupid idea?



 I thought of trying that some time ago but never got around to it.
 Inspired
 by your comment I gave a similar approach a quick try and for the first
 time I saw ssh work on my home linux box!!!

 All I did was read and discard any junk on the line before sending the
 echo
 test:

 http://www.kiatoa.com/cgi-bin/fossils/fossil/fdiff?v1=935bc0a983135b26&v2=61f9ddf1e2c8bbb0

 ===without==
 rm: cannot remove `*': No such file or directory
 make: Nothing to be done for `all'.
 ssh matt@xena
 Pseudo-terminal will not be allocated because stdin is not a terminal.
 ../fossil: ssh 

Re: [fossil-users] fossil 1.24 not working via ssh

2012-11-11 Thread Matt Welland
On Sun, Nov 11, 2012 at 3:44 PM, Richard Hipp d...@sqlite.org wrote:



 On Sun, Nov 11, 2012 at 5:11 PM, Matt Welland estifo...@gmail.com wrote:

 I'll top-post an answer to this one as this thread has wandered and
 gotten very long, so who knows who is still following :)

 I made a simple tweak to the ssh code that gets ssh working for me on
 Ubuntu and may solve some of the login shell related problems that have
 been reported with respect to ssh:


 http://www.kiatoa.com/cgi-bin/fossils/fossil/fdiff?v1=935bc0a983135b26&v2=61f9ddf1e2c8bbb0


 Not exactly the same patch, but something quite similar has been checked
 in at http://www.fossil-scm.org/fossil/info/4473a27f3b - please try it
 out and let me know if it clears any outstanding problems, or if I missed
 some obvious benefit of Matt's patch in my refactoring.


It seems not to work in my situation with the sending of test1. I'm not
sure why.

= I get the following 
fossil/junk$ ../fossil clone ssh://matt@xena//home/matt/fossils/fossil.fossil
fossil.fossil
ssh matt@xena
Pseudo-terminal will not be allocated because stdin is not a terminal.
../fossil: ssh connection failed: [test1]








 Joerg asked if this will make it into a future release. Can Richard or
 one of the developers take a look at the change and comment?

 Note that unfortunately this does not fix the issues I'm having with
 fsecure ssh but I hope it gets us one step closer.

 Thanks,

 Matt
 -=-



 On Sun, Nov 11, 2012 at 1:53 PM, j. v. d. hoff veedeeh...@googlemail.com
  wrote:

 On Sun, 11 Nov 2012 19:35:25 +0100, Matt Welland estifo...@gmail.com
 wrote:

  On Sun, Nov 11, 2012 at 2:56 AM, j. van den hoff
 veedeeh...@googlemail.com wrote:

  On Sun, 11 Nov 2012 10:39:27 +0100, Matt Welland estifo...@gmail.com
 wrote:

  sshfs is cool but in a corporate environment it can't always be used.
 For

 example fuse is not installed for end users on the servers I have
 access
 to.

 I would also be very wary of sshfs and multi-user access. Sqlite3
 locking
 on NFS doesn't always work well, I imagine that locking issues on
 sshfs


 it doesn't? in which way? and are the mentioned problems restricted to
 NFS
 or other file systems (zfs, qfs, ...) as well?
 do you mean that a 'central' repository could be harmed if two users
 try
 to push at the same time (and would corruption propagate to the users'
 local repositories later on)? I do hope not so...



 I should have qualified that with the detail that historically NFS
 locking
 has been reported as an issue by others but I myself have not seen it.
 What
 I have seen in using sqlite3 and fossil very heavily on NFS is users
 using
 kill -9 right off the bat rather than first trying with just kill. The
 lock
 gets stuck set and only dumping the sqlite db to text and recreating
 it
 seems to clear the lock (not sure but maybe sometimes copying to a new
 file
 and moving back will clear the lock).

 I've seen a corrupted db once or maybe twice but never been clear that
 it
 was caused by concurrent access on NFS or not. Thankfully it is fossil
 and
 recovery is a cp away.

 Quite some time ago I did limited testing of concurrent access to an
 sqlite3 db on AFS and GFS and it seemed to work fine. The AFS test was
 very
 slow but that could well be due to my being clueless on how to correctly
 tune AFS itself.

 When you say zfs do you mean using the NFS export functionality of zfs?

 yes

  I've never tested that and it would be very interesting to know how
 well it
 works.


 not yet possible here, but we'll probably migrate to zfs in the not too
 far future.



 My personal opinion is that fossil works great over NFS but would
 caution
 anyone trying it to test thoroughly before trusting it.



  could well be worse.


 sshfs is an excellent work-around for an expert user but not a
 replacement
 for the feature of ssh transport.


 yes I would love to see a stable solution not suffering from
 interference
 of terminal output (there are people out there loving the good old
 `fortune' as part of their login script...).

 btw: why could fossil not simply(?) filter a reasonable amount of
 terminal
 output for the occurrence of a sufficiently strong magic pattern
 indicating
 that the noise has passed by and fossil can go to work? right now
 putting
 `echo  ' (sending a single blank) suffices to let the transfer fail.
 my
 understanding is that fossil _does_ send something like `echo test' (is
 this true). all unexpected output to tty from the login scripts  would
 come
 _before_ that so why not test for receiving the expected text ('test'
 just
 being not unique/strong enough) at the end of whatever is send (up to a
 reasonable length)? is this a stupid idea?



 I thought of trying that some time ago but never got around to it.
 Inspired
 by your comment I gave a similar approach a quick try and for the first
 time I saw ssh work on my home linux box!!!

 All I did was read and discard any junk on the line before 

Re: [fossil-users] fossil 1.24 not working via ssh

2012-11-11 Thread Richard Hipp
On Sun, Nov 11, 2012 at 7:10 PM, Matt Welland estifo...@gmail.com wrote:


 On Sun, Nov 11, 2012 at 3:44 PM, Richard Hipp d...@sqlite.org wrote:



 On Sun, Nov 11, 2012 at 5:11 PM, Matt Welland estifo...@gmail.comwrote:

 I'll top-post an answer to this one as this thread has wandered and
 gotten very long, so who knows who is still following :)

 I made a simple tweak to the ssh code that gets ssh working for me on
 Ubuntu and may solve some of the login shell related problems that have
 been reported with respect to ssh:


 http://www.kiatoa.com/cgi-bin/fossils/fossil/fdiff?v1=935bc0a983135b26&v2=61f9ddf1e2c8bbb0


 Not exactly the same patch, but something quite similar has been checked
 in at http://www.fossil-scm.org/fossil/info/4473a27f3b - please try it
 out and let me know if it clears any outstanding problems, or if I missed
 some obvious benefit of Matt's patch in my refactoring.


 It seems not to work in my situation with the sending of test1. I'm not
 sure why.


The trunk changes work here.  And I don't see how it is materially
different from your patch.  Am I overlooking something?



 = I get the following 
 fossil/junk$ ../fossil clone ssh://matt@xena//home/matt/fossils/fossil.fossil
 fossil.fossil

 ssh matt@xena
 Pseudo-terminal will not be allocated because stdin is not a terminal.
 ../fossil: ssh connection failed: [test1]








 Joerg asked if this will make it into a future release. Can Richard or
 one of the developers take a look at the change and comment?

 Note that unfortunately this does not fix the issues I'm having with
 fsecure ssh but I hope it gets us one step closer.

 Thanks,

 Matt
 -=-



 On Sun, Nov 11, 2012 at 1:53 PM, j. v. d. hoff 
 veedeeh...@googlemail.com wrote:

 On Sun, 11 Nov 2012 19:35:25 +0100, Matt Welland estifo...@gmail.com
 wrote:

  On Sun, Nov 11, 2012 at 2:56 AM, j. van den hoff
 veedeeh...@googlemail.com wrote:

  On Sun, 11 Nov 2012 10:39:27 +0100, Matt Welland estifo...@gmail.com
 
 wrote:

  sshfs is cool but in a corporate environment it can't always be
 used. For

 example fuse is not installed for end users on the servers I have
 access
 to.

 I would also be very wary of sshfs and multi-user access. Sqlite3
 locking
 on NFS doesn't always work well, I imagine that locking issues on
 sshfs


 it doesn't? in which way? and are the mentioned problems restricted
 to NFS
 or other file systems (zfs, qfs, ...) as well?
 do you mean that a 'central' repository could be harmed if two users
 try
 to push at the same time (and would corruption propagate to the users'
 local repositories later on)? I do hope not so...



 I should have qualified that with the detail that historically NFS
 locking
 has been reported as an issue by others but I myself have not seen it.
 What
 I have seen in using sqlite3 and fossil very heavily on NFS is users
 using
 kill -9 right off the bat rather than first trying with just kill. The
 lock
 gets stuck set and only dumping the sqlite db to text and recreating
 it
 seems to clear the lock (not sure but maybe sometimes copying to a new
 file
 and moving back will clear the lock).

 I've seen a corrupted db once or maybe twice but never been clear that
 it
 was caused by concurrent access on NFS or not. Thankfully it is fossil
 and
 recovery is a cp away.

 Quite some time ago I did limited testing of concurrent access to an
 sqlite3 db on AFS and GFS and it seemed to work fine. The AFS test was
 very
 slow but that could well be due to my being clueless on how to
 correctly
 tune AFS itself.

 When you say zfs do you mean using the NFS export functionality of zfs?

 yes

  I've never tested that and it would be very interesting to know how
 well it
 works.


 not yet possible here, but we'll probably migrate to zfs in the not too
 far future.



 My personal opinion is that fossil works great over NFS but would
 caution
 anyone trying it to test thoroughly before trusting it.



  could well be worse.


 sshfs is an excellent work-around for an expert user but not a
 replacement
 for the feature of ssh transport.


 yes I would love to see a stable solution not suffering from
 interference
 of terminal output (there are people out there loving the good old
 `fortune' as part of their login script...).

 btw: why could fossil not simply(?) filter a reasonable amount of
 terminal
 output for the occurrence of a sufficiently strong magic pattern
 indicating
 that the noise has passed by and fossil can go to work? right now
 putting
 `echo  ' (sending a single blank) suffices to let the transfer
 fail. my
 understanding is that fossil _does_ send something like `echo test'
 (is
 this true). all unexpected output to tty from the login scripts
  would come
 _before_ that so why not test for receiving the expected text ('test'
 just
 being not unique/strong enough) at the end of whatever is send (up to
 a
 reasonable length)? is this a stupid idea?



 I thought of trying that some time ago 

Re: [fossil-users] fossil 1.24 not working via ssh

2012-11-11 Thread Matt Welland
Comparison of your fix vs. my hack below. I suspect that blindly clearing
out the buffer of any line noise before sending anything to the remote end
will work better but I have no logic or solid arguments to back up that
assertion.

=
matt@xena:~/data/fossil/junk$ fsl info
project-name: Fossil
repository:   /home/matt/fossils/fossil.fossil
local-root:   /home/matt/data/fossil/
project-code: CE59BB9F186226D80E49D1FA2DB29F935CCA0333
checkout: 4473a27f3b6e049e3c162e440e0e4c87daf9570c 2012-11-11 22:42:50
UTC
parent:   8c7faee6c5fac25b8456e96070ce068400d1d7e1 2012-11-11 17:59:42
UTC
tags: trunk
comment:  Further attempts to help the ssh sync protocol move past
noisy
  motd comments and other extraneous login text, synchronize
with
  the remote end, and start exchanging messages successfully.
  (user: drh)
matt@xena:~/data/fossil/junk$ rm -f fossil*;(cd ..;make) && ../fossil clone
ssh://matt@xena//home/matt/fossils/fossil.fossil fossil.fossil
make: Nothing to be done for `all'.
ssh matt@xena
Pseudo-terminal will not be allocated because stdin is not a terminal.
../fossil: ssh connection failed: [test1]
=

matt@xena:~/data/fossil/junk$ rm -f fossil*;(cd ..;make > make.log) &&
../fossil clone ssh://matt@xena//home/matt/fossils/fossil.fossil
fossil.fossil
ssh matt@xena
Pseudo-terminal will not be allocated because stdin is not a terminal.
Bytes  Cards  Artifacts Deltas
Sent:  53  1  0  0
Received: 5004225  13950   1751   5238
Sent:  71  2  0  0
Received: 5032480   9827   1742   3132
Sent:  57 93  0  0
Received: 5012028   9872   1137   3806
Sent:  57  1  0  0
Received: 4422156   3069367   1169
Total network traffic: 1035 bytes sent, 19471761 bytes received
Rebuilding repository meta-data...
  100.0% complete...
project-id: CE59BB9F186226D80E49D1FA2DB29F935CCA0333
server-id:  3e5f8ed7b0eed8a144fa4b07b4b34cc6c374d20c
admin-user: matt (password is 40faae)





On Sun, Nov 11, 2012 at 6:09 PM, Richard Hipp d...@sqlite.org wrote:



 On Sun, Nov 11, 2012 at 7:10 PM, Matt Welland estifo...@gmail.com wrote:


 On Sun, Nov 11, 2012 at 3:44 PM, Richard Hipp d...@sqlite.org wrote:



 On Sun, Nov 11, 2012 at 5:11 PM, Matt Welland estifo...@gmail.comwrote:

 I'll top-post an answer to this one as this thread has wandered and
 gotten very long, so who knows who is still following :)

 I made a simple tweak to the ssh code that gets ssh working for me on
 Ubuntu and may solve some of the login shell related problems that have
 been reported with respect to ssh:


 http://www.kiatoa.com/cgi-bin/fossils/fossil/fdiff?v1=935bc0a983135b26&v2=61f9ddf1e2c8bbb0


 Not exactly the same patch, but something quite similar has been checked
 in at http://www.fossil-scm.org/fossil/info/4473a27f3b - please try it
 out and let me know if it clears any outstanding problems, or if I missed
 some obvious benefit of Matt's patch in my refactoring.


 It seems not to work in my situation with the sending of test1. I'm not
 sure why.


 The trunk changes works here.  And I don't see how it is materially
 different from your patch.  Am I overlooking something?



 = I get the following 
 fossil/junk$ ../fossil clone ssh://matt@xena//home/matt/fossils/fossil.fossil
 fossil.fossil

 ssh matt@xena
 Pseudo-terminal will not be allocated because stdin is not a terminal.
 ../fossil: ssh connection failed: [test1]








 Joerg asked if this will make it into a future release. Can Richard or
 one of the developers take a look at the change and comment?

 Note that unfortunately this does not fix the issues I'm having with
 fsecure ssh but I hope it gets us one step closer.

 Thanks,

 Matt
 -=-



 On Sun, Nov 11, 2012 at 1:53 PM, j. v. d. hoff 
 veedeeh...@googlemail.com wrote:

 On Sun, 11 Nov 2012 19:35:25 +0100, Matt Welland estifo...@gmail.com
 wrote:

  On Sun, Nov 11, 2012 at 2:56 AM, j. van den hoff
 veedeeh...@googlemail.com wrote:

  On Sun, 11 Nov 2012 10:39:27 +0100, Matt Welland 
 estifo...@gmail.com
 wrote:

  sshfs is cool but in a corporate environment it can't always be
 used. For

 example fuse is not installed for end users on the servers I have
 access
 to.

 I would also be very wary of sshfs and multi-user access. Sqlite3
 locking
 on NFS doesn't always work well, I imagine that locking issues on
 sshfs


 it doesn't? in which way? and are the mentioned problems restricted to NFS,
 or do they affect other file systems (zfs, qfs, ...) as well?
 do you mean that a 'central' repository could be harmed if two users try
 to push at the same time (and would corruption propagate to the users'
 local repositories later on)? I hope not...



 I should 

Re: [fossil-users] fossil 1.24 not working via ssh

2012-11-11 Thread Richard Hipp
On Sun, Nov 11, 2012 at 8:25 PM, Matt Welland estifo...@gmail.com wrote:

 Comparison of your fix vs. my hack below. I suspect that blindly clearing
 out the buffer of any line noise before sending anything to the remote end
 will work better but I have no logic or solid arguments to back up that
 assertion.


Both versions send two echo commands to the remote side, ignore the
return from the first echo and check the return from the second.  The only
difference that I see between your patch and mine (unless I'm missing
something) is that I'm sending different echo text.  What do you see that
is different from this?
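
Roughly, both versions amount to something like the following sketch (the
probe strings are made up here; the real code drives the commands from C):

  {
    echo 'echo probe-1'   # first reply may be buried in motd/login noise; ignored
    echo 'echo probe-2'   # second reply must come back clean before syncing starts
  } | ssh -T -e none remotehost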




 =
 matt@xena:~/data/fossil/junk$ fsl info
 project-name: Fossil
 repository:   /home/matt/fossils/fossil.fossil
 local-root:   /home/matt/data/fossil/
 project-code: CE59BB9F186226D80E49D1FA2DB29F935CCA0333
 checkout: 4473a27f3b6e049e3c162e440e0e4c87daf9570c 2012-11-11 22:42:50
 UTC
 parent:   8c7faee6c5fac25b8456e96070ce068400d1d7e1 2012-11-11 17:59:42
 UTC
 tags: trunk
 comment:  Further attempts to help the ssh sync protocol move past noisy
           motd comments and other extraneous login text, synchronize with
           the remote end, and start exchanging messages successfully.
           (user: drh)
 matt@xena:~/data/fossil/junk$ rm -f fossil*;(cd ..;make) && ../fossil
 clone ssh://matt@xena//home/matt/fossils/fossil.fossil fossil.fossil

 make: Nothing to be done for `all'.
 ssh matt@xena
 Pseudo-terminal will not be allocated because stdin is not a terminal.
 ../fossil: ssh connection failed: [test1]
 =

 matt@xena:~/data/fossil/junk$ rm -f fossil*;(cd ..;make > make.log) &&
 ../fossil clone ssh://matt@xena//home/matt/fossils/fossil.fossil
 fossil.fossil

 ssh matt@xena
 Pseudo-terminal will not be allocated because stdin is not a terminal.
                 Bytes      Cards  Artifacts     Deltas
 Sent:              53          1          0          0
 Received:     5004225      13950       1751       5238
 Sent:              71          2          0          0
 Received:     5032480       9827       1742       3132
 Sent:              57         93          0          0
 Received:     5012028       9872       1137       3806
 Sent:              57          1          0          0
 Received:     4422156       3069        367       1169
 Total network traffic: 1035 bytes sent, 19471761 bytes received

 Rebuilding repository meta-data...
   100.0% complete...
 project-id: CE59BB9F186226D80E49D1FA2DB29F935CCA0333
  server-id:  3e5f8ed7b0eed8a144fa4b07b4b34cc6c374d20c
 admin-user: matt (password is 40faae)






 On Sun, Nov 11, 2012 at 6:09 PM, Richard Hipp d...@sqlite.org wrote:



 On Sun, Nov 11, 2012 at 7:10 PM, Matt Welland estifo...@gmail.com wrote:


 On Sun, Nov 11, 2012 at 3:44 PM, Richard Hipp d...@sqlite.org wrote:



 On Sun, Nov 11, 2012 at 5:11 PM, Matt Welland estifo...@gmail.com wrote:

 I'll top-post an answer to this one as this thread has wandered and
 gotten very long, so who knows who is still following :)

 I made a simple tweak to the ssh code that gets ssh working for me on
 Ubuntu and may solve some of the login shell related problems that have
 been reported with respect to ssh:


 http://www.kiatoa.com/cgi-bin/fossils/fossil/fdiff?v1=935bc0a983135b26v2=61f9ddf1e2c8bbb0


 Not exactly the same patch, but something quite similar has been
 checked in at http://www.fossil-scm.org/fossil/info/4473a27f3b -
 please try it out and let me know if it clears any outstanding problems, or
 if I missed some obvious benefit of Matt's patch in my refactoring.


 It seems not to work in my situation with the sending of test1. I'm not
 sure why.


 The trunk changes work here.  And I don't see how they are materially
 different from your patch.  Am I overlooking something?



 = I get the following 
 fossil/junk$ ../fossil clone 
 ssh://matt@xena//home/matt/fossils/fossil.fossil
 fossil.fossil

 ssh matt@xena
 Pseudo-terminal will not be allocated because stdin is not a terminal.
 ../fossil: ssh connection failed: [test1]








 Joerg asked if this will make it into a future release. Can Richard
 or one of the developers take a look at the change and comment?

 Note that unfortunately this does not fix the issues I'm having with
 fsecure ssh but I hope it gets us one step closer.

 Thanks,

 Matt
 -=-



 On Sun, Nov 11, 2012 at 1:53 PM, j. v. d. hoff 
 veedeeh...@googlemail.com wrote:

 On Sun, 11 Nov 2012 19:35:25 +0100, Matt Welland estifo...@gmail.com
 wrote:

  On Sun, Nov 11, 2012 at 2:56 AM, j. van den hoff
 veedeeh...@googlemail.com wrote:

  On Sun, 11 Nov 2012 10:39:27 +0100, Matt Welland 
 estifo...@gmail.com
 wrote:

  sshfs is cool but in a corporate environment it can't always be
 used. For

 example fuse is not installed for end users on the servers I have
 access
 to.

 I would also be very wary of sshfs and 

Re: [fossil-users] fossil 1.24 not working via ssh

2012-11-10 Thread Richard Hipp
On Sat, Nov 10, 2012 at 9:43 AM, j. van den hoff
veedeeh...@googlemail.com wrote:

 I would really appreciate if the ssh issue could get addressed by the
 developers.


It has my attention.  I just don't know what to do about it.  Do you have
any suggestions on how to improve the way the SSH method operates?

-- 
D. Richard Hipp
d...@sqlite.org


Re: [fossil-users] fossil 1.24 not working via ssh

2012-11-10 Thread j. van den hoff

On Sat, 10 Nov 2012 15:48:11 +0100, Richard Hipp d...@sqlite.org wrote:


On Sat, Nov 10, 2012 at 9:43 AM, j. van den hoff
veedeeh...@googlemail.com wrote:



whoa! that's a quick reply ;-).


I would really appreciate if the ssh issue could get addressed by the
developers.



It has my attention.  I just don't know what to do about it.  Do you have


no offense meant (I was not implying no one cares..).


any suggestions on how to improve the way the SSH method operates?


unfortunately, no: I don't know much about ssh internals, nor about how
fossil operates over ssh. I did spend quite some time these last days
hunting around the net and did everything that was recommended (silencing
the login, avoiding tcsh). ssh itself works fine for me (and does so, e.g.,
both with svn and mercurial).


from what I've read I was not able to understand what the problem really
is: what is different in fossil's usage of ssh compared, e.g., to mercurial?
I seem to understand that fossil is confused by chatty login messages, but
I've got rid of them without the problem going away. and what happens
between the ssh message

debug2: shell request accepted on channel 0

and the failure

fossil: ssh connection failed: []

is completely opaque to me.

could, e.g., an `ssh-debug' flag be added to `fossil set' which enables a
comprehensive log of relevant information (on the client side? on the
server side?) to better understand what's going on? I would be happy to
provide such logging information (as long as no privacy/security concerns
are involved). or is the principal reason for the failure known anyway?
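
in the meantime, something along these lines (run from the client side)
should at least show byte-for-byte what the remote login shell sends back
before fossil's own reply (just a sketch of the idea):

  ssh -T -e none remotehost </dev/null | hexdump -C | head -n 40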


frequently, there are search-path issues (e.g. non-interactive vs.
interactive shell, path updated/augmented in different places (system-wide
and in the local .bashrc, .profile, etc.)). maybe something like `fsl set
path_to_remote_fossil' (or an addition to the ssh command) could help to
reliably get fossil started on the other side?


just guessing, of course...

joerg






--
Using Opera's revolutionary email client: http://www.opera.com/mail/


Re: [fossil-users] fossil 1.24 not working via ssh

2012-11-10 Thread Martin Gagnon
On Sat, Nov 10, 2012 at 9:48 AM, Richard Hipp d...@sqlite.org wrote:


 On Sat, Nov 10, 2012 at 9:43 AM, j. van den hoff veedeeh...@googlemail.com
 wrote:

 I would really appreciate if the ssh issue could get addressed by the
 developers.


 It has my attention.  I just don't know what to do about it.  Do you have
 any suggestions on how to improve the way the SSH method operates?


I think someone has proposed the ideal solution previously on this
list (I haven't found it in the mailing list archive yet).

It would be to execute fossil directly through ssh, with a new fossil
command that would act similarly to fossil test-http, except that it
would process all chunks of sync data until the whole sync/push/pull
is completed.

That way fossil on the local side would be connected directly to the
remote fossil through a pipe, and nothing from the shell or login
session could get in the way.

Let's say the new fossil command is: fossil ssh-http
So instead of invoking: ssh -T -e none remotehost, fossil would do:
ssh -T -e none remotehost fossil ssh-http

When specifying a command on the ssh command line, stdin and stdout
are connected directly to the invoked command, so you cannot have
garbage in between.

I believe that's how other SCMs use ssh...
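
The effect is easy to see with any command at all (just a sketch; the
repository path is a placeholder and fossil ssh-http is only the proposed
name, it does not exist yet):

  printf 'ping\n' | ssh -T -e none remotehost cat
      # prints exactly "ping" back, with no motd and no profile noise
  ssh -T -e none remotehost fossil ssh-http /path/to/repo.fossil
      # what the proposed invocation could look like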

Regards

-- 
Martin G.


Re: [fossil-users] fossil 1.24 not working via ssh

2012-11-10 Thread Richard Hipp
On Sat, Nov 10, 2012 at 11:53 AM, Martin Gagnon eme...@gmail.com wrote:

 On Sat, Nov 10, 2012 at 9:48 AM, Richard Hipp d...@sqlite.org wrote:
 
 
  On Sat, Nov 10, 2012 at 9:43 AM, j. van den hoff 
 veedeeh...@googlemail.com
  wrote:
 
  I would really appreciate if the ssh issue could get addressed by the
  developers.
 
 
  It has my attention.  I just don't know what to do about it.  Do you have
  any suggestions on how to improve the way the SSH method operates?
 

 I think someone has proposed the ideal solution previously on this
 list (I haven't found it in the mailing list archive yet).

 It would be to execute fossil directly through ssh, with a new fossil
 command that would act similarly to fossil test-http, except that it
 would process all chunks of sync data until the whole sync/push/pull
 is completed.


http://www.fossil-scm.org/fossil/artifact/935bc0a983135b?ln=192

That's what the current code does.  Any ideas on how to make it more robust
in the face of varying SSH implementations?



 That way fossil on the local side would be connected directly to the
 remote fossil through a pipe, and nothing from the shell or login
 session could get in the way.

 Let's say the new fossil command is: fossil ssh-http
 So instead of invoking: ssh -T -e none remotehost, fossil would do:
 ssh -T -e none remotehost fossil ssh-http

 When specifying a command on the ssh command line, stdin and stdout
 are connected directly to the invoked command, so you cannot have
 garbage in between.

 I believe that's how other SCMs use ssh...

 Regards

 --
 Martin G.




-- 
D. Richard Hipp
d...@sqlite.org


Re: [fossil-users] fossil 1.24 not working via ssh

2012-11-10 Thread Martin Gagnon
On Sat, Nov 10, 2012 at 2:18 PM, Richard Hipp d...@sqlite.org wrote:


 On Sat, Nov 10, 2012 at 11:53 AM, Martin Gagnon eme...@gmail.com wrote:

 On Sat, Nov 10, 2012 at 9:48 AM, Richard Hipp d...@sqlite.org wrote:
 
 
  On Sat, Nov 10, 2012 at 9:43 AM, j. van den hoff
  veedeeh...@googlemail.com
  wrote:
 
  I would really appreciate if the ssh issue could get addressed by the
  developers.
 
 
  It has my attention.  I just don't know what to do about it.  Do you
  have
  any suggestions on how to improve the way the SSH method operates?
 

 I think someone has proposed the ideal solution previously on this
 list (I haven't found it in the mailing list archive yet).

 It would be to execute fossil directly through ssh, with a new fossil
 command that would act similarly to fossil test-http, except that it
 would process all chunks of sync data until the whole sync/push/pull
 is completed.


 http://www.fossil-scm.org/fossil/artifact/935bc0a983135b?ln=192

 That's what the current code does.  Any ideas on how to make it more robust
 in the face of varying SSH implementations?


I think the problem is that, right now, it uses the shell; it doesn't call
fossil on the remote side directly from the ssh command line.

To be more robust, the test-http command (more precisely the new
command I call ssh-http in my example) should be appended to zCmd here:

http://www.fossil-scm.org/fossil/artifact/935bc0a983135b?ln=154

That will connect stdin and stdout *directly* to the remote fossil
command. If you do that, you don't need to care about shell and login
noise. *But* in that case, fossil on the remote side will have to handle
the whole sync operation by itself (it's as if the ssh-http command
called what test-http does multiple times, at least from what I
understand of what test-http does).
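
In other words, the command that fossil execs would change roughly like
this (a sketch only; ssh-http is still just the proposed name and the
repository path is a placeholder):

  today (roughly):            ssh -T -e none remotehost
  with the command appended:  ssh -T -e none remotehost fossil ssh-http /path/to/repo.fossil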


[snip]

-- 
Martin G.


Re: [fossil-users] fossil 1.24 not working via ssh

2012-11-10 Thread Ramon Ribó
Hello,

One simple solution, if using Linux, is sshfs. It mounts a remote file
system on the local computer. Then you can sync as if the remote repository
were a local file. It works nicely.
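
For example (a sketch; host names and paths are only placeholders):

  mkdir -p ~/mnt/fossils
  sshfs user@remotehost:/home/user/fossils ~/mnt/fossils    # mount the remote directory locally
  fossil clone ~/mnt/fossils/project.fossil project.fossil  # the repository now looks like a local file
  fusermount -u ~/mnt/fossils                               # unmount when done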

Maybe the fossil ssh implementation can get some ideas from this package.

Ramon Ribó


2012/11/10 Martin Gagnon eme...@gmail.com

 On Sat, Nov 10, 2012 at 2:18 PM, Richard Hipp d...@sqlite.org wrote:
 
 
  On Sat, Nov 10, 2012 at 11:53 AM, Martin Gagnon eme...@gmail.com
 wrote:
 
  On Sat, Nov 10, 2012 at 9:48 AM, Richard Hipp d...@sqlite.org wrote:
  
  
   On Sat, Nov 10, 2012 at 9:43 AM, j. van den hoff
   veedeeh...@googlemail.com
   wrote:
  
   I would really appreciate if the ssh issue could get addressed by the
   developers.
  
  
   It has my attention.  I just don't know what to do about it.  Do you
   have
   any suggestions on how to improve the way the SSH method operates?
  
 
  I think someone has proposed the ideal solution previously on this
  list (I haven't found it in the mailing list archive yet).

  It would be to execute fossil directly through ssh, with a new fossil
  command that would act similarly to fossil test-http, except that it
  would process all chunks of sync data until the whole sync/push/pull
  is completed.
 
 
  http://www.fossil-scm.org/fossil/artifact/935bc0a983135b?ln=192
 
  That's what the current code does.  Any ideas on how to make it more
 robust
  in the face of varying SSH implementations?
 

 I think the problem is that, right now, it uses the shell; it doesn't call
 fossil on the remote side directly from the ssh command line.

 To be more robust, the test-http command (more precisely the new
 command I call ssh-http in my example) should be appended to zCmd here:

 http://www.fossil-scm.org/fossil/artifact/935bc0a983135b?ln=154

 That will connect stdin and stdout *directly* to the remote fossil
 command. If you do that, you don't need to care about shell and login
 noise. *But* in that case, fossil on the remote side will have to handle
 the whole sync operation by itself (it's as if the ssh-http command
 called what test-http does multiple times, at least from what I
 understand of what test-http does).


 [snip]

 --
 Martin G.



Re: [fossil-users] fossil 1.24 not working via ssh

2012-11-10 Thread j. van den hoff

thanks for responding.

On Sat, 10 Nov 2012 17:53:40 +0100, Martin Gagnon eme...@gmail.com wrote:


On Sat, Nov 10, 2012 at 9:48 AM, Richard Hipp d...@sqlite.org wrote:



On Sat, Nov 10, 2012 at 9:43 AM, j. van den hoff  
veedeeh...@googlemail.com

wrote:


I would really appreciate if the ssh issue could get addressed by the
developers.



It has my attention.  I just don't know what to do about it.  Do you  
have

any suggestions on how to improve the way the SSH method operates?



I think someone has proposed the ideal solution previously on this
list (I haven't found it in the mailing list archive yet).

It would be to execute fossil directly through ssh, with a new fossil
command that would act similarly to fossil test-http, except that it
would process all chunks of sync data until the whole sync/push/pull
is completed.

That way fossil on the local side would be connected directly to the
remote fossil through a pipe, and nothing from the shell or login
session could get in the way.

Let's say the new fossil command is: fossil ssh-http
So instead of invoking: ssh -T -e none remotehost, fossil would do:
ssh -T -e none remotehost fossil ssh-http

When specifying a command on the ssh command line, stdin and stdout
are connected directly to the invoked command, so you cannot have
garbage in between.

I believe that's how other SCMs use ssh...


thanks for clarifying. I did some further testing and finally managed to
make my setup compliant with fossil's demands.

I consider the following to describe the state of affairs (valid for
a recent +bash+ on recent ubuntu -- with +tcsh+ it seems not to work
anyway):


* fossil does indeed perform an interactive ssh login. that implies
- the user's +.profile+ is processed (in agreement with the +bash+ manpage)
- the user's +.bashrc+ is _not_ processed (seemingly in _disagreement_ --
if I read it correctly -- with the +bash+ manpage)


* after +.profile+ has been processed, +fossil+ needs to be on the search
path. setting the path in +.bashrc+ is too late


* _any_ tty output during the login procedure must be strictly prevented,
whether it comes from the system (MOTD) or from +.profile+
- at least on ubuntu the former (the system message) can be prevented by
doing +touch .hushlogin+ in the user's home directory.
- the latter (tty output from +.profile+) can occur in unexpected (for me
at least) ways, and even unprintable characters can do harm: for instance,
in my interactive setup I put the name of the working directory in the
title bar of the terminal automatically after each cd by using the
corresponding xterm escape sequences. this is done in .profile, too, in
order to have the correct title bar immediately. the catch is: these
escape sequences are sent to stdout just like text, and so they prevent
fossil from working correctly (a tty guard, as sketched right after this
list, would avoid exactly that). funnily enough, this is only the case for
+bash+: with +ksh+ sending these escape sequences does no harm (while
echoing a single blank still does). my conclusion is: it is rather
unfortunate that +fossil+ relies on a bare-bones interactive setup of the
user's account. minor fiddling with the setup can break everything.
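
something like the following in +.profile+ is what I mean by a tty guard
(just a sketch of the idea; the escape sequence shown is the usual xterm
title one):

  # only talk to the terminal when there actually is one on stdout
  if [ -t 1 ]; then
      printf '\033]0;%s\007' "$PWD"   # set the xterm title bar
  fi

with this, a non-interactive ssh login (no pty, which is how fossil connects)
gets no escape sequences at all, while an interactive login still gets its
title bar.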


I would say the bottom line from the above obviously is that using an
interactive login is really fragile (and I believe that's what you were
saying in the first place, right?). if it's true that the other SCMs avoid
this problem by avoiding interference from the shell, then that sure is
better in my view (as already stated: despite the fancy escape sequences
produced in my login scripts, +svn+ and +hg+ don't care at all and just
work -- with tcsh, too).


I'm nevertheless happy to report that the problem is solved for me:
in the end, using +ksh+ as the login shell on the remote machine seems the
best solution for now (since with bash fossil still chokes on the
mentioned escape sequences).


what I think would be a good idea (as long as no real solution is in
place): put a clear "how to make syncing over ssh work" note on the fossil
webpage, clearly stating that the login has to be _absolutely_ silent and
that (seemingly) only bash and ksh (zsh, too?) work, but tcsh doesn't. a
quick check like the one sketched below could be part of it. it might save
new users like me a lot of time in the future and provide, on average, a
better initial experience with +fossil+ ;-).
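
a quick check for such a howto could be (a sketch; the count should be 0
when the login is really silent):

  ssh -T -e none remotehost </dev/null 2>/dev/null | wc -c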


good luck
joerg

--
Using Opera's revolutionary email client: http://www.opera.com/mail/


Re: [fossil-users] fossil 1.24 not working via ssh

2012-11-10 Thread j. van den hoff

thanks for responding.
I managed to solve my problem in the meantime (see my previous mail in  
this thread), but I'll make a memo of sshfs and have a look at it.


joerg

On Sat, 10 Nov 2012 21:04:07 +0100, Ramon Ribó ram...@compassis.com  
wrote:



Hello,

One simple solution, if using Linux, is sshfs. It mounts a remote file
system on the local computer. Then you can sync as if the remote repository
were a local file. It works nicely.

Maybe the fossil ssh implementation can get some ideas from this package.

Ramon Ribó


2012/11/10 Martin Gagnon eme...@gmail.com


On Sat, Nov 10, 2012 at 2:18 PM, Richard Hipp d...@sqlite.org wrote:


 On Sat, Nov 10, 2012 at 11:53 AM, Martin Gagnon eme...@gmail.com
wrote:

 On Sat, Nov 10, 2012 at 9:48 AM, Richard Hipp d...@sqlite.org wrote:
 
 
  On Sat, Nov 10, 2012 at 9:43 AM, j. van den hoff
  veedeeh...@googlemail.com
  wrote:
 
  I would really appreciate if the ssh issue could get addressed by  
the

  developers.
 
 
  It has my attention.  I just don't know what to do about it.  Do  
you

  have
  any suggestions on how to improve the way the SSH method operates?
 

 I think someone has proposed the ideal solution previously on this
 list (I haven't found it in the mailing list archive yet).

 It would be to execute fossil directly through ssh, with a new fossil
 command that would act similarly to fossil test-http, except that it
 would process all chunks of sync data until the whole sync/push/pull
 is completed.


 http://www.fossil-scm.org/fossil/artifact/935bc0a983135b?ln=192

 That's what the current code does.  Any ideas on how to make it more
robust
 in the face of varying SSH implementations?


I think the problem is that, right now, it uses the shell; it doesn't call
fossil on the remote side directly from the ssh command line.

To be more robust, the test-http command (more precisely the new
command I call ssh-http in my example) should be appended to zCmd here:

http://www.fossil-scm.org/fossil/artifact/935bc0a983135b?ln=154

That will connect stdin and stdout *directly* to the remote fossil
command. If you do that, you don't need to care about shell and login
noise. *But* in that case, fossil on the remote side will have to handle
the whole sync operation by itself (it's as if the ssh-http command
called what test-http does multiple times, at least from what I
understand of what test-http does).


[snip]

--
Martin G.




--
Using Opera's revolutionary email client: http://www.opera.com/mail/


Re: [fossil-users] fossil 1.24 not working via ssh

2012-11-10 Thread Timothy Beyer
At Sat, 10 Nov 2012 22:31:57 +0100,
j. van den hoff wrote:
 
 thanks for responding.
 I managed to solve my problem in the meantime (see my previous mail in  
 this thread), but I'll make a memo of sshfs and have a look at it.
 
 joerg
 

Sshfs didn't fix the problems that I was having with fossil+ssh, or at least
it only did so partially.  The problems I was having with ssh were different,
though.

What I'd recommend doing is tunneling http or https through ssh, and hosting
all of your fossil repositories on the remote machine with the web server of
your choice via cgi.  I do that with lighttpd, and it works flawlessly.
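
A minimal version of that setup might look like this (a sketch; port
numbers, user and paths are just examples, and it assumes the repository is
already served via cgi on the host):

  # on the client: forward a local port to the web server on the host
  ssh -N -L 8080:localhost:80 user@remotehost &
  fossil clone http://localhost:8080/cgi-bin/repo project.fossil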

Tim
___
fossil-users mailing list
fossil-users@lists.fossil-scm.org
http://lists.fossil-scm.org:8080/cgi-bin/mailman/listinfo/fossil-users