On Mon, Nov 12, 2012 at 8:36 AM, j. van den hoff <[email protected]> wrote:
> I've tested this here (with Fossil-4473a27f3b6e049e) but can only report
> partial success:
>
> * it works using bash/ksh as login shell on the remote machine _if_ there
> is not too much text (the allocated buffer size (50000) is still rather
> modest but usually sufficient) coming in from `.profile' over the ssh
> connection. so that's a clear step forward. however:
>
> * it does _not_ work if the default verbose (multi-line/blank-line
> separated multi-paragraph, but much shorter than 50000 bytes) ubuntu motd
> stuff comes in. the (visible) offending text looks something like this:

Please try again using http://www.fossil-scm.org/fossil/info/00cf858afe and
let me know if the situation improves.  If it still is not working, please
run with the --sshtrace command-line option and send me the diagnostic
output.  Thanks.

> 8<--------------------------------------------------------------
> Welcome to Ubuntu 12.04.1 LTS (GNU/Linux 3.2.0-32-generic x86_64)
>
>  * Documentation:  https://help.ubuntu.com/
>
>   System information as of Mon Nov 12 13:20:41 CET 2012
>
>   System load:  0.04              Processes:           114
>   Usage of /:   72.3% of 9.96GB   Users logged in:     2
>   Memory usage: 22%               IP address for eth0: 123.456.78.90
>   Swap usage:   0%
>
>   Graph this data and manage this system at https://landscape.canonical.com/
>
> 0 packages can be updated.
> 0 updates are security updates.
> 8<--------------------------------------------------------------
>
> - what's strange is: if I copy this text into an `echo' command within
> `.profile' and then deactivate the MOTD (so seemingly getting the same
> stuff sent over the ssh connection during login), it works flawlessly!?
> my guess would be that there are some unprintable characters/escapes sent
> as well which I do not see, so that copying the MOTD to `.profile' is not
> really the same thing as what is happening when ubuntu sends the stuff.
> * it also does _not_ work (with bash, that is: ksh keeps working) if I
> myself send some escape sequences from my login scripts (as mentioned in a
> previous mail, intended to dynamically adjust my xterm titlebars). what's
> happening here is completely unclear to me, since it seems bash specific.
> what's worse: issuing the respective `echo' directly in the script instead
> of within a shell function (as is usually done in my setup) does not lead
> to a failure. my setup might be somewhat esoteric here, so maybe it's not
> too important, but it indicates of course that there still is something
> fundamentally not OK.
>
> * and it does not work at all with tcsh as login shell on the remote
> machine (even if login is completely silent). in this case I get the error
> message
>
>    tput: No value for $TERM and no -T specified
>    TERM: Undefined variable.
>    Fossil-4473a27f3b6e049e/fossil: ssh connection failed: [test1
>    probe-4f5d9ab4]
>
> so, seemingly `tcsh' users are out of luck anyway.
>
> questions:
>
> * maybe the (echo/flush) process has to be iterated one further time to
> make fossil happy with ubuntu's motd (after all, it's not the least
> frequent linux distro)?
>
> * could fossil (or a debug version) not provide an (additional) hexdump (a
> la `hexdump -C' on linux) of the content of `zIn' instead of using
> `fossil_fatal("ssh connection failed: [%s]", zIn);'? in this way one might
> at least be able to recognize what exactly is coming in, which might help
> in tracking down the source of the trouble: it need not be printable
> characters coming over the ssh connection after all.
>
> j.
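Until such a debug mode exists, the hexdump asked for above can be
approximated on the client side by piping the suspect login output through
`hexdump -C`. The banner text and the embedded xterm-title escape sequence
below are made-up stand-ins for whatever the remote login scripts actually
emit; the point is only that invisible bytes become visible:

```shell
# Simulate a login banner followed by an xterm title escape
# (ESC ] 0 ; name BEL) -- the kind of unprintable bytes suspected above.
# In the hex column the escape shows up plainly as "1b 5d 30 3b ...".
printf 'Welcome to Ubuntu\n\033]0;myhost\007probe-4f5d9ab4\n' | hexdump -C
```

In a real debugging session one would capture the output of the same
command fossil runs, e.g. `ssh -T user@host` fed from a here-document, and
pipe that through `hexdump -C` instead.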
> On Sun, 11 Nov 2012 23:44:31 +0100, Richard Hipp <[email protected]> wrote:
>
>> On Sun, Nov 11, 2012 at 5:11 PM, Matt Welland <[email protected]> wrote:
>>
>>> I'll top-post an answer to this one as this thread has wandered and
>>> gotten very long, so who knows who is still following :)
>>>
>>> I made a simple tweak to the ssh code that gets ssh working for me on
>>> Ubuntu and may solve some of the login shell related problems that have
>>> been reported with respect to ssh:
>>>
>>> http://www.kiatoa.com/cgi-bin/fossils/fossil/fdiff?v1=935bc0a983135b26&v2=61f9ddf1e2c8bbb0
>>
>> Not exactly the same patch, but something quite similar has been checked
>> in at http://www.fossil-scm.org/fossil/info/4473a27f3b - please try it
>> out and let me know if it clears any outstanding problems, or if I
>> missed some obvious benefit of Matt's patch in my refactoring.
>>
>>> Joerg asked if this will make it into a future release. Can Richard or
>>> one of the developers take a look at the change and comment?
>>>
>>> Note that unfortunately this does not fix the issues I'm having with
>>> fsecure ssh but I hope it gets us one step closer.
>>>
>>> Thanks,
>>>
>>> Matt
>>> -=-
>>>
>>> On Sun, Nov 11, 2012 at 1:53 PM, j. v. d. hoff
>>> <[email protected]> wrote:
>>>
>>>> On Sun, 11 Nov 2012 19:35:25 +0100, Matt Welland <[email protected]>
>>>> wrote:
>>>>
>>>>> On Sun, Nov 11, 2012 at 2:56 AM, j. van den hoff
>>>>> <[email protected]> wrote:
>>>>>
>>>>>> On Sun, 11 Nov 2012 10:39:27 +0100, Matt Welland
>>>>>> <[email protected]> wrote:
>>>>>>
>>>>>>> sshfs is cool but in a corporate environment it can't always be
>>>>>>> used. For example fuse is not installed for end users on the
>>>>>>> servers I have access to.
>>>>>>> I would also be very wary of sshfs and multi-user access. Sqlite3
>>>>>>> locking on NFS doesn't always work well, I imagine that locking
>>>>>>> issues on sshfs
>>>>>>
>>>>>> it doesn't? in which way? and are the mentioned problems restricted
>>>>>> to NFS, or do they affect other file systems (zfs, qfs, ...) as
>>>>>> well? do you mean that a 'central' repository could be harmed if two
>>>>>> users try to push at the same time (and would corruption propagate
>>>>>> to the users' "local" repositories later on)? I do hope not so...
>>>>>
>>>>> I should have qualified that with the detail that historically NFS
>>>>> locking has been reported as an issue by others, but I myself have
>>>>> not seen it. What I have seen in using sqlite3 and fossil very
>>>>> heavily on NFS is users using kill -9 right off the bat rather than
>>>>> first trying with just kill. The lock gets stuck "set" and only
>>>>> dumping the sqlite db to text and recreating it seems to clear the
>>>>> lock (not sure, but maybe sometimes copying to a new file and moving
>>>>> it back will clear the lock).
>>>>>
>>>>> I've seen a corrupted db once or maybe twice but have never been
>>>>> clear whether it was caused by concurrent access on NFS or not.
>>>>> Thankfully it is fossil and recovery is a "cp" away.
>>>>>
>>>>> Quite some time ago I did limited testing of concurrent access to an
>>>>> sqlite3 db on AFS and GFS and it seemed to work fine. The AFS test
>>>>> was very slow, but that could well be due to my being clueless on how
>>>>> to correctly tune AFS itself.
>>>>>
>>>>> When you say zfs do you mean using the NFS export functionality of
>>>>> zfs?
>>>>
>>>> yes
>>>>
>>>>> I've never tested that and it would be very interesting to know how
>>>>> well it works.
>>>>
>>>> not yet possible here, but we'll probably migrate to zfs in the not
>>>> too far future.
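The dump-and-recreate recovery described above can be done with the
sqlite3 command-line shell; the file names here are placeholders, not
anything fossil-specific:

```shell
# Recover a database whose lock appears stuck "set" (file names are
# examples): dump it to SQL text, rebuild a fresh file, swap it in.
sqlite3 stuck.fossil .dump > dump.sql
sqlite3 fresh.fossil < dump.sql
mv fresh.fossil stuck.fossil
```

For a fossil repository, `fossil rebuild` on the recreated file would be a
sensible follow-up check.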
>>>>> My personal opinion is that fossil works great over NFS, but I would
>>>>> caution anyone trying it to test thoroughly before trusting it.
>>>>>
>>>>>> could well be worse.
>>>>>>
>>>>>>> sshfs is an excellent work-around for an expert user but not a
>>>>>>> replacement for the feature of ssh transport.
>>>>>>
>>>>>> yes, I would love to see a stable solution not suffering from
>>>>>> interference of terminal output (there are people out there loving
>>>>>> the good old `fortune' as part of their login script...).
>>>>>>
>>>>>> btw: why could fossil not simply(?) filter a reasonable amount of
>>>>>> terminal output for the occurrence of a sufficiently strong magic
>>>>>> pattern indicating that the "noise" has passed by and fossil can go
>>>>>> to work? right now putting `echo " "' (sending a single blank)
>>>>>> suffices to make the transfer fail. my understanding is that fossil
>>>>>> _does_ send something like `echo test' (is this true?). all
>>>>>> unexpected output to the tty from the login scripts would come
>>>>>> _before_ that, so why not test for receiving the expected text
>>>>>> ('test' just being not unique/strong enough) at the end of whatever
>>>>>> is sent (up to a reasonable length)? is this a stupid idea?
>>>>>
>>>>> I thought of trying that some time ago but never got around to it.
>>>>> Inspired by your comment I gave a similar approach a quick try and
>>>>> for the first time I saw ssh work on my home linux box!!!
>>>>> All I did was read and discard any junk on the line before sending
>>>>> the echo test:
>>>>>
>>>>> http://www.kiatoa.com/cgi-bin/fossils/fossil/fdiff?v1=935bc0a983135b26&v2=61f9ddf1e2c8bbb0
>>>>>
>>>>> ===========without==========
>>>>> rm: cannot remove `*': No such file or directory
>>>>> make: Nothing to be done for `all'.
>>>>> ssh matt@xena
>>>>> Pseudo-terminal will not be allocated because stdin is not a terminal.
>>>>> ../fossil: ssh connection failed: [Welcome to Ubuntu 12.04.1 LTS
>>>>> (GNU/Linux 3.2.0-32-generic-pae i686)
>>>>>
>>>>>  * Documentation:  https://help.ubuntu.com/
>>>>>
>>>>> 0 packages can be updated.
>>>>> 0 updates are security updates.
>>>>>
>>>>> test]
>>>>>
>>>>> =============with===========
>>>>> fossil/junk$ rm *;(cd ..;make) && ../fossil clone
>>>>> ssh://matt@xena//home/matt/fossils/fossil.fossil fossil.fossil
>>>>> make: Nothing to be done for `all'.
>>>>> ssh matt@xena
>>>>> Pseudo-terminal will not be allocated because stdin is not a terminal.
>>>>>                 Bytes      Cards  Artifacts     Deltas
>>>>> Sent:              53          1          0          0
>>>>> Received:     5004225      13950       1751       5238
>>>>> Sent:              71          2          0          0
>>>>> Received:     5032480       9827       1742       3132
>>>>> Sent:              57         93          0          0
>>>>> Received:     5012028       9872       1137       3806
>>>>> Sent:              57          1          0          0
>>>>> Received:     4388872       3053        360       1168
>>>>> Total network traffic: 1037 bytes sent, 19438477 bytes received
>>>>> Rebuilding repository meta-data...
>>>>>   100.0% complete...
>>>>> project-id: CE59BB9F186226D80E49D1FA2DB29F935CCA0333
>>>>> server-id:  3029a8494152737798f2768c7991921f2342a84b
>>>>> admin-user: matt (password is "7db8e5")
>>>>
>>>> great.
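The "discard the noise until a strong marker arrives" idea can be
sketched in shell. The marker string here reuses the probe text from the
error message earlier in the thread purely for illustration; it is not
fossil's actual protocol, just a demonstration that everything the login
scripts print before the marker can be dropped safely:

```shell
# Everything before (and including) the first line matching $MAGIC is
# login-script noise; everything after it is treated as protocol data.
MAGIC='probe-4f5d9ab4'
printf 'fortune cookie\nmotd noise\n%s\nreal protocol data\n' "$MAGIC" \
  | awk -v m="$MAGIC" 'found { print } $0 == m { found = 1 }'
```

The same filter works regardless of how many banner lines, blank lines,
or `fortune` quotes precede the marker, as long as the marker itself is
unlikely to occur in ordinary login output.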
>>>> that's essentially what I had in mind (but your approach of sending
>>>> two commands while flushing the first response completely is probably
>>>> better, AFAICS). will something like this make it into a future
>>>> release?
>>>>
>>>> joerg
>>>>
>>>>>>> On Sun, Nov 11, 2012 at 2:01 AM, Ramon Ribó <[email protected]>
>>>>>>> wrote:
>>>>>>>
>>>>>>>> > Sshfs didn't fix the problems that I was having with fossil+ssh,
>>>>>>>> > or at least only did so partially.
>>>>>>>>
>>>>>>>> Why not? In what way did sshfs fail to give you functionality
>>>>>>>> equivalent to remote access to a fossil database through ssh?
>>>>>>>>
>>>>>>>> 2012/11/11 Timothy Beyer <[email protected]>
>>>>>>>>
>>>>>>>>> At Sat, 10 Nov 2012 22:31:57 +0100,
>>>>>>>>> j. van den hoff wrote:
>>>>>>>>> >
>>>>>>>>> > thanks for responding.
>>>>>>>>> > I managed to solve my problem in the meantime (see my previous
>>>>>>>>> > mail in this thread), but I'll make a memo of sshfs and have a
>>>>>>>>> > look at it.
>>>>>>>>> >
>>>>>>>>> > joerg
>>>>>>>>>
>>>>>>>>> Sshfs didn't fix the problems that I was having with fossil+ssh,
>>>>>>>>> or at least only did so partially. Though, the problems that I
>>>>>>>>> was having with ssh were different.
>>>>>>>>>
>>>>>>>>> What I'd recommend doing is tunneling http or https through ssh,
>>>>>>>>> and hosting all of your fossil repositories on the host computer
>>>>>>>>> on your web server of choice via cgi. I do that with lighttpd,
>>>>>>>>> and it works flawlessly.
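The tunneling setup described above can be sketched as follows. The host
name, port numbers, and paths are invented for illustration; the two-line
CGI script is fossil's standard server-side CGI format:

```shell
# Hypothetical CGI script installed on the remote web server, e.g. as
# /var/www/cgi-bin/project.cgi (fossil's usual two-line CGI format):
#
#   #!/usr/bin/fossil
#   repository: /home/repos/project.fossil

# Forward a local port through ssh to the remote web server, then clone
# over plain http through the tunnel:
ssh -f -N -L 8080:localhost:80 [email protected]
fossil clone http://localhost:8080/cgi-bin/project.cgi project.fossil
```

Because all sync traffic then uses fossil's http transport, none of the
login-shell banner problems discussed in this thread can interfere.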
>>>>>>>>> Tim
>>>>>>>>> _______________________________________________
>>>>>>>>> fossil-users mailing list
>>>>>>>>> [email protected]
>>>>>>>>> http://lists.fossil-scm.org:8080/cgi-bin/mailman/listinfo/fossil-users
--
D. Richard Hipp
[email protected]
_______________________________________________
fossil-users mailing list
[email protected]
http://lists.fossil-scm.org:8080/cgi-bin/mailman/listinfo/fossil-users

