On Tue, Nov 06, 2001 at 12:37:05PM +0300, Alexander V. Lukyanov wrote:
> Maybe. But I would prefer status line updates to be atomic, so that
> other tty output would not break it in half. No big difference, actually.

Hmm, that's true.

> BTW, other tty output in lftp is blocking also. I remember that
> making tty non-blocking can cause funny side effects in other programs.
> So, e.g. `cat' command can also block other transfers.

Well, the difference is that most operations aren't around for a long
time; typically a cat runs and then finishes (unless it's something
silly like catting a pipe.)  The status line stays there for long
periods of time.

> I think Buffer is an appropriate base class for CopyPeer. I know that
> the copying thing is not simple, but the buffering is needed anyway.

Yeah, buffering is needed--I'm just suggesting that a member would be
better, with virtual accessors, so CopyPeers can be implemented that put
the buffer elsewhere.  The CopyPeer stuff is complicated, though, so I
wouldn't do it without reason.  Too much to break, at least for me
(since I don't know what all of the cases are for.)
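To make that concrete, here's roughly the shape I mean -- illustrative
names only, not lftp's actual classes:

```cpp
#include <cassert>
#include <string>

// Sketch: instead of CopyPeer inheriting from Buffer, it owns one and
// exposes it through a virtual accessor, so a derived peer could keep
// its buffer somewhere else (an mmap'd file, say).
class Buffer {
   std::string data;
public:
   void Put(const char *buf, int size) { data.append(buf, size); }
   const char *Get() const { return data.data(); }
   int Size() const { return (int)data.size(); }
};

class CopyPeer {
   Buffer buf;   // default storage
public:
   virtual ~CopyPeer() {}
   // Virtual accessor: derived classes may return a different buffer.
   virtual Buffer *GetBuffer() { return &buf; }
   void Put(const char *p, int size) { GetBuffer()->Put(p, size); }
};
```

Same buffering as now, just reached through an accessor instead of
inheritance.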

> Is global buffering really needed? Maybe it is better to have a separate
> buffer in each job, like it is done now.

Well, it could put buffer handling in one place (instead of each job
doing different things with fds), it'd allow things like safe status
line buffering; we could even keep jobs running when we fork a shell,
buffering data until it exits.

As far as other programs, the tty should be blocking when it's given
to a subshell.  If we run "!vi foo", we can run a scheduling loop in the
parent.  If the subshell's backgrounded, it'll walk over the buffering,
but that happens anyway.
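For the tty handoff itself, all it should take is clearing O_NONBLOCK
before the exec and restoring the old flags when the child exits --
sketch, not actual lftp code:

```cpp
#include <fcntl.h>
#include <unistd.h>

// Sketch: give the subshell a normal blocking tty.  Returns the old
// flags so the caller can restore them (F_SETFL) when the child exits,
// or -1 on error.  Names are illustrative.
int make_blocking(int fd)
{
   int flags = fcntl(fd, F_GETFL);
   if (flags < 0)
      return -1;
   if (flags & O_NONBLOCK)
      fcntl(fd, F_SETFL, flags & ~O_NONBLOCK);
   return flags;
}
```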

However, it'd be a major project: standardize output, have stdout,
stderr and maybe a separate status line output stream before commands
are run.  It'd be useful in the sense of "the download must go on", but
it's definitely not urgent.

> Some servers implement SITE UTIME, but it is not common. You can create
> empty files with `put -c /dev/null -o file', but it won't touch existing
> files.

FTP servers are in a sad state.  ProFTPD (probably the single best server
out there) doesn't even support FEAT!  (Maybe I'll see if they need help
on anything ... blah, whole thing's in plain C.  No reason not to use at
least basic C++ anymore in user space.  Oh well.)  I wish they supported
MLST/MLSD--that'd be a nice, simple, *fast* parser.  (Except the draft
doesn't give any standard way of supplying full *unix* permissions, only
platform-independent ones, so making ls-like output would be
impossible.)
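For what it's worth, here's about all the parsing would take -- a sketch
based on my reading of the draft (each line is "fact=value;" pairs, then
a space, then the filename):

```cpp
#include <map>
#include <string>

// Sketch of an MLSD line parser, per the MLSx draft: facts are
// semicolon-terminated name=value pairs; a single space separates the
// fact list from the filename.  No locale-dependent ls guessing needed.
struct MlsdEntry {
   std::map<std::string, std::string> facts;
   std::string name;
};

bool parse_mlsd_line(const std::string &line, MlsdEntry &e)
{
   std::string::size_type sp = line.find(' ');
   if (sp == std::string::npos)
      return false;
   e.name = line.substr(sp + 1);
   std::string::size_type pos = 0;
   while (pos < sp) {
      std::string::size_type semi = line.find(';', pos);
      if (semi == std::string::npos || semi > sp)
         break;
      std::string::size_type eq = line.find('=', pos);
      if (eq != std::string::npos && eq < semi)
         e.facts[line.substr(pos, eq - pos)] =
            line.substr(eq + 1, semi - eq - 1);
      pos = semi + 1;
   }
   return true;
}
```

Compare that to the pile of heuristics a LIST parser needs.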

I suppose in a sense it's chicken-and-egg; but in the case of FTP, it's
the servers that need to take the initiative for new features.

> I already treat ENFILE and EMFILE as non-fatal. Maybe ENOSPC should be
> treated similarly.

Yeah--optionally (defaulting to on, to enforce lftp's "no errors are
fatal" claim.)  Some people wouldn't want this.

Some way of setting some errors as needing a new control connection
would be useful.  I've hit servers with bogus quotas in place: you can
download so much, then you just keep getting out-of-quota errors until
you reconnect.  It's done to give other people a chance to get in,
presumably--you disconnect and get to redial the server again.  (They
typically have connection queueing.) Unfortunately, lftp doesn't know
it's supposed to do that, and chews through the queue.  I'm not sure
of a clean way to do this; perhaps a regex matching all error codes?
Currently, I work around this by queueing "close" periodically.
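Something like a user-settable pattern could work -- hypothetical
setting, sketched with POSIX regex since that's already available:

```cpp
#include <regex.h>
#include <string>

// Sketch: match a server reply against a user-supplied pattern (say, a
// hypothetical ftp:reconnect-on-error setting) to decide whether this
// error should force a fresh control connection.
bool needs_reconnect(const std::string &reply, const char *pattern)
{
   regex_t re;
   if (regcomp(&re, pattern, REG_EXTENDED | REG_NOSUB | REG_ICASE))
      return false;   // bad pattern: treat as no match
   bool match = regexec(&re, reply.c_str(), 0, 0, 0) == 0;
   regfree(&re);
   return match;
}
```

Then a pattern like "552|quota" would catch those bogus out-of-quota
replies without hardcoding any one server's wording.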

-- 
Glenn Maynard
