> > Look at it this way.  Truncated output from some query is the worst
> > case -- it is a mysterious "sometimes you get screwed" behaviour.
> > By default, I want to get the maximum.
> 
> It all depends on the protocol / applications used, IMHO.  What about
> servers that expect the client to close their writing side of the
> socket before processing a query?  They'd use a timeout so that they
> don't get hung, close the socket and netcat would have received zero
> bytes.  You get the minimum.  Perhaps that's better than a random amount
> of data, but who said using netcat to reliably get data was a good idea
> in the first place?

Doesn't your second point pretty much answer the first?

Netcat is a diagnostic tool.  Period.  If you're using it in a script
to do something funky, then you should know what servers you are
communicating with, and accept that the whole thing is a kludge anyway.

Using netcat to reliably get data is never going to be appropriate
for a production system, IMHO.
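
To make the quoted scenario concrete, a script-style query with nc
usually looks something like this (just a sketch; the host, port and
query are made up, and flag behaviour differs between netcat variants):

    # quick-and-nasty client: pipe the query in, hope the reply comes out.
    # If the server waits for the client to close its writing side first,
    # plain nc just sits there until the server's timeout fires, and the
    # script may see nothing at all.
    printf 'QUERY\r\n' | nc -w 5 server.example 7777 > reply.txt

    # some variants can do the half-close for you, e.g. -N on newer
    # OpenBSD nc (shut down the socket's write side after EOF on stdin);
    # check your nc's man page before relying on it.
    printf 'QUERY\r\n' | nc -N -w 5 server.example 7777 > reply.txt

Whether either of those is good enough depends entirely on the server
at the other end, which is rather the point.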

> It's sad, because I can remember, when I was using Debian,
> being shown how to transfer data using only nc, on a LAN.

Exactly - it's useful on a LAN where you're in control of both client
and server in a closed environment, but if people are using netcat in
a script as a quick-and-nasty way to write clients for internet-based
services, that is not something we should advocate, because it's
wrong.
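
For the record, the LAN trick is usually along these lines (a sketch
assuming the traditional Debian nc and made-up host, port and file
names; other variants want slightly different flags):

    # on the receiving box (say 192.168.1.10):
    nc -l -p 9000 > received.tar

    # on the sending box; -q 1 makes nc exit a second after stdin hits EOF
    nc -q 1 192.168.1.10 9000 < backup.tar

Which works nicely precisely because you control both ends and know
when the transfer is done.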

-- 
Creamy
