<inline tp>
----- Original Message -----
From: "Anton Okmianski (aokmians)" <[EMAIL PROTECTED]>
To: "David Harrington" <[EMAIL PROTECTED]>; "Rainer Gerhards"
<[EMAIL PROTECTED]>
Cc: <[EMAIL PROTECTED]>
Sent: Tuesday, August 15, 2006 8:04 PM
Subject: RE: [Syslog] byte-counting vs special character


I second these concerns.  Escaping requirements result in more
interdependent layering, which is a weaker architecture (not to mention
the overhead of adding it to a new standard). The syslog transport would
need to mess with the payload instead of treating it as an opaque blob
with an easily known length. Not nice. Imagine TCP/IP escaping all
payload just to separate datagrams and segments.
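The byte-counting alternative being argued for here can be sketched in a few lines. This is a hypothetical illustration (the helper names and the exact "LEN SP MSG" layout are assumptions, not taken from any draft): the sender prefixes each message with its byte length, so the receiver never has to inspect the payload for delimiters.

```python
# Sketch of length-prefixed ("octet-counting") framing: the transport
# sends the byte count, a space, then the raw payload. The payload
# stays an opaque blob -- no escaping needed, even if it contains
# newlines or other would-be delimiter characters.

def frame(msg: bytes) -> bytes:
    """Prefix a message with its decimal byte length and a space."""
    return str(len(msg)).encode("ascii") + b" " + msg

def unframe(stream: bytes) -> tuple[bytes, bytes]:
    """Split one framed message off the front of a byte stream.

    Returns (message, remainder-of-stream).
    """
    length, _, rest = stream.partition(b" ")
    n = int(length)
    return rest[:n], rest[n:]

# A payload containing an embedded newline round-trips untouched.
payload = b"<34>Oct 11 22:14:15 host app: line1\nline2"
msg, remainder = unframe(frame(payload))
assert msg == payload
assert remainder == b""
```

Because the receiver reads exactly `n` bytes after the count, two framed messages can be concatenated on a stream and still be separated unambiguously.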

Escaping of magic characters is IMHO clearly a hack and should not be
put into a standard!

<tp>
Well, I think you have just written off most of the IETF Standards that deal
with character-based protocols (like the e-mail we are using to communicate).

Each character, each symbol, in a 'message' is encoded, i.e. given a number,
be it 6 or 8 or 16 or however many bits.  If that bit pattern conflicts with
the 'control' aspects of a protocol, then that bit pattern must be
'transfer-encoded' so that it does not appear per se on the wire.  That is what
base64 and quoted-printable do for e-mail.
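The transfer-encoding approach described above can likewise be shown in a few lines, using Python's standard-library implementations of the two MIME encodings just named. This is only an illustration of the general mechanism, not of any specific syslog proposal:

```python
import base64
import quopri

# A payload containing bytes that would collide with 'control'
# aspects of a character-based protocol (CRLF, a NUL byte).
payload = b"line1\r\nline2\x00binary"

# base64: maps arbitrary bytes into a safe 64-character alphabet,
# so no payload byte pattern can be mistaken for protocol framing.
b64 = base64.b64encode(payload)
assert base64.b64decode(b64) == payload

# quoted-printable: mostly-text payloads stay human-readable;
# problematic bytes are escaped as =XX hex sequences.
qp = quopri.encodestring(payload)
assert quopri.decodestring(qp) == payload
```

In both cases the encoding is reversible, so the receiver recovers the original bytes exactly; the cost is the encoding overhead (roughly 33% expansion for base64) that the earlier message objects to.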

So we are talking of using a well-understood, widely deployed piece of protocol
architecture to solve a common problem.

</tp>
<snip/>


_______________________________________________
Syslog mailing list
[email protected]
https://www1.ietf.org/mailman/listinfo/syslog