On Wed Nov 24 19:56 1999 +1100, Darren Reed wrote:
> No, that doesn't. It means that we don't specify the message format as a
> part of the protocol. IMHO, what you're referring to (above) is the message
> format and protocol, combined into one. I think the two are separable. It
I agree. This is actually a good idea anyway, because if we design
the protocol with the capability to transport an arbitrary payload, it
will be both more generally useful and more likely to retain its
usefulness as logging needs change over time.
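To make that concrete, here's a minimal sketch (in C) of the kind of
opaque-payload framing I have in mind; the field names and sizes are
placeholders for illustration, not a proposal:

    #include <stdint.h>

    /*
     * Hypothetical wire framing: the transport layer understands only
     * this envelope.  The payload is an opaque byte string whose
     * internal format (text, signatures, timestamps, whatever) is
     * defined separately and can evolve without touching the protocol.
     */
    struct log_envelope {
        uint16_t version;      /* protocol version */
        uint16_t payload_type; /* identifies the payload format */
        uint32_t payload_len;  /* number of opaque payload bytes */
        /* followed by payload_len bytes of payload */
    };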
> The problem I have with this is that it requires multiple signatories
> (one for each host) as you can't change the contents of the signed message
> as it passes without invalidating the original signature. This isn't a
> TLS/IPsec problem (as you discussed in [6]). If A sends the message to
> a syslog server, it must sign it independent of any TLS/IPsec gunk so that
> the application at the other end receives something which it can easily
> use for later verification that A sent it. Trusting the integrity of each
> message to TLS/IPsec is not enough as that protection is lost when the
> message is received by the application.
I think we need to nail down exactly what we're trying to accomplish
here. There are a few interrelated issues:
1) Tamper-proofing on the wire
2) Verifying that a message originated from a given source
3) Verifying the intermediate hop/timestamp information
4) Verifying a single "message thread" (e.g., all messages from a
single daemon process)
If we're only addressing issue (1), it sounds like TLS/IPsec should be
sufficient. If we're trying to solve issue (2), we'd really need to
have the process which submits the message generate the signature,
because otherwise any user on the system would be able to fake a
message from the daemon whose logs you're looking at.
If the protocol is designed to transport an arbitrary payload, one
approach for issue (3) is to have the intermediate syslog server
simply wrap the original message (signature intact) in its own signed
message. The final destination would then recursively unwrap the
layers until it reaches the original message. The obvious problem,
of course, is that the transmitted message winds up growing quite a
bit at each hop.
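Roughly, the unwrap loop at the final destination might look like the
sketch below. The header layout, SIG_LEN, and the verify_hop() and
process_original() routines are all placeholders for whatever signature
scheme we end up picking (and a real format would need a marker to
identify the innermost layer):

    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    #define SIG_LEN 20              /* e.g., an HMAC-SHA1; placeholder */

    struct hop_header {             /* prepended by each relay */
        char     relay_id[64];
        uint32_t timestamp;
        uint32_t inner_len;         /* length of the wrapped message */
    };

    /* supplied by the signature layer; placeholders for this sketch */
    int verify_hop(const struct hop_header *hdr,
                   const unsigned char *inner, size_t inner_len,
                   const unsigned char *sig);
    int process_original(const unsigned char *msg, size_t len);

    int unwrap(const unsigned char *msg, size_t len)
    {
        struct hop_header hdr;

        while (len > sizeof(hdr) + SIG_LEN) {
            /* copy to avoid alignment problems with a raw cast */
            memcpy(&hdr, msg, sizeof(hdr));
            if (hdr.inner_len > len - sizeof(hdr) - SIG_LEN)
                return -1;          /* malformed length field */

            const unsigned char *sig   = msg + sizeof(hdr);
            const unsigned char *inner = sig + SIG_LEN;

            /* each relay's signature covers its own header plus
             * everything it wrapped */
            if (!verify_hop(&hdr, inner, hdr.inner_len, sig))
                return -1;          /* tampered in transit */

            msg = inner;            /* peel off one layer */
            len = hdr.inner_len;
        }
        return process_original(msg, len);  /* the sender's message */
    }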
Issue (4) basically boils down to a lightweight version of issue (2).
Instead of generating permanent public/private key pairs for each
application, you can simply have syslogd generate a small key when the
equivalent of openlog() is called, and from then on everything
associated with that process must be signed with the private key.
(You could even do things like call openlog() before you fork() child
processes if you wanted to have several processes using the same
stream. But I guess I'm getting into implementation details here...)
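Since I'm already there, here's a rough sketch of one such lightweight
scheme, using an HMAC with a shared per-session key in place of a real
per-process key pair. This is purely illustrative; the key exchange
and the function names are invented for the example:

    #include <stddef.h>
    #include <openssl/evp.h>
    #include <openssl/hmac.h>
    #include <openssl/rand.h>

    #define SESSION_KEY_LEN 16

    /* generated by syslogd at openlog() time and shared with the
     * submitting process over the (already authenticated) channel */
    static unsigned char session_key[SESSION_KEY_LEN];

    void openlog_equiv(void)
    {
        RAND_bytes(session_key, SESSION_KEY_LEN);
    }

    /* every message in this "thread" carries this MAC, so the
     * receiver can tie all of them to the same openlog() session */
    void sign_message(const unsigned char *msg, size_t len,
                      unsigned char *mac, unsigned int *mac_len)
    {
        HMAC(EVP_sha1(), session_key, SESSION_KEY_LEN,
             msg, len, mac, mac_len);
    }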
On Wed Nov 24 20:26 1999 +1100, Darren Reed wrote:
> I think we need to do what we can, at a protocol level, to support whatever
> one needs to do at an application level, in order to facilitate confirmed
> delivery. If we can provide host to host confirmed delivery (and storage?)
> then the applications exchanging messages may be able to provide feedback
> to other programs about the success/failure of their logging.
[...]
> Yup. What can we do about the different steps in transmission to do this?
> The host-host transmission problem is easy. The initial process-syslogd
> transmission, I'm not sure about (it's usually host-local IPC). Arguably,
> since unix domain sockets use the same protocol, it is within our charter,
> but I'm not sure if it makes sense unless the application did something
> like ask the syslogd for an initial seeding key (or does it generate its
> own temporary one and pass that to syslogd ?).
[...]
> That's taking a particularly unix-centric view of the situation. I may
> have a number of network devices which are old and don't do the new syslog
> thing but I still need to provide assurance about what is received from
> them. Forget about openlog/logger, and other implementation issues.
I agree with all of this, but I think we need to approach this problem
from a slightly different angle. We're currently trying to treat
messages differently depending on whether they come from a local
process, network device, or remote syslogd. Instead, let's reduce all
of these specific examples to the general case of "sender/recipient"
communication, and design our protocol around that.
For example, one difference between the types of log sources
enumerated above is in how they handle confirmed delivery. When one
syslogd is sending to another, the window size (number of messages
sent before an ACK is returned) can be quite large, because the sender
has a non-volatile queue. However, when a local process or network
device submits a message, it will usually need the receiver to write
the message to non-volatile storage and send an ACK after each
message. If the protocol's handshaking negotiates a window size
that's acceptable to both the client and server, then we can handle
both types of communication in essentially the same way.
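A minimal sketch of that negotiation (the message layout and names are
invented for illustration):

    #include <stdint.h>

    /* hypothetical handshake message: each side advertises the
     * largest window (messages in flight before an ACK) it will use */
    struct syslog_hello {
        uint32_t max_window;
    };

    /* the effective window is the smaller advertisement, so a
     * diskless device can force window = 1 (ACK every message) while
     * two syslogds with non-volatile queues agree on something big */
    uint32_t negotiate_window(uint32_t client_max, uint32_t server_max)
    {
        return client_max < server_max ? client_max : server_max;
    }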
Another gratuitous difference between the log sources we've been
considering is that local processes communicate with syslogd using an
implementation-dependent local IPC mechanism (most of which are like
UDP in that they do not offer reliable delivery), rather than a TCP
socket. AIX has the traditional BSD-style Unix-domain SOCK_DGRAM
socket; HP-UX has a named pipe; Solaris uses a STREAMS driver; and
Linux just recently switched their Unix-domain socket from SOCK_STREAM
to SOCK_DGRAM. Why not simply have local processes use TCP or UDP
sockets just like remote syslog daemons? This would simplify things
by reducing the number of transport mechanisms that the daemon needs
to handle. It might even be possible to avoid running a local syslogd
altogether by having all processes contact a remote syslogd directly.
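For the sake of argument, submitting a message over loopback TCP could
be as simple as the sketch below (port 514 is just a placeholder for
whatever port we'd actually register):

    #include <string.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>

    /* sketch: a local process submits a log message over a loopback
     * TCP connection instead of platform-specific local IPC */
    int submit_log(const char *msg)
    {
        struct sockaddr_in sin;
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0)
            return -1;

        memset(&sin, 0, sizeof(sin));
        sin.sin_family = AF_INET;
        sin.sin_port = htons(514);                    /* placeholder */
        sin.sin_addr.s_addr = htonl(INADDR_LOOPBACK); /* 127.0.0.1 */

        if (connect(fd, (struct sockaddr *)&sin, sizeof(sin)) < 0 ||
            write(fd, msg, strlen(msg)) < 0) {
            close(fd);
            return -1;
        }
        close(fd);
        return 0;
    }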
In terms of the message signature, how we handle it depends largely on
what problem we're really trying to solve, as I mentioned above.
--
Mark D. Roth <[EMAIL PROTECTED]>
http://www.feep.net/~roth/