On Tue, 19 Oct 1999, Enrique A. Chaparro wrote:
> This is a _long_ message. Please forgive me.
> Regarding the messages sent by Roger Marquis, Carson Gaspar
> and Chris Calabrese:
Hey, this makes for a more convenient reply victim than the originals! ;-)
(Sorry, I haven't kept much track of who said what because of that.)
> > Also the lack of timeZONE info in the timestamp is a common gripe. It
> > might be worthwhile to add a short field with this info i.e:
> > 991019_12:54:02_GMT+8.
Question: what's wrong with sending in UTC, and munging it into whatever
on the receiving end? (Anything that makes a payload going over the
network longer without a rather good reason is -evil-.)
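Something along these lines, I mean (a quick C sketch; the format strings
are mine for illustration only, not a proposed wire format):

    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        char buf[32];
        time_t now = time(NULL);

        /* Sender side: always transmit UTC, no zone field needed. */
        strftime(buf, sizeof(buf), "%Y%m%dT%H%M%SZ", gmtime(&now));
        printf("wire format (UTC): %s\n", buf);

        /* Receiver side: render in whatever zone the local admin
         * wants - the payload itself stays short. */
        strftime(buf, sizeof(buf), "%Y-%m-%d %H:%M:%S %Z", localtime(&now));
        printf("display (local):   %s\n", buf);
        return 0;
    }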
> "Philosophical" issue: time could be seen as a reference to an abstract
> "watch" (something we usually do), or as a reference to an event E1
> occuring
> _after_ a certain event E0 and _before_ another certain event E2. E0
> and
> E2 can be as close as necessary to guarantee accuracy for E1. In this
> case,
<snip>
Doesn't that require E2 to actually have -happened-? (Which in turn can be
a problem if E2 never happens.. (host dies, whatever))
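(For what it's worth, I read the E0/E2 idea as an interval timestamp,
roughly like this - the names are mine, and note that the upper bound can
only ever be filled in after the fact:)

    #include <time.h>

    /* The event E1 is only known to lie somewhere in [lower, upper]. */
    struct ts_interval {
        time_t lower;      /* E0: latest event known to precede E1    */
        time_t upper;      /* E2: earliest event known to follow E1   */
        int    upper_set;  /* stays 0 until E2 happens - it may never */
    };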
> > * Only the logging component of the system's Trusted Computing Base
> > can receive logs (e.g., only root can open /dev/log [...snip...]
> Partial agreement. In large, distributed implementations with some
> "log collection" capability, logs should be hidden from root's eyes.
Root is root is root is.. (short for 'How to do that, you say?')
> > * In machine-to-machine log transfers, both the source and
> > destination logging processes (not just the machines) must be
> > authenticated to each other [...snip...]
> Agreed.
'Must' might be a bit excessive.. Negotiated security is a better idea,
imho. (See SOCKS5, for instance.. RFC 1928, 1929)
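Rough idea of what I mean, borrowing the shape of SOCKS5's method
selection (RFC 1928, section 3) for a hypothetical log transport - the
method numbers and the selection policy here are made up:

    #include <stdio.h>
    #include <stddef.h>

    #define AUTH_NONE    0x00
    #define AUTH_SHARED  0x01   /* e.g. username/password, RFC 1929 style */
    #define AUTH_STRONG  0x02   /* e.g. some public-key scheme            */
    #define AUTH_REJECT  0xFF   /* no acceptable method offered           */

    /* Server picks the strongest method both sides support. */
    static unsigned char pick_method(const unsigned char *offered, size_t n)
    {
        unsigned char best = AUTH_REJECT;
        for (size_t i = 0; i < n; i++)
            if (offered[i] != AUTH_REJECT &&
                (best == AUTH_REJECT || offered[i] > best))
                best = offered[i];
        return best;
    }

    int main(void)
    {
        unsigned char client_offer[] = { AUTH_NONE, AUTH_SHARED };
        printf("server picked method 0x%02x\n",
               pick_method(client_offer, sizeof(client_offer)));
        return 0;
    }

That way a paranoid site can insist on strong mutual authentication while
a small one can still get by with less, instead of hardwiring a MUST into
the protocol itself.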
> > * Logs have reliable timestamps. [...snip...]
> Agreed. Reliability does not mean accuracy. See the discussion on AA
> above.
On a local basis, this is a bit hard (after all, there's no way a system
can be absolutely sure the time is what the time says it is..) - you mean
'reliable' as in 'written down as sent, as received, at each hop, and
whatnot, so hopefully some of the timestamps can be trusted', I take it?
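Something like this, record-wise (the layout is invented on the spot,
just to show which timestamps I'd want carried along):

    #include <time.h>

    #define MAX_HOPS 8

    struct log_record {
        time_t origin_ts;         /* stamped by the originating host */
        time_t hop_ts[MAX_HOPS];  /* stamped by each relay in turn   */
        int    hops;              /* number of relays so far         */
        char   msg[1024];
    };

    /* Each relay calls this before forwarding the record onward, so
     * the collector can cross-check clocks even if none is trusted. */
    void stamp_hop(struct log_record *r)
    {
        if (r->hops < MAX_HOPS)
            r->hop_ts[r->hops++] = time(NULL);
    }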
> > * The system must be able to guarantee that logs are never "lost in
> > the ether" [...snip...]
> Agreed.
'guarantee'.. I see a scenario of a log server going down, all hosts
logging to it writing down a gazillion lines of logs, and pounding the
server to heck when it comes back up. The implementations -could- support
it, but -must-..?
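An implementation -could- do the spool-and-drain thing, something like
this toy loop (everything here is made up; a real client would spool to
disk, not to an int):

    #include <stdio.h>
    #include <unistd.h>

    #define DRAIN_RATE 50        /* max spooled lines sent per second */

    static int spooled = 1000;   /* pretend backlog from the outage   */

    static int send_one_spooled_line(void)
    {
        spooled--;               /* stand-in for the real network send */
        return 0;
    }

    int main(void)
    {
        while (spooled > 0) {
            for (int i = 0; i < DRAIN_RATE && spooled > 0; i++)
                if (send_one_spooled_line() < 0)
                    break;       /* collector gone again; try later */
            printf("%d lines still spooled\n", spooled);
            sleep(1);            /* spread the backlog over time    */
        }
        return 0;
    }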
> > * Programs can use existing logging API's (syslog(), logger, NT
> > logging APIs?), though new API's could be added to get full
> > features.[...snip...]
> Perhaps desirable. If a tradeoff exists, I won't sacrifice functionality
> to prior-API compatibility.
This too is more of an implementation issue.. a server can always 'fake
up' a lot of data from an 'old' protocol to cover all the MUSTs of the new
one. After all, most of the old ones have 'enough' info to be useful, and
if hashing and whatnot is implemented at the proxy level rather than the
host level, it's still there..
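Quick sketch of what I mean by 'hashing at the proxy level' - the relay
takes a plain old syslog line and wraps it with whatever the new protocol
wants. The FNV-1a below is only a stand-in to show where a real
cryptographic hash/MAC would hook in:

    #include <stdio.h>
    #include <stdint.h>

    static uint64_t fnv1a(const char *s)   /* placeholder, NOT crypto */
    {
        uint64_t h = 14695981039346656037ULL;
        while (*s) {
            h ^= (unsigned char)*s++;
            h *= 1099511628211ULL;
        }
        return h;
    }

    int main(void)
    {
        /* A line the way an old syslogd might hand it to the proxy. */
        const char *old = "<34>Oct 19 12:54:02 host su: 'su root' failed";
        char wrapped[1200];

        snprintf(wrapped, sizeof(wrapped), "%016llx %s",
                 (unsigned long long)fnv1a(old), old);
        printf("%s\n", wrapped);
        return 0;
    }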
> > * The protocols must be simple enough to build support into
> > firewalls. [...snip...]
> Agreed. Simplicity is always a precious goal.
'built into firewalls'.. Is there -any- point whatsoever in having
application-specific support rather than just letting data on port
thisandthat (1492, maybe? ;-) through various firewalls? Heck, let any
semi-arbitrary data through a firewall and you can tunnel anything over
it. VPNs over ident(auth) are quite possible - veeeeery slow, but they
work. To make a long story short, I doubt there's much point in putting a
lot of work into making this 'firewall friendly' when packet filters will
do the trick - or won't. If they don't, application proxies won't help
either.
> Enrique.
Kriss, again
--- .... --..-- -.-- --- ..- .-. . .- -.. -- --- .-. ... . --..-- . .... ..--..
Kriss Andsten <[EMAIL PROTECTED]> telnet slartibartfast.vogon.se 4243