On Tue, Aug 17, 2010 at 12:00:01PM +0200, Alexander Staubo wrote:
> > Indeed, but a local file is not compatible with chroot and above all, it
> > would not permit to be completely asynchronous, meaning that the traffic
> > would stall when performing writes. The real advantage of the local syslog
> > is to run in a separate process ;-)
> 
> I agree. There is another way, though. You could follow Varnish's
> example, which in my opinion is an ideal compromise between
> performance and lossiness, and opens up several extremely useful
> opportunities for tracing. You may be familiar with it.
> 
> Varnish logs binary entries to a circular buffer held in shared
> memory. Each individual session event -- client connect, new header
> line, backend connect, backend response, and so on -- is logged as a
> separate log entry. Since it's just a memory write, it's incredibly
> fast.
> 
> The varnishlog tool opens the shared memory buffer and reads each
> entry and displays it. This is useful to get a live stream of events,
> but you can also run varnishlog as a daemon to log continuously to a
> file, thereby achieving a similar process division as syslog. Like
> syslog, Varnish will leak log records if the circular buffer is too
> small, but unlike the syslog case you can actually fix the problem
> easily by increasing the buffer size.
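
To make the quoted mechanism concrete, a very rough sketch of such a
shared-memory log ring could look like the following. The layout and all
names are invented for illustration and are not Varnish's actual shared
memory log format; the point is only the general shape of "one log record
= one memory write into a ring that readers can mmap()":

#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define RING_SLOTS 4096
#define REC_SIZE   128

struct log_ring {
    volatile uint64_t head;              /* next record number to write */
    char slots[RING_SLOTS][REC_SIZE];    /* fixed-size binary records */
};

/* create/attach the shared segment; any reader can mmap() the same name */
static struct log_ring *ring_open(const char *name)
{
    int fd = shm_open(name, O_CREAT | O_RDWR, 0600);
    if (fd < 0)
        return NULL;
    if (ftruncate(fd, sizeof(struct log_ring)) < 0) {
        close(fd);
        return NULL;
    }
    void *p = mmap(NULL, sizeof(struct log_ring),
                   PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    close(fd);
    return p == MAP_FAILED ? NULL : p;
}

/* logging is just a copy into the next slot: no syscall on the fast path.
 * a reader that falls behind simply misses records once the writer laps
 * it, which is the lossy-but-never-blocking behaviour described above */
static void ring_log(struct log_ring *r, const char *msg)
{
    char *slot = r->slots[r->head % RING_SLOTS];
    strncpy(slot, msg, REC_SIZE - 1);
    slot[REC_SIZE - 1] = '\0';
    r->head++;   /* a real implementation would use atomics/barriers here */
}

int main(void)
{
    struct log_ring *r = ring_open("/demo_log_ring");
    if (!r)
        return 1;
    ring_log(r, "client connect 192.0.2.1:34567");
    ring_log(r, "backend connect srv1:8080");
    printf("wrote %llu records\n", (unsigned long long)r->head);
    return 0;
}

A separate reader process would open and mmap() the same name and follow
the head index, which is essentially the process division the quoted
varnishlog description is about.
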
I did not know Varnish worked like that. It's an idea I had not thought
about, even though I know from using it that it's what the syslogd in
busybox does.

What I wanted to do was to have a large local buffer with a task dedicated
to sending logs. That would make sending logs easier in haproxy, including
over TCP or unix-stream, but it would not solve the issue of the overhead
causing high losses. Shared memory can indeed solve that, and it also
makes it possible to maintain filters so that critical logs are always
delivered while traffic logs may get lower priority.

Inter-process synchronisation is not necessarily easy (polling...), but
having a side-band inter-process unix-stream socket to transport message
pointers, depending on the filters the consumer has subscribed to, could
make for a highly efficient mechanism. It would also spare the log
producers some complex sprintf() calls. A very rough sketch of that idea
is below. I'll keep this in mind for future evolutions.

Cheers,
Willy
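
Sketch of the side-band idea mentioned above (all names are made up and
none of this is haproxy code): the producer stores the record in memory
shared with the consumer and pushes only the record's index over a
unix-stream socket, so the consumer can sleep in poll()/read() while the
producer never formats or copies a full log line on its fast path:

#define _GNU_SOURCE
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/socket.h>
#include <sys/wait.h>
#include <unistd.h>

#define SLOTS 64
#define RECSZ 128

int main(void)
{
    int sv[2];

    /* stands in for the shared-memory log buffer from the sketch above */
    char (*ring)[RECSZ] = mmap(NULL, SLOTS * RECSZ, PROT_READ | PROT_WRITE,
                               MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    if (ring == MAP_FAILED || socketpair(AF_UNIX, SOCK_STREAM, 0, sv) < 0)
        return 1;

    if (fork() == 0) {
        /* consumer: sleep in read() (or poll()) on the side-band socket,
         * then fetch the record itself from the shared buffer */
        uint32_t idx;
        close(sv[0]);
        while (read(sv[1], &idx, sizeof(idx)) == sizeof(idx))
            printf("consumer got slot %u: %s\n", idx, ring[idx % SLOTS]);
        _exit(0);
    }

    /* producer: store the record, then send only its index (the "message
     * pointer") to the subscribed consumer */
    close(sv[1]);
    uint32_t idx = 0;
    snprintf(ring[idx % SLOTS], RECSZ,
             "frontend http-in: connect from 192.0.2.1");
    if (write(sv[0], &idx, sizeof(idx)) != (ssize_t)sizeof(idx))
        return 1;

    close(sv[0]);     /* EOF on the socket ends the consumer's loop */
    wait(NULL);
    return 0;
}

In a real design the shared buffer would be the log ring itself and the
filters would decide which indexes get pushed to which subscriber; here
everything is collapsed into a socketpair() between a forked parent and
child just to keep the example self-contained.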

