On 01/22/2018 08:52 PM, Robert Haas wrote:
> On Sat, Jan 20, 2018 at 7:51 AM, Magnus Hagander <mag...@hagander.net> wrote:
>> Finally found myself back at this one, because I still think this is a
>> problem we definitely need to address (whether with this file or not).
>>
>> The funneling into a single process is definitely an issue.
>>
>> But we don't really solve that problem today with logging to stderr, do we?
>> Because somebody has to pick up the log as it came from stderr. Yes, you get
>> more overhead when sending the log to devnull, but that isn't really a
>> realistic scenario. The question is what to do when you actually want to
>> collect that much logging that quickly.
> 
> I think it depends on where the bottleneck is. If you're limited by 
> the speed at which a single process can write, shutting the logging 
> collector off and letting everyone write fixes it, because now you
> can bring the CPU cycles of many processes to bear rather than just
> one. If you're limited by the rate at which you can lay the file down
> on disk, then turning off the logging collector doesn't help, but I
> don't think that's the main problem. Now, of course, if you're
> writing the file to disk faster than a single process could do all
> those writes, then you're probably also going to need multiple
> processes to keep up with reading it, parsing it, etc. But that's not
> a problem for PostgreSQL core unless we decide to start shipping an
> in-core log analyzer.
> 

Sorry for the naive question, but which of these bottlenecks are we
actually hitting? I don't recall dealing with an actual production
system where the log collector would be an issue, so I don't have a very
good idea where the actual bottleneck is in this case.

I find it hard to believe the collector would be limited by I/O when
writing the data to disk (buffered sequential writes and all that).

So I guess it's more about the process of collecting the data from all
the processes through the pipe, with the PIPE_MAX_PAYLOAD chunking dance, etc.
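To make that chunking dance concrete, here is a simplified standalone sketch of what it amounts to: each message is split into chunks small enough that a pipe write of one chunk is atomic, each chunk carries the sender's pid so the collector can demultiplex and reassemble. The header layout and the 512-byte limit here are invented for illustration, not the actual protocol:

```c
/* Illustrative sketch of chunked logging over a shared pipe.  The header
 * fields and MAX_PAYLOAD value are made up for this example; they are not
 * the real syslogger protocol or the real PIPE_MAX_PAYLOAD. */
#include <assert.h>
#include <stdint.h>
#include <string.h>
#include <stdio.h>

#define MAX_PAYLOAD 512         /* stand-in for PIPE_MAX_PAYLOAD */

typedef struct {
    int32_t pid;                /* sending backend, so chunks can be demuxed */
    int32_t len;                /* payload bytes in this chunk */
    int8_t  is_last;            /* 1 on the final chunk of a message */
} ChunkHeader;

/* Split msg into chunks of at most MAX_PAYLOAD bytes; call sink() once per
 * chunk.  Because each chunk is below PIPE_BUF, a real write() of it to a
 * pipe would be atomic, which is what keeps concurrent backends from
 * interleaving partial messages. */
static void
send_message(int pid, const char *msg, size_t msglen,
             void (*sink)(ChunkHeader, const char *))
{
    size_t off = 0;
    do {
        ChunkHeader h;
        size_t n = msglen - off;
        if (n > MAX_PAYLOAD)
            n = MAX_PAYLOAD;
        h.pid = pid;
        h.len = (int32_t) n;
        h.is_last = (off + n == msglen);
        sink(h, msg + off);
        off += n;
    } while (off < msglen);
}
```

The collector side then has to buffer chunks per pid until it sees is_last, which is exactly the per-message overhead being discussed here.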

I plan to do some experiments with log_statement=all, but my guess is
that the main issue is in transferring individual messages (which may be
further split into multiple chunks). What if we instead sent the log
messages in larger batches? Of course, that would require an alternative
transport mechanism (say, through shared memory) and the collector would
have to merge the batches to maintain ordering. And then there's the
issue that the collector is a pretty critical component.
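The batching idea could look something like this per-backend accumulator: messages pile up in a local buffer and the collector gets poked once per flush rather than once per message. All names and sizes are invented, and the actual hand-off (shared memory or otherwise) is stubbed out:

```c
/* Rough sketch of per-backend log batching.  BATCH_SIZE and the function
 * names are invented for illustration; the real transport (shared memory,
 * pipe, ...) is stubbed out in flush_batch(). */
#include <string.h>
#include <stddef.h>

#define BATCH_SIZE 4096

typedef struct {
    char   buf[BATCH_SIZE];
    size_t used;
    int    flushes;             /* how many batches we handed to the collector */
} LogBatch;

/* Hand the whole batch to the collector in one operation (stubbed). */
static void
flush_batch(LogBatch *b)
{
    if (b->used == 0)
        return;
    /* ... here: copy b->buf into a shared-memory slot in one go ... */
    b->flushes++;
    b->used = 0;
}

/* Append one message; flush first if it would not fit. */
static void
batch_log(LogBatch *b, const char *msg, size_t len)
{
    if (len > BATCH_SIZE)
    {
        /* oversize message: flush what we have, then it would go out on
         * its own (direct send elided in this sketch) */
        flush_batch(b);
        b->flushes++;           /* the oversize send counts as a hand-off */
        return;
    }
    if (b->used + len > BATCH_SIZE)
        flush_batch(b);
    memcpy(b->buf + b->used, msg, len);
    b->used += len;
}
```

A real version would also flush on a timer and on backend exit, and the collector would have to merge batches from different backends by timestamp to keep ordering, which is the part I was hand-waving about above.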

FWIW I'm pretty sure we're not the only project facing the need to
collect a high volume of logs from many processes, so how do other
projects handle this issue?

>>
>> If each backend could actually log to *its own file*, then things would get
>> sped up. But we can't do that today. Unless you use the hooks and build it
>> yourself.
> 
> That seems like a useful thing to support in core.
> 

Yeah, I agree with that.
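For what it's worth, the per-backend-file approach is mechanically simple, since appends to a private file need no cross-process coordination at all. A standalone sketch (the path layout is invented for the example):

```c
/* Minimal sketch of "each backend logs to its own file": open a file named
 * by pid once at startup, then just append to it.  The directory layout and
 * naming scheme here are invented for illustration. */
#include <stdio.h>
#include <unistd.h>

static FILE *
open_backend_log(const char *dir)
{
    char path[256];

    snprintf(path, sizeof(path), "%s/backend-%d.log", dir, (int) getpid());
    /* append mode, so an existing file is extended rather than clobbered */
    return fopen(path, "a");
}
```

The hard parts this sketch skips are the ones the discussion is really about: rotation, and the fact that whatever consumes the logs now has to read and merge many files instead of one.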

>> Per the thread referenced, using the hooks to handle the
>> very-high-rate-logging case seems to be the conclusion. But is that still
>> the conclusion, or do we feel we need to also have a native solution?
>>
>> And if the conclusion is that hooks is the way to go for that, then is the
>> slowdown of this patch actually a relevant problem to it?
> 
> I think that if we commit what you've proposed, we're making it harder
> for people who have a high volume of logging but are not currently
> using hooks.  I think we should try really hard to avoid the situation
> where our suggested workaround for a server change is "go write some C
> code and maybe you can get back to the performance you had with
> release N-1".  That's just not friendly.
> 
> I wonder if it would be feasible to set things up so that the logging
> collector was always started, but whether or not backends used it or
> wrote directly to their original stderr was configurable (e.g. dup
> stderr elsewhere, then dup whichever output source is currently
> selected onto stderr, then dup the other one if the config is changed
> later).
> 
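The dup dance described above could be sketched roughly as follows. This is a simplified standalone illustration of switching stderr between sinks at runtime, with no SIGHUP/reload handling or error recovery, not actual PostgreSQL code:

```c
/* Sketch of the "dup stderr elsewhere, dup the selected sink onto stderr"
 * idea: redirect fd 2 to an arbitrary target and restore it later.
 * Simplified: no config-reload path, no error handling. */
#include <stdio.h>
#include <unistd.h>

/* Point stderr at target_fd; returns a saved copy of the original stderr
 * so it can be restored if the configuration changes back. */
static int
redirect_stderr_to(int target_fd)
{
    int saved = dup(STDERR_FILENO); /* keep the original around */

    fflush(stderr);                 /* don't lose buffered output */
    dup2(target_fd, STDERR_FILENO); /* fd 2 now refers to target_fd */
    return saved;
}

static void
restore_stderr(int saved)
{
    fflush(stderr);
    dup2(saved, STDERR_FILENO);
    close(saved);
}
```

Since dup2() atomically replaces fd 2, writes issued by the process before and after the switch simply land in whichever sink is current, which is what would let backends flip between the collector pipe and their original stderr.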

I think the hook system is a really powerful tool, but it seems a bit
awkward to force people to use it to improve performance like this ...
That seems like something core should do out of the box.

regards

-- 
Tomas Vondra                  http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
