On 04/02/2012 12:00 PM, Tom Lane wrote:
Andrew Dunstan <and...@dunslane.net> writes:
On 04/01/2012 06:34 PM, Andrew Dunstan wrote:
Some of my PostgreSQL Experts colleagues have been complaining to me
that servers under load with very large queries produce corrupted CSV
log files,
We could just increase CHUNK_SLOTS in syslogger.c, but I opted instead
to stripe the slots across a two-dimensional array, so we don't have to
search a larger number of slots for any given message. See the attached
patch.
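The idea is roughly the following (just a sketch to show the shape of it,
not the actual patch; CHUNK_STRIPES, SLOTS_PER_STRIPE and the helper name
are made up here, and real code would also have to init/reset the
StringInfo in each slot):

    #include "postgres.h"
    #include "lib/stringinfo.h"

    #define CHUNK_STRIPES    16        /* hypothetical */
    #define SLOTS_PER_STRIPE 20        /* hypothetical */

    typedef struct
    {
        int32          pid;    /* PID of source backend, 0 if slot is free */
        StringInfoData data;   /* accumulated chunks for that PID */
    } save_buffer;

    static save_buffer saved_chunks[CHUNK_STRIPES][SLOTS_PER_STRIPE];

    /*
     * Find (or claim) the slot for a given pid, probing only the one
     * stripe that pid hashes to instead of the whole array.
     */
    static save_buffer *
    find_slot_for_pid(int32 pid)
    {
        int         stripe = ((uint32) pid) % CHUNK_STRIPES;
        save_buffer *free_slot = NULL;
        int         i;

        for (i = 0; i < SLOTS_PER_STRIPE; i++)
        {
            save_buffer *buf = &saved_chunks[stripe][i];

            if (buf->pid == pid)
                return buf;         /* partial message already in progress */
            if (buf->pid == 0 && free_slot == NULL)
                free_slot = buf;    /* remember first free slot in stripe */
        }

        if (free_slot != NULL)
            free_slot->pid = pid;
        return free_slot;           /* NULL means this stripe is full */
    }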
This seems like it isn't actually fixing the problem, only pushing out
the onset of trouble a bit. Should we not replace the fixed-size array
with a dynamic data structure?
"A bit" = 10 to 20 times - more if we set CHUNK_STRIPES higher. :-)
But maybe you're right. If we do that and stick with my two-dimensional
scheme to keep the number of probes per chunk down, we'd need to reorganize
the array every time we increased it. That might be a bit messy, but it
might be OK. Or maybe linearly searching an array of several hundred
slots for our pid for every log chunk that comes in would be fast enough.
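Something like this, maybe (only a sketch, untested and not proposed code;
the helper name is invented, and it assumes the same save_buffer struct
as in syslogger.c):

    /* save_buffer as in syslogger.c: { int32 pid; StringInfoData data; } */
    static save_buffer *saved_chunks = NULL;
    static int  n_slots = 0;

    static save_buffer *
    find_slot_for_pid(int32 pid)
    {
        save_buffer *free_slot = NULL;
        int          i;

        /* one linear pass over however many slots we currently have */
        for (i = 0; i < n_slots; i++)
        {
            if (saved_chunks[i].pid == pid)
                return &saved_chunks[i];
            if (saved_chunks[i].pid == 0 && free_slot == NULL)
                free_slot = &saved_chunks[i];
        }

        if (free_slot == NULL)
        {
            /* every slot is busy: double the array (or create it) */
            int     new_n = (n_slots > 0) ? n_slots * 2 : 20;

            saved_chunks = saved_chunks
                ? repalloc(saved_chunks, new_n * sizeof(save_buffer))
                : palloc(new_n * sizeof(save_buffer));
            memset(saved_chunks + n_slots, 0,
                   (new_n - n_slots) * sizeof(save_buffer));
            free_slot = &saved_chunks[n_slots];
            n_slots = new_n;
        }

        free_slot->pid = pid;       /* real code would also init the data */
        return free_slot;
    }

That never fails to find a slot, but the per-chunk cost grows with the
number of concurrent partial messages.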
cheers
andrew