Andrew Dunstan <and...@dunslane.net> writes:
On 04/02/2012 01:03 PM, Tom Lane wrote:
When I said list, I meant a List *. No fixed size.
Ok, like this?
I think this could use a bit of editorialization (I don't think the
stripe terminology is still applicable, in particular), but the
general ...
On 04/04/2012 12:13 PM, Tom Lane wrote:
Andrew Dunstan <and...@dunslane.net> writes:
On 04/02/2012 01:03 PM, Tom Lane wrote:
When I said list, I meant a List *. No fixed size.
Ok, like this?
I think this could use a bit of editorialization (I don't think the
stripe terminology is still applicable, in particular), but the general ...
Andrew Dunstan <and...@dunslane.net> writes:
On 04/04/2012 12:13 PM, Tom Lane wrote:
Does anyone feel that it's a bad idea that list entries are never
reclaimed? In the worst case a transient load peak could result in
a long list that permanently adds search overhead. Not sure if it's
worth ...
I wrote:
The idea I had in mind was to compensate for adding list-removal logic
by getting rid of the concept of an unused entry. If the removal is
conditional then you can't do that and you end up with the complications
of both methods. Anyway I've not tried to code it yet.
I concluded ...
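Tom's point above, that adding list-removal logic lets the "unused entry" state disappear entirely, can be sketched as an unlink-on-completion step over a singly linked list of pending entries. This is an illustrative standalone sketch, not the committed PostgreSQL code; the `Pending` struct and `remove_pending` name are assumptions:

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical pending-message entry in one bucket list (not PG's struct). */
typedef struct Pending
{
    int             pid;
    struct Pending *next;
} Pending;

/*
 * Unlink and free the entry for pid once its final chunk arrives.
 * Because entries are removed as soon as a message completes, every
 * entry on the list is live: there is no "unused entry" state to track.
 */
static void
remove_pending(Pending **head, int pid)
{
    Pending **link = head;

    while (*link != NULL)
    {
        if ((*link)->pid == pid)
        {
            Pending *dead = *link;

            *link = dead->next;     /* splice the entry out of the list */
            free(dead);
            return;
        }
        link = &(*link)->next;
    }
}
```

The pointer-to-pointer walk avoids special-casing removal of the head entry.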
On 04/04/2012 03:09 PM, Tom Lane wrote:
I wrote:
The idea I had in mind was to compensate for adding list-removal logic
by getting rid of the concept of an unused entry. If the removal is
conditional then you can't do that and you end up with the complications
of both methods. Anyway I've not tried to code it yet.
On 04/02/2012 01:03 PM, Tom Lane wrote:
Andrew Dunstan <and...@dunslane.net> writes:
On 04/02/2012 12:44 PM, Tom Lane wrote:
You could do something like having a list of pending chunks for each
value of (pid mod 256). The length of each such list ought to be plenty
short under ordinary circumstances.
On 04/01/2012 06:34 PM, Andrew Dunstan wrote:
Some of my PostgreSQL Experts colleagues have been complaining to me
that servers under load with very large queries cause CSV log files
that are corrupted, because lines are apparently multiplexed. The log
chunking protocol between the errlog routines and the syslogger is
supposed to prevent ...
Andrew Dunstan <and...@dunslane.net> writes:
On 04/01/2012 06:34 PM, Andrew Dunstan wrote:
Some of my PostgreSQL Experts colleagues have been complaining to me
that servers under load with very large queries cause CSV log files
that are corrupted, because lines are apparently multiplexed. ...
We could just increase CHUNK_SLOTS in ...
On 04/02/2012 12:00 PM, Tom Lane wrote:
Andrew Dunstan <and...@dunslane.net> writes:
On 04/01/2012 06:34 PM, Andrew Dunstan wrote:
Some of my PostgreSQL Experts colleagues have been complaining to me
that servers under load with very large queries cause CSV log files
that are corrupted, because lines are apparently multiplexed. ...
We could just increase CHUNK_SLOTS in ...
Andrew Dunstan <and...@dunslane.net> writes:
On 04/02/2012 12:00 PM, Tom Lane wrote:
This seems like it isn't actually fixing the problem, only pushing out
the onset of trouble a bit. Should we not replace the fixed-size array
with a dynamic data structure?
But maybe you're right. If we do ...
On 04/02/2012 12:44 PM, Tom Lane wrote:
Andrew Dunstan <and...@dunslane.net> writes:
On 04/02/2012 12:00 PM, Tom Lane wrote:
This seems like it isn't actually fixing the problem, only pushing out
the onset of trouble a bit. Should we not replace the fixed-size array
with a dynamic data structure?
Andrew Dunstan <and...@dunslane.net> writes:
On 04/02/2012 12:44 PM, Tom Lane wrote:
You could do something like having a list of pending chunks for each
value of (pid mod 256). The length of each such list ought to be plenty
short under ordinary circumstances.
Yeah, ok, that should work. How ...
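The "(pid mod 256)" bucketing Tom suggests can be sketched as follows. This is a simplified standalone illustration under assumed names (`PendingChunk`, `buckets`, `get_pending`, `add_chunk` are all hypothetical), using a plain singly linked list per bucket rather than PostgreSQL's own List machinery, and a fixed data buffer in place of real dynamic sizing:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

#define NBUCKETS 256            /* one list per value of (pid mod 256) */

/* Hypothetical partial-message entry; not PostgreSQL's actual struct. */
typedef struct PendingChunk
{
    int                  pid;        /* backend the chunks came from */
    char                 data[1024]; /* accumulated message text (fixed for the sketch) */
    size_t               len;
    struct PendingChunk *next;
} PendingChunk;

static PendingChunk *buckets[NBUCKETS];

/* Find the pending entry for a pid, creating one if none exists yet. */
static PendingChunk *
get_pending(int pid)
{
    int           b = pid % NBUCKETS;
    PendingChunk *p;

    for (p = buckets[b]; p != NULL; p = p->next)
        if (p->pid == pid)
            return p;

    p = calloc(1, sizeof(PendingChunk));
    p->pid = pid;
    p->next = buckets[b];       /* push onto this bucket's list */
    buckets[b] = p;
    return p;
}

/* Append one chunk of a message to its sender's pending entry. */
static void
add_chunk(int pid, const char *chunk, size_t n)
{
    PendingChunk *p = get_pending(pid);

    memcpy(p->data + p->len, chunk, n);
    p->len += n;
}
```

Since each search only walks the list for one of 256 buckets, concurrent senders mostly land in different buckets and the per-lookup scan stays short under ordinary load, which is the point of the suggestion.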
Some of my PostgreSQL Experts colleagues have been complaining to me
that servers under load with very large queries cause CSV log files that
are corrupted, because lines are apparently multiplexed. The log
chunking protocol between the errlog routines and the syslogger is
supposed to prevent ...
13 matches