Does postgres actually do multiple concurrent sorts within a single
backend?
Certainly. Consider for example a merge join with each input being
sorted by an explicit sort step. DISTINCT, ORDER BY, UNION, and
related operators require their own sort steps in the current
implementation. It'
Tom Lane <[EMAIL PROTECTED]> writes:
> Paul Tillotson <[EMAIL PROTECTED]> writes:
> > Does postgres actually do multiple concurrent sorts within a single
> > backend?
>
> Certainly. Consider for example a merge join with each input being
> sorted by an explicit sort step. DISTINCT, ORDER BY, UNION, and
> related operators require their own sort steps in the current
> implementation.
Tom Lane wrote:
> Paul Tillotson <[EMAIL PROTECTED]> writes:
>> Does postgres actually do multiple concurrent sorts within a single
>> backend?
>
> Certainly. Consider for example a merge join with each input being
> sorted by an explicit sort step. DISTINCT, ORDER BY, UNION, and
> related operators require their own sort steps in the current
> implementation.
But isn't the problem when the planner screws up, not the sort_mem setting?
There was my case where the 7.4 planner estimated 1500 distinct rows when
there were actually 1391110. On 7.3.4 it used about 4.4MB, whereas 7.4
definitely used more than 400MB for the same query - I had to kill
post
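The scale of that misestimate is worth spelling out. The row counts below are from the message above; the per-hash-entry byte figure is only an assumed illustrative overhead, not a measured PostgreSQL constant:

```python
# How a ~900x planner misestimate turns into a memory blowup.
# Row counts are from the report; bytes_per_entry is an assumption
# chosen for illustration, not a PostgreSQL constant.
estimated_groups = 1_500
actual_groups = 1_391_110

error_factor = actual_groups / estimated_groups    # roughly 927x off
bytes_per_entry = 300                              # assumed per-entry cost
projected = estimated_groups * bytes_per_entry     # what the planner budgeted
actual = actual_groups * bytes_per_entry           # ~400 MB, matching the report
```

With the assumed entry size, the planner expects under half a megabyte of hash table and instead builds one in the 400 MB range, which is the behavior described above.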
On Tue, 07 Dec 2004 07:50:44 -0500, P.J. Josh Rovero
<[EMAIL PROTECTED]> wrote:
> There are many reports of kernel problems with memory allocation
> (too aggressive) and swap issues with RHEL 3.0 on both RAID
> and non-RAID systems. I hope folks have worked through all
> those issues before blaming postgresql.
There are many reports of kernel problems with memory allocation
(too aggressive) and swap issues with RHEL 3.0 on both RAID
and non-RAID systems. I hope folks have worked through all
those issues before blaming postgresql.
Tom Lane wrote:
If I thought that a 200% error in memory usage were cause f
Neil Conway <[EMAIL PROTECTED]> writes:
> As a quick hack, what about throwing away the constructed hash table and
> switching to hashing for sorting if we exceed sort_mem by a significant
> factor? (say, 200%) We might also want to print a warning message to the
> logs.
If I thought that a 200% error in memory usage were cause f
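Neil's quick hack, abandoning the hash table once it blows past sort_mem by some factor and falling back to sort-based grouping, might look roughly like this. It is a sketch of the idea only, not executor code, and the memory check via `sys.getsizeof` is a crude stand-in for real accounting:

```python
import sys
from itertools import groupby

def aggregate_counts(rows, mem_limit_bytes, overshoot_factor=3.0):
    """Hash-based COUNT(*) per key; if the table grows past
    overshoot_factor * mem_limit_bytes (3.0 = "exceed by 200%"),
    discard it and fall back to sort-then-group. rows must be a
    list so the fallback can rescan it. Sketch only."""
    table = {}
    for key in rows:
        table[key] = table.get(key, 0) + 1
        # crude memory check: getsizeof covers only the dict itself
        if sys.getsizeof(table) > overshoot_factor * mem_limit_bytes:
            table = None  # throw away the half-built hash table
            break
    if table is not None:
        return table
    # fallback: restart with sort-based grouping over the same input
    return {k: sum(1 for _ in g) for k, g in groupby(sorted(rows))}
```

Both paths produce the same result; the point of the hack is that the sort path degrades gracefully to disk, while the hash path in 7.4/8.0 does not.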
On Mon, 2004-12-06 at 23:55 -0500, Tom Lane wrote:
> Bear in mind that the price of honoring sort_mem carefully is
> considerably far from zero.
I'll do some thinking about disk-based spilling for hashed aggregation
for 8.1
> The issue with the hash code is that it sets size parameters on the
> b
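The disk-based spilling Neil mentions is commonly done by hash-partitioning: when memory fills, write whole partitions out and aggregate each partition separately on a second pass. Here is a minimal sketch of that general scheme, with in-memory lists standing in for temp files; it is not the implementation any PostgreSQL release uses:

```python
# Sketch of hash-partitioned spilling for aggregation: split the input
# by hash into N partitions, each small enough to aggregate in memory
# one at a time. Lists stand in for spill files. Illustrative only.
def spill_aggregate(rows, n_partitions=4):
    partitions = [[] for _ in range(n_partitions)]
    for key in rows:  # pass 1: "spill" each row to its partition
        partitions[hash(key) % n_partitions].append(key)
    result = {}
    for part in partitions:  # pass 2: aggregate one partition at a time
        counts = {}
        for key in part:
            counts[key] = counts.get(key, 0) + 1
        result.update(counts)  # safe: a key never spans partitions
    return result
```

Because every occurrence of a key lands in the same partition, each partition can be aggregated independently within the memory budget.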
Paul Tillotson <[EMAIL PROTECTED]> writes:
> Does postgres actually do multiple concurrent sorts within a single
> backend?
Certainly. Consider for example a merge join with each input being
sorted by an explicit sort step. DISTINCT, ORDER BY, UNION, and related
operators require their own sort steps in the current implementation.
Neil Conway <[EMAIL PROTECTED]> writes:
> On Mon, 2004-12-06 at 22:19 -0300, Alvaro Herrera wrote:
>> AFAIK this is indeed the case with hashed aggregation, which uses the
>> sort_mem (work_mem) parameter to control its operation, but for which it
>> is not a hard limit.
> Hmmm -- I knew we didn't implement disk-spilling for hashed aggregation
Alvaro Herrera wrote:
> On Tue, Dec 07, 2004 at 12:02:13PM +1100, Neil Conway wrote:
>> On Mon, 2004-12-06 at 19:37 -0500, Paul Tillotson wrote:
>>> I seem to remember hearing that the memory limit on certain operations,
>>> such as sorts, is not "enforced" (may the hackers correct me if I am
>>> wrong);
On Mon, 2004-12-06 at 22:19 -0300, Alvaro Herrera wrote:
> AFAIK this is indeed the case with hashed aggregation, which uses the
> sort_mem (work_mem) parameter to control its operation, but for which it
> is not a hard limit.
Hmmm -- I knew we didn't implement disk-spilling for hashed aggregation
On Tue, Dec 07, 2004 at 12:02:13PM +1100, Neil Conway wrote:
> On Mon, 2004-12-06 at 19:37 -0500, Paul Tillotson wrote:
> > I seem to remember hearing that the memory limit on certain operations,
> > such as sorts, is not "enforced" (may the hackers correct me if I am
> > wrong); rather, the plan
On Mon, 2004-12-06 at 19:37 -0500, Paul Tillotson wrote:
> I seem to remember hearing that the memory limit on certain operations,
> such as sorts, is not "enforced" (may the hackers correct me if I am
> wrong); rather, the planner estimates how much a sort might take by
> looking at the statistics
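Because sort_mem (work_mem in 8.0) budgets each individual sort or hash step rather than the backend or the whole cluster, the worst case multiplies out quickly. A back-of-envelope calculation, where the step and connection counts are hypothetical:

```python
# Worst-case memory from sort_mem alone. The limit applies per
# sort/hash step, not per backend, so concurrent steps multiply.
# steps_per_query and connections below are hypothetical examples.
sort_mem_kb = 4096     # the 4 MB setting quoted later in this thread
steps_per_query = 4    # e.g. a merge join (2 sorts) plus ORDER BY and DISTINCT
connections = 30

worst_case_mb = sort_mem_kb * steps_per_query * connections / 1024
# 4 MB x 4 steps x 30 backends = 480 MB, with no single setting capping it
```

That multiplication, plus the hash-aggregation case where the limit is not enforced at all, is why a modest-looking sort_mem can still drive a box into swap.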
... under the periods of heavy swapping, one or more of the postgres
processes would be way up there (between 500MB and 1000MB (which would
easily explain the swapping)) ... the question is: why aren't all of the
processes sharing the same pool of shared memory since I thought that's what
I'm doing
David Esposito
> Cc: [EMAIL PROTECTED]
> Subject: Re: [GENERAL] Performance tuning on RedHat Enterprise Linux 3
>
> On Mon, Dec 06, 2004 at 09:08:02AM -0500, David Esposito wrote:
> > shared_buffers = 131072 (roughly 1GB)
> > max_fsm_relations = 1
> >
On Mon, Dec 06, 2004 at 09:08:02AM -0500, David Esposito wrote:
> According to Bruce Momjian's performance tuning guide, he recommends roughly
> half the amount of physical RAM for the shared_buffers ...
Does he? The guide I've seen from him AFAIR states that you should
allocate around 10% of physical RAM.
"David Esposito" <[EMAIL PROTECTED]> writes:
> New Box:
> shared_buffers = 131072 (roughly 1GB)
This setting is an order of magnitude too large. There is hardly any
evidence that it's worth setting shared_buffers much above 10000.
regards, tom lane
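The arithmetic behind the settings under discussion, with PostgreSQL's default 8 kB page size (the 10000-buffer figure stands in for the conservative advice of the era, as an assumption rather than a quote):

```python
# shared_buffers is counted in 8 kB pages, so the posted value of
# 131072 buffers is exactly 1 GiB - half of the 2 GB box.
BLCKSZ = 8192                     # default PostgreSQL page size in bytes
shared_buffers = 131072           # the setting from the original post

total = shared_buffers * BLCKSZ   # 131072 * 8 kB = 1 GiB
assert total == 1 << 30

# The order-of-magnitude-smaller figure being recommended instead
# (assumed here to be ~10000 buffers) works out to roughly 78 MiB:
modest = 10000 * BLCKSZ
```

Pinning half of RAM in shared buffers leaves the kernel little room for its own page cache and the per-backend sort memory, which fits the swapping reported at the top of the thread.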
On Mon, Dec 06, 2004 at 09:08:02AM -0500, David Esposito wrote:
> shared_buffers = 131072 (roughly 1GB)
> max_fsm_relations = 1
> max_fsm_pages = 1000
> sort_mem = 4096
> vacuum_mem = 262144
> Roughly 25 - 30 connections open (mos
Executive summary: We just did a cutover from a RedHat 8.0 box to a RedHat
Enterprise Linux 3 box and we're seeing a lot more swapping on the new box
than we ever did on the old box ... this is killing performance ...
Background:
Old Box:
RedHat 8.0
2GB Memory
Dual PIII 60