On Thu, Mar 29, 2012 at 7:38 PM, Peter Eisentraut pete...@gmx.net wrote:
On tis, 2012-03-27 at 00:53 +0100, Greg Stark wrote:
Hm. So my original plan was dependent on adding the state-merge
function we've talked about in the past. Not all aggregate functions
necessarily can support such a
On fre, 2012-03-30 at 22:59 -0700, Hitoshi Harada wrote:
I don't know exactly how PL/Proxy or pgpool accomplish the multi-phase
aggregate,
They don't.
but in theory the proposal above is state-merge function,
so it doesn't apply to general aggregate results that passed through
the final
On tis, 2012-03-27 at 00:53 +0100, Greg Stark wrote:
Hm. So my original plan was dependent on adding the state-merge
function we've talked about in the past. Not all aggregate functions
necessarily can support such a function but I think all or nearly all
the builtin aggregates can. Certainly
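The state-merge idea Greg describes can be made concrete. A minimal sketch in plain Python (not PostgreSQL's C aggregate API; the names `avg_transfn`, `avg_mergefn`, and `avg_finalfn` are invented for illustration): avg() works because its transition state (sum, count) merges componentwise, so two partial states built over disjoint subsets of the input combine into the same state a single pass would have produced.

```python
def avg_transfn(state, value):
    """Transition function: fold one input value into the (sum, count) state."""
    s, n = state
    return (s + value, n + 1)

def avg_mergefn(state_a, state_b):
    """State-merge function: combine two partial states built over
    disjoint subsets of the input."""
    return (state_a[0] + state_b[0], state_a[1] + state_b[1])

def avg_finalfn(state):
    """Final function: produce the aggregate result from the state."""
    s, n = state
    return s / n if n else None

data = [1.0, 2.0, 3.0, 4.0]

# Aggregate each half separately, then merge -- same result as one pass.
left = right = (0.0, 0)
for v in data[:2]:
    left = avg_transfn(left, v)
for v in data[2:]:
    right = avg_transfn(right, v)

assert avg_finalfn(avg_mergefn(left, right)) == sum(data) / len(data)
```

Not every aggregate admits such a merge (array_agg's state is order-sensitive, as Hitoshi notes below), but sum, count, min, max, and avg-style states do.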
On Mon, Mar 26, 2012 at 5:11 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Greg Stark st...@mit.edu writes:
On Mon, Mar 26, 2012 at 6:15 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Could you give us a brain dump on the sketch? I've never seen how to
do it without unreasonable overhead.
Hm. So my
On Tue, Mar 27, 2012 at 3:22 AM, Hitoshi Harada umi.tan...@gmail.com wrote:
According to what I've learned in the last couple of months, array_agg
is not ready for fallbacks like dumping to a tuplestore. Even
merge-state is not possible for it. The problem is that the executor
doesn't know
On 03/26/2012 01:54 PM, Andrew Dunstan wrote:
On 03/26/2012 01:34 PM, Tom Lane wrote:
Andrew Dunstan and...@dunslane.net writes:
On 03/26/2012 01:06 PM, Heikki Linnakangas wrote:
Is it possible this job is inserting and then updating (or deleting)
the row it just inserted and doing a
hello,
does the problem show up on 2% of all boxes after 2 weeks or so?
we had a similar problem on UNIX as well. it even materialized on 100 identical
boxes (on 2% of them). it pops up randomly and never stops …
i checked some code paths. some of those messages are direct output via stderr
Andrew Dunstan and...@dunslane.net writes:
I'm really perplexed as to why this fairly simple query should cause an
out of memory error:
select loid, max(pageno) from ldata group by loid order by 2 desc
limit 10;
Looks like the group by/aggregate step is eating lots of memory:
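For reference, what the failing query computes, sketched in plain Python (a hedged model, not anything the backend runs): the maximum pageno per loid, then the ten largest maxima. A streaming group-by like this needs one state entry per distinct loid, which is exactly what blows up when there are millions of large objects.

```python
import heapq

def top_groups(rows, limit=10):
    """rows: iterable of (loid, pageno) pairs.
    Returns [(loid, max_pageno)] for the `limit` largest maxima."""
    maxima = {}                           # one entry per distinct loid
    for loid, pageno in rows:
        if loid not in maxima or pageno > maxima[loid]:
            maxima[loid] = pageno
    # ORDER BY 2 DESC LIMIT n, done with a heap instead of a full sort.
    return heapq.nlargest(limit, maxima.items(), key=lambda kv: kv[1])

# Toy data: loid k has pages 0..k, so max(pageno) == loid.
rows = [(loid, page) for loid in range(100) for page in range(loid + 1)]
top = top_groups(rows, limit=3)
assert top == [(99, 99), (98, 98), (97, 97)]
```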
On 03/26/2012 11:18 AM, Tom Lane wrote:
Andrew Dunstan and...@dunslane.net writes:
I'm really perplexed as to why this fairly simple query should cause an
out of memory error:
select loid, max(pageno) from ldata group by loid order by 2 desc
limit 10;
Looks like the group
Andrew Dunstan and...@dunslane.net writes:
On 03/26/2012 11:18 AM, Tom Lane wrote:
Could we see EXPLAIN output for this query?
Currently it shows:
Limit (cost=19443025.87..19443025.89 rows=10 width=8)
  -> Sort (cost=19443025.87..19446451.29 rows=1370168 width=8)
Sort Key:
On Mon, Mar 26, 2012 at 4:03 PM, Andrew Dunstan and...@dunslane.net wrote:
TupleHashTable: 619175960 total in 95 blocks; 821528 free
(331 chunks); 618354432 used
I think the plan you showed isn't the plan that's running out of
memory. I think it's running out of memory because
On 03/26/2012 12:11 PM, Tom Lane wrote:
Andrew Dunstan and...@dunslane.net writes:
On 03/26/2012 11:18 AM, Tom Lane wrote:
Could we see EXPLAIN output for this query?
Currently it shows:
Limit (cost=19443025.87..19443025.89 rows=10 width=8)
  -> Sort (cost=19443025.87..19446451.29
On 03/26/2012 12:20 PM, Greg Stark wrote:
On Mon, Mar 26, 2012 at 4:03 PM, Andrew Dunstan and...@dunslane.net wrote:
TupleHashTable: 619175960 total in 95 blocks; 821528 free
(331 chunks); 618354432 used
I think the plan you showed isn't the plan that's running out of
Andrew Dunstan and...@dunslane.net writes:
On 03/26/2012 12:11 PM, Tom Lane wrote:
That plan should not create a tuple hash table, so I think it's almost
certain that the plan changed. It might be interesting to remove the
pg_statistic rows for the table and then see what plan you get.
On Mon, Mar 26, 2012 at 5:41 PM, Andrew Dunstan and...@dunslane.net wrote:
Combo CIDs: 755490840 total in 100 blocks; 5161072 free (381
chunks); 750329768 used
I think you'll have to catch Heikki's attention to get a good answer to this.
Is it possible this job is inserting and then
On Mon, Mar 26, 2012 at 5:43 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Hm. This illustrates that it's not too prudent to rely on a default
numdistinct estimate to decide that a hash aggregation is safe :-(.
We had probably better tweak the cost estimation rules to not trust
that. Maybe, if we
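The hazard Tom describes can be modeled in a few lines. A hypothetical sketch (not the planner's actual code; `WORK_MEM`, `ENTRY_BYTES`, and the counts are assumed values): the planner trusts an ndistinct estimate to decide whether a hash aggregate's table fits in memory, and a low default fallback makes hash aggregation look safe when it is not.

```python
WORK_MEM = 64 * 1024 * 1024        # assumed memory budget, 64 MB
ENTRY_BYTES = 64                   # assumed per-group hash entry overhead

def hashagg_looks_safe(estimated_ndistinct):
    """Planner-style check: does the estimated hash table fit in memory?"""
    return estimated_ndistinct * ENTRY_BYTES <= WORK_MEM

# A default ndistinct guess versus reality on a table like ldata:
default_estimate = 200             # DEFAULT_NUM_DISTINCT-style fallback
actual_ndistinct = 9_000_000       # millions of distinct loid groups

assert hashagg_looks_safe(default_estimate)        # planner: fits easily
assert not hashagg_looks_safe(actual_ndistinct)    # runtime: out of memory
```

With no statistics, the estimate is a small constant regardless of table size, so the gap between estimate and reality is unbounded.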
On 26.03.2012 19:51, Greg Stark wrote:
On Mon, Mar 26, 2012 at 5:41 PM, Andrew Dunstan and...@dunslane.net wrote:
Combo CIDs: 755490840 total in 100 blocks; 5161072 free (381
chunks); 750329768 used
I think you'll have to catch Heikki's attention to get a good answer to this.
Is
On Mon, Mar 26, 2012 at 5:43 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Hm. This illustrates that it's not too prudent to rely on a default
numdistinct estimate to decide that a hash aggregation is safe :-(.
We had probably better tweak the cost estimation rules to not trust
that. Maybe, if we
On 26.03.2012 19:59, Heikki Linnakangas wrote:
On 26.03.2012 19:51, Greg Stark wrote:
On Mon, Mar 26, 2012 at 5:41 PM, Andrew Dunstan and...@dunslane.net wrote:
Combo CIDs: 755490840 total in 100 blocks; 5161072 free (381
chunks); 750329768 used
I think you'll have to catch Heikki's attention
Greg Stark st...@mit.edu writes:
I have a sketch for how to handle spilling hash aggregates to disk in
my head. I'm not sure if it's worth the amount of complexity it would
require but I'll poke around a bit and see if it works out well.
It'd be awfully nice if those could spill to disk. I
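One common shape for such a spill scheme, sketched hypothetically in Python (this is an illustration of the general partitioned-spill technique, not Greg's undisclosed design or the backend's eventual implementation): once the in-memory table is full, tuples whose group is not already present are hash-partitioned into overflow batches standing in for temp files, and each batch is aggregated in a later pass.

```python
def hash_agg(rows, max_groups, depth=0):
    """rows: iterable of (key, value); yields (key, sum_of_values) pairs.
    max_groups caps the in-memory hash table size."""
    table = {}
    partitions = [[] for _ in range(4)]       # stand-ins for temp files
    for key, val in rows:
        if key in table:
            table[key] += val                 # group already in memory
        elif len(table) < max_groups:
            table[key] = val                  # room for a new group
        else:
            # Table is full: spill the tuple, choosing a partition by
            # rehashing with the depth so recursion re-splits the keys.
            partitions[hash((key, depth)) % 4].append((key, val))
    yield from table.items()
    for part in partitions:
        if part:                              # aggregate each spill file
            yield from hash_agg(part, max_groups, depth + 1)

# 10 groups, but only 3 fit in memory at once -- results still correct.
rows = [(k % 10, 1) for k in range(100)]
assert dict(hash_agg(rows, max_groups=3)) == {k: 10 for k in range(10)}
```

A key once spilled is never admitted to the (already full) table, so every group is finalized exactly once, either in memory or inside one partition.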
On 03/26/2012 01:06 PM, Heikki Linnakangas wrote:
On 26.03.2012 19:59, Heikki Linnakangas wrote:
On 26.03.2012 19:51, Greg Stark wrote:
On Mon, Mar 26, 2012 at 5:41 PM, Andrew Dunstan and...@dunslane.net wrote:
Combo CIDs: 755490840 total in 100 blocks; 5161072 free (381
chunks); 750329768
Andrew Dunstan and...@dunslane.net writes:
On 03/26/2012 01:06 PM, Heikki Linnakangas wrote:
Is it possible this job is inserting and then updating (or deleting)
the row it just inserted and doing a large number of such
insert/update operations all within the same transaction? Or perhaps
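Heikki's question points at why such a workload grows the "Combo CIDs" context. A hypothetical model (plain Python, not backend code; the names are invented): when a tuple is inserted and then updated or deleted in the same transaction, the backend must remember both the inserting and the deleting command ID, so it records a (cmin, cmax) combo entry in a per-transaction array that lives until commit.

```python
combo_cids = []      # per-transaction array of (cmin, cmax) pairs
combo_lookup = {}    # dedup map; a repeated pair reuses its entry

def get_combo_cid(cmin, cmax):
    """Return the index of the combo entry for (cmin, cmax),
    allocating a new one if this pair has not been seen."""
    key = (cmin, cmax)
    if key not in combo_lookup:
        combo_lookup[key] = len(combo_cids)
        combo_cids.append(key)
    return combo_lookup[key]

# A restore-like job: each row inserted by command i, updated by i+1,
# all inside one transaction -- every (cmin, cmax) pair is distinct.
for i in range(0, 1_000_000, 2):
    get_combo_cid(i, i + 1)

# The array holds 500,000 entries and cannot be freed until commit.
assert len(combo_cids) == 500_000
```

Deduplication only helps when pairs repeat; with a fresh command ID per statement, each insert/update pair is unique and memory grows linearly with the row count.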
On 03/26/2012 01:34 PM, Tom Lane wrote:
Andrew Dunstan and...@dunslane.net writes:
On 03/26/2012 01:06 PM, Heikki Linnakangas wrote:
Is it possible this job is inserting and then updating (or deleting)
the row it just inserted and doing a large number of such
insert/update operations all
Andrew Dunstan and...@dunslane.net writes:
On 03/26/2012 01:34 PM, Tom Lane wrote:
Hm. The test case is just a straight pg_restore of lots and lots of LOs?
What pg_dump version was the dump made with?
8.4.8, same as the target. We get the same issue whether we restore
direct to the
On Mon, Mar 26, 2012 at 6:15 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Greg Stark st...@mit.edu writes:
I have a sketch for how to handle spilling hash aggregates to disk in
my head. I'm not sure if it's worth the amount of complexity it would
require but I'll poke around a bit and see if it
Greg Stark st...@mit.edu writes:
On Mon, Mar 26, 2012 at 6:15 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Could you give us a brain dump on the sketch? I've never seen how to
do it without unreasonable overhead.
Hm. So my original plan was dependent on adding the state-merge
function we've