On fre, 2012-03-30 at 22:59 -0700, Hitoshi Harada wrote:
> I don't know exactly how PL/Proxy or pgpool accomplish multi-phase
> aggregation,
They don't.
> but in theory the proposal above is a state-merge function,
> so it doesn't apply to general aggregate results that have passed
> through the final
On Thu, Mar 29, 2012 at 7:38 PM, Peter Eisentraut wrote:
> On tis, 2012-03-27 at 00:53 +0100, Greg Stark wrote:
>> Hm. So my original plan was dependent on adding the state-merge
>> function we've talked about in the past. Not all aggregate functions
>> necessarily can support such a function but
On tis, 2012-03-27 at 00:53 +0100, Greg Stark wrote:
> Hm. So my original plan was dependent on adding the state-merge
> function we've talked about in the past. Not all aggregate functions
> necessarily can support such a function but I think all or nearly all
> the builtin aggregates can. Certain
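For reference, the "state-merge" function being discussed takes two partial
transition states and combines them, so an aggregate can be computed in pieces
and merged at the end. A minimal SQL sketch of the idea using only built-in
support functions; note the COMBINEFUNC option shown here did not exist at the
time of this thread (it appeared later, in PostgreSQL 9.6), so treat it as an
illustration of the concept rather than the patch under discussion:

    -- hypothetical my_sum(int4): its transition state is a plain bigint,
    -- so two partial states can be merged by simply adding them
    CREATE AGGREGATE my_sum(int4) (
        sfunc       = int4_sum,   -- per-row transition function
        stype       = int8,       -- transition state: an ordinary bigint
        combinefunc = int8pl,     -- the "state-merge" step: add two partial states
        initcond    = '0'
    );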
On 03/26/2012 01:54 PM, Andrew Dunstan wrote:
On 03/26/2012 01:34 PM, Tom Lane wrote:
Andrew Dunstan writes:
On 03/26/2012 01:06 PM, Heikki Linnakangas wrote:
Is it possible this job is inserting and then updating (or deleting)
the row it just inserted and doing a large number of such
in
On Tue, Mar 27, 2012 at 3:22 AM, Hitoshi Harada wrote:
> According to what I've learned in the last couple of months, array_agg
> is not ready for fallbacks like dumping to a tuplestore. Even a
> merge-state function is not feasible for them. The problem is that the
> executor doesn't know how to serialize/d
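The aggregates Hitoshi refers to keep their transition state in an opaque
"internal" structure (array_agg builds an ArrayBuildState, for example), which
the executor cannot copy or write out as an ordinary datum. A query one could
run to list the built-in aggregates in that category, using nothing beyond the
pg_aggregate catalog:

    -- aggregates whose transition state is the pseudo-type "internal"
    -- and therefore cannot be serialized as an ordinary value
    SELECT aggfnoid::regprocedure AS aggregate, aggtransfn
    FROM   pg_aggregate
    WHERE  aggtranstype = 'internal'::regtype;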
On Mon, Mar 26, 2012 at 5:11 PM, Tom Lane wrote:
> Greg Stark writes:
>> On Mon, Mar 26, 2012 at 6:15 PM, Tom Lane wrote:
>>> Could you give us a brain dump on the sketch? I've never seen how to
>>> do it without unreasonable overhead.
>
>> Hm. So my original plan was dependent on adding the st
Greg Stark writes:
> On Mon, Mar 26, 2012 at 6:15 PM, Tom Lane wrote:
>> Could you give us a brain dump on the sketch? I've never seen how to
>> do it without unreasonable overhead.
> Hm. So my original plan was dependent on adding the state-merge
> function we've talked about in the past. Not
On Mon, Mar 26, 2012 at 6:15 PM, Tom Lane wrote:
> Greg Stark writes:
>> I have a sketch for how to handle spilling hash aggregates to disk in
>> my head. I'm not sure if it's worth the amount of complexity it would
>> require but I'll poke around a bit and see if it works out well.
>
> It'd be a
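Until hash aggregation can spill, the usual escape hatch for a query like the
one in this thread is to push the planner toward a sort-based GroupAggregate,
which can already spill to disk through tuplesort. A session-level sketch,
reusing the query from Andrew's report:

    SET enable_hashagg = off;   -- fall back to sort-based grouping for this session
    EXPLAIN
    SELECT loid, max(pageno) FROM ldata GROUP BY loid ORDER BY 2 DESC LIMIT 10;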
Andrew Dunstan writes:
> On 03/26/2012 01:34 PM, Tom Lane wrote:
>> Hm. The test case is just a straight pg_restore of lots and lots of LOs?
>> What pg_dump version was the dump made with?
> 8.4.8, same as the target. We get the same issue whether we restore
> direct to the database from pg_res
On 03/26/2012 01:34 PM, Tom Lane wrote:
Andrew Dunstan writes:
On 03/26/2012 01:06 PM, Heikki Linnakangas wrote:
Is it possible this job is inserting and then updating (or deleting)
the row it just inserted and doing a large number of such
insert/update operations all within the same transa
Andrew Dunstan writes:
> On 03/26/2012 01:06 PM, Heikki Linnakangas wrote:
>>> Is it possible this job is inserting and then updating (or deleting)
>>> the row it just inserted and doing a large number of such
>>> insert/update operations all within the same transaction? Or perhaps
>>> it's updat
On 03/26/2012 01:06 PM, Heikki Linnakangas wrote:
On 26.03.2012 19:59, Heikki Linnakangas wrote:
On 26.03.2012 19:51, Greg Stark wrote:
On Mon, Mar 26, 2012 at 5:41 PM, Andrew Dunstan
wrote:
Combo CIDs: 755490840 total in 100 blocks; 5161072 free (381
chunks); 750329768 used
I think you'll
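For context, a combo command id is needed whenever a tuple is created and then
updated or deleted by a later command inside the same transaction, and each
distinct (cmin, cmax) pair gets its own entry in that "Combo CIDs" context. The
pattern Heikki is asking about looks roughly like this (table and column names
are made up for illustration):

    BEGIN;
    INSERT INTO staging (id, payload) VALUES (1, 'x');
    UPDATE staging SET payload = 'y' WHERE id = 1;   -- same row, same transaction
    -- ... repeated for millions of rows without committing ...
    COMMIT;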
Greg Stark writes:
> I have a sketch for how to handle spilling hash aggregates to disk in
> my head. I'm not sure if it's worth the amount of complexity it would
> require but I'll poke around a bit and see if it works out well.
It'd be awfully nice if those could spill to disk. I think that
cu
On 26.03.2012 19:59, Heikki Linnakangas wrote:
On 26.03.2012 19:51, Greg Stark wrote:
On Mon, Mar 26, 2012 at 5:41 PM, Andrew Dunstan
wrote:
Combo CIDs: 755490840 total in 100 blocks; 5161072 free (381
chunks); 750329768 used
I think you'll have to catch Heikki's attention to get a good answe
On Mon, Mar 26, 2012 at 5:43 PM, Tom Lane wrote:
> Hm. This illustrates that it's not too prudent to rely on a default
> numdistinct estimate to decide that a hash aggregation is safe :-(.
> We had probably better tweak the cost estimation rules to not trust
> that. Maybe, if we have a default e
On 26.03.2012 19:51, Greg Stark wrote:
On Mon, Mar 26, 2012 at 5:41 PM, Andrew Dunstan wrote:
Combo CIDs: 755490840 total in 100 blocks; 5161072 free (381
chunks); 750329768 used
I think you'll have to catch Heikki's attention to get a good answer to this.
Is it possible this job
On Mon, Mar 26, 2012 at 5:43 PM, Tom Lane wrote:
> Hm. This illustrates that it's not too prudent to rely on a default
> numdistinct estimate to decide that a hash aggregation is safe :-(.
> We had probably better tweak the cost estimation rules to not trust
> that. Maybe, if we have a default
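Separately from fixing the cost model, one can make sure the planner is not
working from a default ndistinct guess at all. On PostgreSQL 9.0 and later the
estimate can be pinned per column; the server in this report is 8.4, so this is
only a sketch of the idea, and the value shown is illustrative:

    ALTER TABLE ldata ALTER COLUMN loid SET (n_distinct = 1000000);  -- illustrative value
    ANALYZE ldata;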
On Mon, Mar 26, 2012 at 5:41 PM, Andrew Dunstan wrote:
> Combo CIDs: 755490840 total in 100 blocks; 5161072 free (381
> chunks); 750329768 used
I think you'll have to catch Heikki's attention to get a good answer to this.
Is it possible this job is inserting and then updating (or delete
Andrew Dunstan writes:
> On 03/26/2012 12:11 PM, Tom Lane wrote:
>> That plan should not create a tuple hash table, so I think it's almost
>> certain that the plan changed. It might be interesting to remove the
>> pg_statistic rows for the table and then see what plan you get.
> Yeah, that gets
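The experiment Tom suggests boils down to deleting the stored statistics for
the table and re-planning; it needs superuser access, and a later ANALYZE
repopulates the rows. A minimal sketch:

    DELETE FROM pg_statistic
    WHERE  starelid = 'ldata'::regclass;
    -- re-run EXPLAIN on the query, then ANALYZE ldata to restore statistics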
On 03/26/2012 12:20 PM, Greg Stark wrote:
On Mon, Mar 26, 2012 at 4:03 PM, Andrew Dunstan wrote:
TupleHashTable: 619175960 total in 95 blocks; 821528 free
(331 chunks); 618354432 used
I think the plan you showed isn't the plan that's running out of
memory. I think it's runni
On 03/26/2012 12:11 PM, Tom Lane wrote:
Andrew Dunstan writes:
On 03/26/2012 11:18 AM, Tom Lane wrote:
Could we see EXPLAIN output for this query?
Currently it shows:
Limit (cost=19443025.87..19443025.89 rows=10 width=8)
-> Sort (cost=19443025.87..19446451.29 rows=1370168 width=8)
On Mon, Mar 26, 2012 at 4:03 PM, Andrew Dunstan wrote:
> TupleHashTable: 619175960 total in 95 blocks; 821528 free
> (331 chunks); 618354432 used
I think the plan you showed isn't the plan that's running out of
memory. I think it's running out of memory because it's using a Hash
Ag
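A quick way to test Greg's theory that the hash table blew up because the real
number of groups is far beyond what the planner assumed is to count the
distinct keys directly and compare with the estimated row count in the plan
(this is a full scan, so it will be slow on a large table):

    SELECT count(DISTINCT loid) FROM ldata;   -- compare with the planner's group estimate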
Andrew Dunstan writes:
> On 03/26/2012 11:18 AM, Tom Lane wrote:
>> Could we see EXPLAIN output for this query?
> Currently it shows:
> Limit (cost=19443025.87..19443025.89 rows=10 width=8)
> -> Sort (cost=19443025.87..19446451.29 rows=1370168 width=8)
> Sort Key: (max(pageno))
>
On 03/26/2012 11:18 AM, Tom Lane wrote:
Andrew Dunstan writes:
I'm really perplexed as to why this fairly simple query should cause an
out of memory error:
select loid, max(pageno) from ldata group by loid order by 2 desc
limit 10;
Looks like the group by/aggregate step is eating l
Andrew Dunstan writes:
> I'm really perplexed as to why this fairly simple query should cause an
> out of memory error:
> select loid, max(pageno) from ldata group by loid order by 2 desc
> limit 10;
Looks like the group by/aggregate step is eating lots of memory:
> AggCont
hello,
does the problem show up on 2% of all boxes after 2 weeks or so?
we had a similar problem on UNIX as well. it even materialized on 100 identical
boxes (on 2% of them). it pops up randomly and never stops …
i checked some code paths. some of those messages are direct output via stderr
(
I'm not sure if this is a bug, but I have been wrestling with this problem
for a client.
Platform is Windows Server 2003 64-bit, PostgreSQL 8.4.8, 4GB RAM,
running on an Amazon VM.
Shared buffers: 512MB, work_mem: 25MB. There are only a handful of
connections to the database, and no other act