Hi there,
I work on the Postgres performance team at VMware. We recently came across a
dbt2 performance regression from 9.1.6 to 9.2.1. We have done some profiling
and don't see anything obvious, so we would like suggestions from the
community on where to investigate further.
The average
On Tue, Oct 30, 2012 at 4:58 PM, Jeff Janes wrote:
> On Mon, Oct 29, 2012 at 5:11 PM, Catalin Iacob wrote:
>
>> pgbouncer 1.4.2 installed from Ubuntu's packages on the same machine
>> as Postgres. Django connects via TCP/IP to pgbouncer (it does one
>> connection and one transaction per request)
On Tue, Oct 30, 2012 at 3:16 PM, Scott Marlowe wrote:
> On Tue, Oct 30, 2012 at 4:11 PM, Greg Williamson
> wrote:
>> Jeff / Catalin --
>>
>> Jeff Janes wrote:
>>
>>>On Mon, Oct 29, 2012 at 5:11 PM, Catalin Iacob
>>>wrote:
>>>
>>>> pgbouncer 1.4.2 installed from Ubuntu's packages on the same machine
Externalizing the limit has improved the speed a lot. DISTINCT is half a
second faster than GROUP BY.
http://explain.depesz.com/s/vP1
with tmp as (
select distinct tr.nr as tnr
, tr.time_end as tend
, c.id_board as cb
, c.id_board_mini as cbm
, ti.id_test_result as itr
from test_item ti
, test_result tr
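For reference, the full shape of the pattern is roughly this (a sketch only;
the query above is cut off and the table behind the c alias isn't shown, so
the details here are assumed):

-- Externalizing the limit: compute the DISTINCT set in a CTE,
-- then apply ORDER BY and LIMIT outside of it.
with tmp as (
select distinct tr.nr as tnr
, tr.time_end as tend
from test_result tr
)
select tnr, tend
from tmp
order by tend desc
limit 100;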
> But why? Is there a way to force the planner into this?
I don't know enough about the planner to answer the "why",
but the root of the problem seems to be the mis-estimate
for the join between test_result and recipe_version
(1348 instead of 21983 rows).
That makes the planner think that a nested loop join is the cheapest option.
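One way to test that hypothesis (untested sketch; the join condition is
assumed, since the schema isn't shown in this thread) is to disable nested
loops for the session and compare the plans:

-- Session-local experiment: does the planner pick a better join
-- strategy once nested loops are ruled out?
SET enable_nestloop = off;
EXPLAIN ANALYZE
SELECT count(*)
FROM test_result tr
JOIN recipe_version rv ON rv.id = tr.id_recipe_version;  -- join column assumed
RESET enable_nestloop;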
On 10/31/2012 05:55 AM, Vincenzo Melandri wrote:
Indexes on People:
CREATE UNIQUE INDEX people_pkey ON people USING btree (key1, key2);
CREATE INDEX people_pkey_hash_loc ON people USING hash (key1);
CREATE INDEX people_pkey_hash_id ON people USING hash (key2);
I can't say why it would ignore the first index.
I may (or may not) have found the solution: a reindex on the 3 tables fixed
the query plan. Now I plan to reindex only the involved indexes at the start
of the data import procedure.
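A minimal sketch of that step, using the index names from the definitions
above (the index names on the other two tables aren't shown in this thread):

-- Rebuild only the indexes involved in the import, not the whole tables.
REINDEX INDEX people_pkey;
REINDEX INDEX people_pkey_hash_loc;
REINDEX INDEX people_pkey_hash_id;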
On Wed, Oct 31, 2012 at 11:55 AM, Vincenzo Melandri
wrote:
> Hi all :)
>
> I'm here again.
> This time I'll provide more details.
But why? Is there a way to force the planner into this?
Hi all :)
I'm here again.
This time I'll provide more details (explain analyze, data types, and
indexes); hope it will be enough :)
The query that produces a plan I do not understand is the following:
select [some fields from all 3 tables]
from
DATA_SEQUENCES
join SU
This was answered on the list the last time you asked it. You are exceeding a
maximum buffer size. There was an implication that it was related to
converting a string from one encoding to another, which could perhaps be
alleviated by using the same encoding in both client and server, but a more
reliable solution is to split the file into smaller pieces.
Hello
2012/10/31 Mahavir Trivedi :
> Dear friends,
>
> I have an SQL file larger than 1 GB. When I execute it, after some time the
> error "invalid memory alloc request size 100234023 bytes" occurs.
> What is the problem?
There is a hard-coded limit on the size of a single memory request - for
example, a single allocation cannot exceed 1 GB, so one very large statement
or literal in the file can trigger this error.
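If the file consists mostly of huge INSERT statements, one possible workaround
(a sketch; the table name, columns, and file path are hypothetical) is to load
the rows with psql's \copy instead, which streams the data rather than
building one giant statement:

-- Client-side \copy streams rows to the server in small chunks,
-- so no single memory allocation approaches the 1 GB limit.
\copy big_table (col1, col2) from '/tmp/big_table_data.csv' with csv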
Dear friends,
I have an SQL file larger than 1 GB. When I execute it, after some time the
error "invalid memory alloc request size 100234023 bytes" occurs.
What is the problem?
With thanks
Mahavir
Much better...
http://explain.depesz.com/s/uFi
AndyG wrote:
>> Increasing the statistics for test_result.id_recipe_version
>> had no effect?
> I increased the statistics in steps up to 5000 (with vacuum analyse) -
> seems to be as good as it gets.
>
> http://explain.depesz.com/s/z2a
Just out of curiosity, do you get a better plan with enable_nestloop=off?
Albe Laurenz *EXTERN* wrote:
> Increasing the statistics for test_result.id_recipe_version
> had no effect?
>
>> I am going to normalize the table some more before partitioning.
>
> How do you think that partitioning will help?
I increased the statistics in steps up to 5000 (with vacuum analyse) - seems
to be as good as it gets.
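For anyone following along, that experiment looks roughly like this (table and
column names are from the thread; 5000 is the largest value tried above):

-- Raise the per-column statistics target, re-gather statistics,
-- then re-check the row estimate for the join.
ALTER TABLE test_result ALTER COLUMN id_recipe_version SET STATISTICS 5000;
VACUUM ANALYZE test_result;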