2017-11-15 20:58 GMT+01:00 Gunther :
>
> On 11/15/2017 8:12, Pavel Stehule wrote:
>
> There is a wrong plan due to a wrong estimation.
>
> For this query you should penalize the nested loop:
>
> set enable_nestloop to off;
>
> before running this query.
>
>
> You are not the only one with this issue. M
On 11/15/2017 8:12, Pavel Stehule wrote:
There is a wrong plan due to a wrong estimation.
For this query you should penalize the nested loop:
set enable_nestloop to off;
before running this query.
You are not the only one with this issue. May I suggest looking at this
thread a little earlier th
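Pavel's suggestion amounts to a short session-level experiment (a sketch; the actual query is not shown in this thread excerpt):

```sql
-- Discourage nested-loop joins for this session only.
SET enable_nestloop = off;

EXPLAIN ANALYZE
SELECT ...;   -- the slow query goes here

-- Restore the planner default afterwards.
RESET enable_nestloop;
```

The enable_* settings are diagnostic knobs, not a permanent fix; if the plan improves, the underlying estimation problem is still worth chasing.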
On Wed, Nov 15, 2017 at 03:03:39PM +0530, Samir Magar wrote:
> I am having performance issues with one of the queries.
> The query is taking 39 min to fetch 3.5 million records.
>
> I want to reduce that time to 15 mins.
> Could you please suggest something to improve its performance?
> "HashAggregate (cost=4
2017-11-15 13:54 GMT+01:00 Samir Magar :
> please find the EXPLAIN ANALYZE output.
>
> On Wed, Nov 15, 2017 at 3:13 PM, Pavel Stehule
> wrote:
>
>> Hi
>>
>> please send EXPLAIN ANALYZE output.
>>
>> Regards
>>
>> Pavel
>>
>> 2017-11-15 10:33 GMT+01:00 Samir Magar :
>>
>>> Hello,
>>> I am having p
please find the EXPLAIN ANALYZE output.
On Wed, Nov 15, 2017 at 3:13 PM, Pavel Stehule
wrote:
> Hi
>
> please send EXPLAIN ANALYZE output.
>
> Regards
>
> Pavel
>
> 2017-11-15 10:33 GMT+01:00 Samir Magar :
>
>> Hello,
>> I am having performance issues with one of the queries.
>> The query is takin
Hi
please send EXPLAIN ANALYZE output.
Regards
Pavel
2017-11-15 10:33 GMT+01:00 Samir Magar :
> Hello,
> I am having performance issues with one of the queries.
> The query is taking 39 min to fetch 3.5 million records.
>
> I want to reduce that time to 15 mins.
> Could you please suggest something
Hello,
I am having performance issues with one of the queries.
The query is taking 39 min to fetch 3.5 million records.
I want to reduce that time to 15 mins.
Could you please suggest something to improve its performance?
Server configuration:
CPUs = 4
memory = 16 GB
shared_buffers = 3 GB
work_mem = 100MB
eff
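Given a configuration like the one above, the usual first step (a hedged sketch; the 3.5-million-row query itself is not quoted here) is to capture the plan with buffer counts, optionally raising work_mem for the session:

```sql
-- Show actual row counts, timings and buffer traffic for the slow query.
EXPLAIN (ANALYZE, BUFFERS)
SELECT ...;   -- the slow query goes here

-- work_mem can also be raised for this session only before re-testing:
SET work_mem = '256MB';
```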
2017-02-23 17:45 GMT+01:00 Rowan Seymour :
> Not sure what other options we have other than an EAV approach since we
> allow users to define their own attribute types (attribute type is in
> contacts_contactfield, attribute value is in values_value). Would you
> expect modelling that with a JSON c
Not sure what other options we have other than an EAV approach since we
allow users to define their own attribute types (attribute type is in
contacts_contactfield, attribute value is in values_value). Would you
expect modelling that with a JSON column to perform better?
Thanks for the tips!
On 2
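A hedged sketch of the JSON alternative under discussion, assuming PostgreSQL 9.4+ jsonb (the contacts table and field names here are hypothetical, not the poster's schema):

```sql
-- Fold the EAV pairs into one jsonb column on the parent row.
ALTER TABLE contacts ADD COLUMN fields jsonb;
CREATE INDEX contacts_fields_gin ON contacts USING gin (fields);

-- Containment queries on user-defined attributes can then use the GIN index:
SELECT id FROM contacts WHERE fields @> '{"state": "active"}';
```

Whether this beats the two-table EAV layout depends on attribute cardinality and update patterns, so it is worth benchmarking rather than assuming.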
2017-02-23 15:02 GMT+01:00 Rowan Seymour :
> Hi Pavel. That suggestion gets me as far as LIMIT 694 with the fast plan
> then things get slow again. This is now what happens at LIMIT 695:
>
> Limit (cost=35945.78..50034.52 rows=695 width=88) (actual
> time=12852.580..12854.382 rows=695 loops=1)
>
Hi Pavel. That suggestion gets me as far as LIMIT 694 with the fast plan
then things get slow again. This is now what happens at LIMIT 695:
Limit (cost=35945.78..50034.52 rows=695 width=88) (actual
time=12852.580..12854.382 rows=695 loops=1)
Buffers: shared hit=6 read=66689
-> Merge Join (c
2017-02-23 14:11 GMT+01:00 Rowan Seymour :
> Hi guys
>
> I'm a bit stuck on a query that performs fantastically up to a certain
> limit value, after which the planner goes off in a completely different
> direction and performance gets dramatically worse. I'm using PostgreSQL 9.3
>
> You can see all
Hi guys
I'm a bit stuck on a query that performs fantastically up to a certain
limit value, after which the planner goes off in a completely different
direction and performance gets dramatically worse. I'm using PostgreSQL 9.3
You can see all the relevant schemas at http://pastebin.com/PNEqw2id an
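One workaround that era of PostgreSQL allowed for this kind of plan flip (a sketch, not the poster's actual query) is to hide the LIMIT behind an OFFSET 0 subquery, which acted as an optimization fence in 9.x:

```sql
SELECT *
FROM (
    SELECT c.id, c.name      -- hypothetical stand-in for the real query
    FROM   contacts c
    ORDER  BY c.name
    OFFSET 0                 -- optimization fence: the planner cannot push the LIMIT in
) AS fenced
LIMIT 695;
```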
Hi All,
I'm having some trouble improving the timing of a set of queries to a
partitioned table.
Basically, I'm trying to find an index that would be used instead of a
bitmap heap scan when the data is taken from disk. Or in any case,
something that would make the process of retrieving the data
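For inheritance-based partitioning of this vintage, the usual checklist looks like this (a hedged sketch; the table and column names are hypothetical):

```sql
-- Indexes are not inherited: every child partition needs its own.
CREATE INDEX ON measurements_2015_01 (logdate, device_id);

-- Make sure the planner is allowed to prune child tables at all:
SHOW constraint_exclusion;            -- should be 'partition' (the default) or 'on'
SET constraint_exclusion = partition;
```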
2015-01-31 2:40 GMT+01:00 Jim Nasby :
> On 1/25/15 2:03 AM, Pavel Stehule wrote:
>
>> It might not always be an integer, just happens to be so here.
>> Should I try text instead? I don't have to have the case-insensitive
>> matching.
>>
>>
>> text can be better
>>
>
> bytea would be ev
On 1/25/15 2:03 AM, Pavel Stehule wrote:
It might not always be an integer, just happens to be so here.
Should I try text instead? I don't have to have the case-insensitive
matching.
text can be better
bytea would be even better yet, because that will always be a straight
binary
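A sketch of the two variants being compared (hypothetical table; the point is avoiding case-folding and collation overhead when they are not needed):

```sql
-- Plain text key; the "C" collation makes comparisons a plain bytewise memcmp.
CREATE TABLE items (key text COLLATE "C" PRIMARY KEY);

-- bytea sidesteps collations entirely; comparison is always binary.
CREATE TABLE items_b (key bytea PRIMARY KEY);
```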
>I have an events table that records page views and purchases (type = 'viewed'
>or type='purchased'). I have a query that figures out "people who
>bought/viewed this also bought/viewed that".
>
>It worked fine, taking about 0.1 seconds to complete, until a few hours ago
>when it started taking
Hi,
On 25.1.2015 07:38, Joe Van Dyk wrote:
>
> Here's one that's not doing quite as well: http://explain.depesz.com/s/SgT
As Pavel already pointed out, the first problem is this part of the plan:
Seq Scan on events e2 (cost=0.00..120,179.60 rows=4,450,241 width=51)
(actual time=0.014..33,773.370 rows
2015-01-25 8:20 GMT+01:00 Joe Van Dyk :
> On Sat, Jan 24, 2015 at 11:14 PM, Pavel Stehule
> wrote:
>
>>
>>
>> 2015-01-25 7:38 GMT+01:00 Joe Van Dyk :
>>
>>>
>>>
>>> On Sat, Jan 24, 2015 at 10:12 PM, Pavel Stehule >> > wrote:
>>>
Hi
this plan looks good
Regards
P
On Sat, Jan 24, 2015 at 11:14 PM, Pavel Stehule
wrote:
>
>
> 2015-01-25 7:38 GMT+01:00 Joe Van Dyk :
>
>>
>>
>> On Sat, Jan 24, 2015 at 10:12 PM, Pavel Stehule
>> wrote:
>>
>>> Hi
>>>
>>> this plan looks well
>>>
>>> Regards
>>>
>>> Pavel
>>>
>>
>> Here's one that's not quite as well: http://exp
2015-01-25 7:38 GMT+01:00 Joe Van Dyk :
>
>
> On Sat, Jan 24, 2015 at 10:12 PM, Pavel Stehule
> wrote:
>
>> Hi
>>
>> this plan looks well
>>
>> Regards
>>
>> Pavel
>>
>
> Here's one that's not quite as well: http://explain.depesz.com/s/SgT
>
I see a possible issue
(product_id <> '81716'::citext
On Sat, Jan 24, 2015 at 10:12 PM, Pavel Stehule
wrote:
> Hi
>
> this plan looks well
>
> Regards
>
> Pavel
>
Here's one that's not doing quite as well: http://explain.depesz.com/s/SgT
Joe
>
> 2015-01-25 6:45 GMT+01:00 Joe Van Dyk :
>
>> Oops, didn't run vacuum analyze after deleting the events. Her
Hi
this plan looks good
Regards
Pavel
2015-01-25 6:45 GMT+01:00 Joe Van Dyk :
> Oops, didn't run vacuum analyze after deleting the events. Here is another
> 'explain analyze': http://explain.depesz.com/s/AviN
>
> On Sat, Jan 24, 2015 at 9:43 PM, Joe Van Dyk wrote:
>
>> On Sat, Jan 24, 2015 at
Oops, didn't run vacuum analyze after deleting the events. Here is another
'explain analyze': http://explain.depesz.com/s/AviN
On Sat, Jan 24, 2015 at 9:43 PM, Joe Van Dyk wrote:
> On Sat, Jan 24, 2015 at 9:41 PM, Joe Van Dyk wrote:
>
>> I have an events table that records page views and purcha
On Sat, Jan 24, 2015 at 9:41 PM, Joe Van Dyk wrote:
> I have an events table that records page views and purchases (type =
> 'viewed' or type='purchased'). I have a query that figures out "people who
> bought/viewed this also bought/viewed that".
>
> It worked fine, taking about 0.1 seconds to co
I have an events table that records page views and purchases (type =
'viewed' or type='purchased'). I have a query that figures out "people who
bought/viewed this also bought/viewed that".
It worked fine, taking about 0.1 seconds to complete, until a few hours ago
when it started taking hours to c
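The query described is typically a self-join on the events table; a hedged sketch (the column names are guessed from the description, not taken from the poster's schema):

```sql
-- "People who bought product X also bought ..."
SELECT e2.product_id, count(*) AS freq
FROM   events e1
JOIN   events e2
  ON   e2.user_id = e1.user_id
 AND   e2.product_id <> e1.product_id
WHERE  e1.product_id = 'X'
  AND  e1.type = 'purchased'
  AND  e2.type = 'purchased'
GROUP  BY e2.product_id
ORDER  BY freq DESC
LIMIT  10;
```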
2014-10-21 10:57 GMT-02:00 :
>
>
> Hi all,
>
> I'm experimenting with table partitioning through inheritance. I'm testing
> a query as follows:
>
> explain (analyze, buffers)
> select response.id
> from claim.response
> where response.account_id = 4766
> and response.expire_timestamp is null
> and
Hi all,
I'm experimenting with table partitioning through inheritance. I'm testing a
query as follows:
explain (analyze, buffers)
select response.id
from claim.response
where response.account_id = 4766
and response.expire_timestamp is null
and response.create_timestamp >= DATE '2014-08-01'
order by create_t
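A hedged sketch of the inheritance setup this implies (names follow the query above; the CHECK constraint is what lets constraint exclusion skip irrelevant children):

```sql
-- One monthly child, pre-declarative-partitioning style.
CREATE TABLE claim.response_2014_08 (
    CHECK (create_timestamp >= DATE '2014-08-01'
       AND create_timestamp <  DATE '2014-09-01')
) INHERITS (claim.response);

-- Children do not inherit indexes; each needs its own.
CREATE INDEX ON claim.response_2014_08 (account_id, create_timestamp);
```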
-----Original Message-----
From: pgsql-performance-ow...@postgresql.org
[mailto:pgsql-performance-ow...@postgresql.org] On Behalf Of David G Johnston
Sent: Tuesday, September 02, 2014 1:38 PM
To: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] query performance with hstore vs. non-hs
Huang, Suya wrote
> See output of explain (analyze, timing off); the total runtime is close to
> the one with timing enabled.
Calling 43s "close to" 70s doesn't sound right...
> dev=# explain (analyze, timing off) select cha_type, sum(visits) from
> (select (each(visits)).key as cha_type,(each(visits))
.7382.59 rows=371759
width=47) (actual rows=371759 loops=1)
Total runtime: 69521.570 ms
(11 rows)
Thanks,
Suya
From: Pavel Stehule [mailto:pavel.steh...@gmail.com]
Sent: Monday, September 01, 2014 5:07 PM
To: Huang, Suya
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] query performa
...@gmail.com]
> *Sent:* Monday, September 01, 2014 4:22 PM
> *To:* Huang, Suya
> *Cc:* pgsql-performance@postgresql.org
> *Subject:* Re: [PERFORM] query performance with hstore vs. non-hstore
>
>
>
> Hi
>
> In this use case hstore should not help .. there is relative hig
, Suya
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] query performance with hstore vs. non-hstore
Hi
In this use case hstore should not help .. there is relatively high overhead
related to unpacking hstore -- so a classic schema is better.
Hstore should not replace a well-normalized schema
Hi
In this use case hstore should not help .. there is relatively high overhead
related to unpacking hstore -- so a classic schema is better.
Hstore should not replace a well-normalized schema - it should be a
replacement for some semi-normalized structures such as EAV.
Hstore can have some profit from TO
Hi,
I'm tweaking the table layout to get better performance for a query. One table
doesn't use hstore but expands all metrics of cha_type into different rows. The
other table has hstore for the metrics column as cha_type->metrics, so it has
fewer records than the first one.
I would be expecting the query on
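A hedged sketch of the two layouts being compared (the table names here are stand-ins; the hstore query shape follows the one quoted later in the thread):

```sql
-- hstore layout: each row must be unpacked by each() before aggregating.
SELECT cha_type, sum(visits)
FROM  (SELECT (each(visits)).key        AS cha_type,
              (each(visits)).value::int AS visits
       FROM   weekly_hstore) AS unpacked
GROUP  BY cha_type;

-- Expanded layout: plain columns, no unpacking overhead.
SELECT cha_type, sum(visits)
FROM   weekly_plain
GROUP  BY cha_type;
```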
-performance@postgresql.org
Subject: RE: [PERFORM] Query Performance question
On 14 Červenec 2014, 18:02, Magers, James wrote:
> Tomas,
>
> Thank you for the recommendation. In this case, The bitmap scan runs
> quite quickly, however in production where data may or may not be cached
>
On 14 Červenec 2014, 18:02, Magers, James wrote:
> Tomas,
>
> Thank you for the recommendation. In this case, The bitmap scan runs
> quite quickly, however in production where data may or may not be cached
> and at higher volumes I am trying to ensure the process will continue to
> execute efficien
Tomas,
Thank you for the recommendation. In this case, The bitmap scan runs quite
quickly, however in production where data may or may not be cached and at higher
volumes I am trying to ensure the process will continue to execute efficiently
and reduce the impact of the process on other process
On 14 Červenec 2014, 16:00, Magers, James wrote:
> Thomas,
>
> I would have to agree that the current results do indicate that. However,
> I have run this explain analyze multiple times and the timing varies from
> about 4ms to 35ms using the Bitmap Heap Scan. Here is an explain plan
> from Thurs
Thomas,
I would have to agree that the current results do indicate that. However, I
have run this explain analyze multiple times and the timing varies from about
4ms to 35ms using the Bitmap Heap Scan. Here is an explain plan from Thursday
of last week that shows about 21ms. Part of the issu
Magers, James, 14.07.2014 15:18:
> Thank you. I executed the query this morning after disabling the scan types.
>
> I am including links to explain.depesz output for each of the three
> variations that I executed.
>
> indexscan and bitmapscan off: http://explain.depesz.com/s/sIx
> seqscan a
Thomas,
Thank you. I executed the query this morning after disabling the scan types.
I am including links to explain.depesz output for each of the three variations
that I executed.
indexscan and bitmapscan off: http://explain.depesz.com/s/sIx
seqscan and bitmapscan off: http://explain.depes
Magers, James, 14.07.2014 04:20:
> Thank you for your feedback. I am attaching the requested information.
> While I do not think the query is necessarily inefficient, I believe a
> sequential scan would be more efficient.
You can try
set enable_indexscan = off;
set enable_bitmapscan = off;
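These overrides can be scoped to a single transaction so they cannot leak into other sessions' work (a sketch):

```sql
BEGIN;
SET LOCAL enable_indexscan  = off;
SET LOCAL enable_bitmapscan = off;
EXPLAIN ANALYZE SELECT ...;   -- the query under test
ROLLBACK;                     -- the settings revert at transaction end
```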
Tomas,
Thank you for your feedback. I am attaching the requested information. While
I do not think the query is necessarily inefficient, I believe a sequential scan
would be more efficient.
\d member_subscription_d
Table "public.member_subscription_d"
On 14.7.2014 00:55, Magers, James wrote:
> I am using a Pentaho process to access the database and select the
> appropriate information to update the DB tables and records. I am
> trying to select the previous subscription key in order to update the
> fact table for any records that have the previou
I am using a Pentaho process to access the database and select the appropriate
information to update the DB tables and records. I am trying to select the
previous subscription key in order to update the fact table for any records that
have the previous key to have the current subscription key. Th
On Thu, Jun 13, 2013 at 12:49 AM, K P Manoj wrote:
> One of my queries is creating a performance issue on my production server.
> Once I run the query on my parent table with a specific condition (hard-coded
> value) it uses only the proper child table and its index in the explain plan,
> but once I am using table
Hi All
One of my queries is creating a performance issue on my production server.
Once I run the query on my parent table with a specific condition (hard-coded
value) it uses only the proper child table and its index in the explain plan,
but once I am using table conditions (instead of hard-coded values), the query
pla
On 09/06/2012 07:48 AM, Anibal David Acosta wrote:
Using explain analyze I saw that many of my queries run really fast,
less than 1 millisecond; for example the analyze output of a simple
query over a table with 5 million records returns "Total runtime:
0.078 ms"
But the real time is a lot
Using explain analyze I saw that many of my queries run really fast, less
than 1 millisecond; for example the analyze output of a simple query over a
table with 5 million records returns "Total runtime: 0.078 ms"
But the real time is a lot more, about 15 ms; in fact pgadmin shows this
v
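Two effects are worth separating here. Per-node instrumentation can inflate EXPLAIN ANALYZE itself, which can be checked with the TIMING option (PostgreSQL 9.2+); and the "Total runtime" it reports never includes planning or client transfer time, which is often where the extra milliseconds live. A sketch of the first check (table name hypothetical):

```sql
-- A large gap between these two runtimes means the clock calls,
-- not the query, dominate the measurement.
EXPLAIN (ANALYZE, TIMING ON)  SELECT count(*) FROM big_table;
EXPLAIN (ANALYZE, TIMING OFF) SELECT count(*) FROM big_table;
```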
On 30-12-2011 22:29, Tom Lane wrote:
I poked at this a little bit. AFAICS the only potentially relevant
planner change between 9.0.4 and 9.0.5 was the removal of eqjoinsel's
ndistinct-clamping heuristic,
http://git.postgresql.org/gitweb/?p=postgresql.git&a=commitdiff&h=3505862a8d3e3b389ab9263460
On 30-12-2011 19:35, Merlin Moncure wrote:
try this (curious):
create table pos as select n from generate_series(1,32) n;
and swap that for the in-query generate series call. your statistics
in the query are completely off (not 100% sure why), so I'm thinking
of replacing that, since it lies to th
Miguel Silva writes:
> I work for a software company that has its main program installed on
> over 200 clients. This program uses a small local database in
> postgresql. Always installed with the one-click installer and
> postgresql.conf left on default settings. This structure allows us to
>
On Fri, Dec 30, 2011 at 10:39 AM, Miguel Silva wrote:
> Hi all!
>
> I ran into a performance problem a while ago and I've been trying to
> figure out a solution until now. But since I've failed to come up with
> anything conclusive, it's time to ask some help from people with more
> understa
On 30-12-2011 17:40, Tom Lane wrote:
Miguel Silva writes:
But with postgresql version 9.0.5 (in version 9.0.4 all was fine), we
noticed the program was taking longer to start. In fact, in some clients
that had older hardware, it could take around 20 minutes when it usually
takes only a few seco
Miguel Silva writes:
> But with postgresql version 9.0.5 (in version 9.0.4 all was fine), we
> noticed the program was taking longer to start. In fact, in some clients
> that had older hardware, it could take around 20 minutes when it usually
> takes only a few seconds. To make a long story sho
Hi all!
I ran into a performance problem a while ago and I've been trying
to figure out a solution until now. But since I've failed to come up
with anything conclusive, it's time to ask some help from people with
more understanding of how postgresql works.
Here's the big picture.
I wor
Based on my initial hunch that something resulting from all the ALTERS was
making PostgreSQL planner end up with bad plans, I tried a pg_dump and
pg_restore. Now the 'bad' query comes back in 70 seconds (compared to 20
minutes earlier) and the rewritten query still comes back in 2 seconds. So
we wi
Hello,
>
> If that does not help, you'll have to change the query probably. The
> problem is the explain analyze you've provided
> (http://explain.depesz.com/s/MY1) does not match the query from your
> yesterday's post so we can't really help with it.
Thanks for the pointers. I think I posted the
On 4 Září 2011, 20:06, Jayadevan wrote:
> I don't think I understood all that. Anyway, is there a way to fix this -
> either by rewriting the query or by creating an index? The output does
> match
> what I am expecting. It does take more than 10 times the time taken by
> Oracle for the same result
I don't think I understood all that. Anyway, is there a way to fix this -
either by rewriting the query or by creating an index? The output does match
what I am expecting. It does take more than 10 times the time taken by
Oracle for the same result, with PostgreSQL taking more than 20 minutes. I
a
"Kevin Grittner" writes:
> Thanks for posting the query and related schema. I tried working
> through it, but I keep coming back to this sort, and wondering how a
> sort can have 1121 rows as input and 2673340321 rows as output. Does
> anyone have any ideas on what could cause that?
Mergejoin r
Jayadevan M wrote:
> Here is the explain analyze
> http://explain.depesz.com/s/MY1
> PostgreSQL 9.0.4 on x86_64-pc-solaris2.10
> work_mem = 96MB
Thanks for posting the query and related schema. I tried working
through it, but I keep coming back to this sort, and wondering how a
sort can h
Order by ...upper(xyz) - do you have a functional index on these?
--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
Here goes... I think it might be difficult to go through all these
definitions..
PRGMEMACCMST
     Table "public.prgmemaccmst"
 Column |         Type         | Modifiers
--------+----------------------+-----------
 cmpcod | character varying(5) |
Jayadevan M wrote:
>> And the schema of the tables involved, and any indexes on them.
> The details of the tables and indexes may take a bit of effort to
> explain. Will do that.
In psql you can do \d to get a decent summary.
Without seeing the query and the table definitions, it's hard t
>
> A really interesting part is the sort near the bottom -
>
> -> Sort (cost=1895.95..1896.49 rows=215 width=61) (actual
> time=25.926..711784.723 rows=2673340321 loops=1)
> Sort Key: memmst.memshpsta
> Sort Method: quicksort Memory: 206kB
> -> Nested Loop (cost=0.01..1887.62 r
On 31 Srpen 2011, 13:19, Jayadevan M wrote:
> Hello,
>
>> >
>> > Please run EXPLAIN ANALYZE on the query and post that, it's hard to
> say
>> > what's wrong from just the query plan, without knowing where the time
> is
>> > actually spent.
>> Here is the explain analyze
>> http://explain.depesz.com
Missed out looping in community...
On Wed, Aug 31, 2011 at 5:01 PM, Venkat Balaji wrote:
> Could you let us know the tables and columns on which indexes are built?
>
> The query is performing sorting based on the key upper(column) and that is
> where I believe the cost is high.
>
> The 'upper' functio
Hello,
> >
> > Please run EXPLAIN ANALYZE on the query and post that, it's hard to
say
> > what's wrong from just the query plan, without knowing where the time
is
> > actually spent.
> Here is the explain analyze
> http://explain.depesz.com/s/MY1
Going through the url tells me that statis
Hello,
>
> Please run EXPLAIN ANALYZE on the query and post that, it's hard to say
> what's wrong from just the query plan, without knowing where the time is
> actually spent.
Here is the explain analyze
http://explain.depesz.com/s/MY1
Regards,
Jayadevan
Hello,
> Please run EXPLAIN ANALYZE on the query and post that, it's hard to say
> what's wrong from just the query plan, without knowing where the time is
> actually spent. And the schema of the tables involved, and any indexes
> on them. (see also http://wiki.postgresql.org/wiki/SlowQueryQue
Where is the query? And also paste the \d to show the tables and
indexes.
-Sushant.
On Wed, 2011-08-31 at 14:30 +0530, Jayadevan M wrote:
> Hello all,
> I have a query which takes about 20 minutes to execute and retrieves
> 2000-odd records. The explain for the query is pasted here
> http://exp
On 31.08.2011 12:00, Jayadevan M wrote:
Hello all,
I have a query which takes about 20 minutes to execute and retrieves
2000-odd records. The explain for the query is pasted here
http://explain.depesz.com/s/52f
The same query, with similar data structures/indexes and data comes back
in 50 seconds
Hello all,
I have a query which takes about 20 minutes to execute and retrieves
2000-odd records. The explain for the query is pasted here
http://explain.depesz.com/s/52f
The same query, with similar data structures/indexes and data comes back
in 50 seconds in Oracle. We just ported the product t
Thanks for all valuable insights. I decided to drop the idea of adding
additional column and
will just rely on Date column for all ordering.
Tom - thanks for clear answer on the issue I was concerned about.
Maciek,Kevin -
thanks for ideas, hint on generate_series() - I will have to go through cpl
On Thu, Apr 28, 2011 at 12:17 AM, Kevin Grittner
wrote:
>
> Dhimant Patel wrote:
>
> > I am a newcomer to the Postgres world and now using it for some
> > serious (at least for me) projects. I have a need where I am
> > running some analytical + aggregate functions on data where
> > ordering is don
Dhimant Patel wrote:
> I am a newcomer to the Postgres world and now using it for some
> serious (at least for me) projects. I have a need where I am
> running some analytical + aggregate functions on data where
> ordering is done on Date type column.
>
> From my initial read on documentation I b
> This makes me wonder: would it do any good to create an additional column of
> integer type, update it as data gets added, and use this integer column for
> all ordering purposes in my SQL - or should I not hesitate to use the Date
> type straight in my SQL for ordering?
Keep in mind what Mic
Dhimant Patel writes:
> From my initial read on documentation I believe internally a date type is
> represented by an integer type of data. This makes me wonder: would it do any
> good to create an additional column of integer type, update it as data gets
> added, and use this integer column for all
Hi All,
I am a newcomer to the Postgres world and now using it for some serious (at
least for me) projects. I have a need where I am running some analytical +
aggregate functions on data where ordering is done on Date type column.
From my initial read on documentation I believe internally a date t
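A hedged sketch showing the date column doing the ordering work directly, with no shadow integer column (the table is hypothetical):

```sql
CREATE TABLE metrics (obs_date date NOT NULL, value numeric);
CREATE INDEX metrics_obs_date_idx ON metrics (obs_date);

-- Dates compare as cheaply as integers internally, and the index
-- can satisfy the ORDER BY directly.
SELECT obs_date, avg(value)
FROM   metrics
GROUP  BY obs_date
ORDER  BY obs_date;
```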
On Tue, Mar 8, 2011 at 4:24 PM, Tom Lane wrote:
> Robert Haas writes:
>> The reason I thought cross-column correlations might be relevant is
>> that the bitmap index scan on news_visible_from is quite accurate
>> (19976 estimated vs. 19932 actual) and the bitmap index scan on
>> news_visible_to i
On Tue, Mar 8, 2011 at 2:57 PM, Robert Haas wrote:
> On Mon, Mar 7, 2011 at 3:40 PM, Merlin Moncure wrote:
>> On Tue, Feb 22, 2011 at 9:07 PM, Robert Haas wrote:
>>> On Fri, Feb 4, 2011 at 7:08 AM, Ivan Voras wrote:
-> BitmapAnd (cost=1282.94..1282.94
Robert Haas writes:
> The reason I thought cross-column correlations might be relevant is
> that the bitmap index scan on news_visible_from is quite accurate
> (19976 estimated vs. 19932 actual) and the bitmap index scan on
> news_visible_to is tolerably accurate (151 estimated vs. 41 actual)
> bu
On Mon, Mar 7, 2011 at 3:40 PM, Merlin Moncure wrote:
> On Tue, Feb 22, 2011 at 9:07 PM, Robert Haas wrote:
>> On Fri, Feb 4, 2011 at 7:08 AM, Ivan Voras wrote:
>>> -> BitmapAnd (cost=1282.94..1282.94
>>> rows=1430 width=0) (actual time=5.508..5.508 rows=0 loops
On Tue, Feb 22, 2011 at 9:07 PM, Robert Haas wrote:
> On Fri, Feb 4, 2011 at 7:08 AM, Ivan Voras wrote:
>> -> BitmapAnd (cost=1282.94..1282.94
>> rows=1430 width=0) (actual time=5.508..5.508 rows=0 loops=1)
>> -> Bitmap Inde
On Fri, Feb 4, 2011 at 7:08 AM, Ivan Voras wrote:
> -> BitmapAnd (cost=1282.94..1282.94
> rows=1430 width=0) (actual time=5.508..5.508 rows=0 loops=1)
> -> Bitmap Index Scan on
> news_index_layout_id_state (cost=0.00..150.14
Sorry for the misunderstanding: of course not default "normal" settings; shared
buffers, work mem, wal segments and others have been tuned according to
available hardware (e.g. 4 GB, 32 MB, 10 for these settings, respectively). I
meant "planner default settings" in the post.
--
Sent from my Andr
On 04/02/2011 15:44, Greg Smith wrote:
Ivan Voras wrote:
The "vanilla" plan, with default settings is:
Pause here for a second: why default settings? A default PostgreSQL
configuration is suitable for systems with about 128MB of RAM. Since you
say you have "good enough hardware", I'm assuming
Ivan Voras wrote:
The "vanilla" plan, with default settings is:
Pause here for a second: why default settings? A default PostgreSQL
configuration is suitable for systems with about 128MB of RAM. Since
you say you have "good enough hardware", I'm assuming you have a bit
more than that. Th
I'm running all this on a 9.0 server with good enough hardware. The
query is:
SELECT news.id AS news_id
, news.layout_id
, news.news_relation_id
, news.author_id
On Mon, Nov 22, 2010 at 7:12 PM, Humair Mohammed wrote:
> I did some further analysis and here are the results:
> work_mem | response_time
> ---------+-----------------
> 1MB      | 62 seconds
> 2MB      | 2 seconds
> 4MB      | 700 milliseconds
> 8MB      | 550 milliseconds
> In all cases shared_buffers were set to the default value of 32MB. As you
> can
't need this
to be any higher than 8 or 16 MB. Thanks to all for help!
Humair
> Date: Mon, 22 Nov 2010 12:00:15 +0100
> Subject: Re: [PERFORM] Query Performance SQL Server vs. Postgresql
> From: t...@fuzzy.cz
> To: huma...@hotmail.com
> CC: pgsql-performance@postgresql.org
>
>
> I believe you can set work_mem to a different value just for the duration
> of
> a single query, so you needn't have work_mem set so high if for every
> query
> on the system. A single query may well use a multiple of work_mem, so you
> really probably don't want it that high all the time unless
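The per-query override described above looks like this in practice (a sketch):

```sql
BEGIN;
SET LOCAL work_mem = '512MB';   -- generous, but only for this transaction
SELECT ...;                     -- the hash- or sort-heavy query
COMMIT;                         -- work_mem falls back to the global setting
```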
On Sun, Nov 21, 2010 at 10:21 PM, Humair Mohammed wrote:
>
> Correct, the optimizer did not take the settings with the pg_ctl reload
> command. I did a pg_ctl restart and work_mem now displays the updated value.
> I had to bump up all the way to 2047 MB to get the response below (with
> work_mem a
>
>
> Correct, the optimizer did not take the settings with the pg_ctl reload
> command. I did a pg_ctl restart and work_mem now displays the updated
> value. I had to bump up all the way to 2047 MB to get the response below
> (with work_mem at 1024 MB I see 7 seconds response time) and with 2047 M
-> Seq Scan on pivotgood pg (cost=0.00..7537.12 rows=251212 width=134) (actual
   time=0.004..26.242 rows=251212 loops=1)
   Buffers: shared hit=5025
Total runtime: 331.168 ms
Humair
> CC: t...@fuzzy.cz; huma...@hotmail.com; pavel.steh...@gmail.com;
> p
That was a typo:
work_mem = 2GB
shared_buffers = 2GB
> From: pavel.steh...@gmail.com
> Date: Sun, 21 Nov 2010 12:38:43 +0100
> Subject: Re: [PERFORM] Query Performance SQL Server vs. Postgresql
> To: huma...@hotmail.com
> CC: pgsql-performance@postgresql.org
>
> 2010/
l Stehule
>> From: pavel.steh...@gmail.com
>> Date: Sun, 21 Nov 2010 12:38:43 +0100
>> Subject: Re: [PERFORM] Query Performance SQL Server vs. Postgresql
>> To: huma...@hotmail.com
>> CC: pgsql-performance@postgresql.org
>>
>> 2010/11/21 Humair Mohammed :
On Nov 21, 2010, at 12:16 PM, Tom Lane wrote:
> t...@fuzzy.cz writes:
>>> Second, I modified the work_mem setting to 2GB (reloaded config) and I see
>>> a response time of 38 seconds. Results below from EXPLAIN ANALYZE:
>
>> How did you reload the config? Using 'kill -HUP pid'? That should work
>