Re: [PERFORM] planner chooses very bad plan

2010-04-12 Thread Pierre C


explain analyze SELECT * FROM "telegrams" WHERE ((recipient_id=508933  
AND recipient_deleted=FALSE) OR (user_id=508933 AND user_deleted=FALSE))  
ORDER BY id DESC LIMIT 10 OFFSET 0


If you need very fast performance on this query, you need to be able to  
use the index for ordering.


Note that the following query will only optimize the first page of results,  
in the case where you want to display BOTH sent and received telegrams.



- Create an index on (recipient_id, id) WHERE NOT recipient_deleted
- Create an index on (user_id, id) WHERE NOT user_deleted
- Drop redundant indexes (recipient_id) and (user_id)
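
A sketch of the corresponding DDL (the index names here are only illustrative;
on a live system you would probably want CREATE INDEX CONCURRENTLY):

CREATE INDEX telegrams_recipient_live_idx
    ON telegrams (recipient_id, id) WHERE NOT recipient_deleted;
CREATE INDEX telegrams_user_live_idx
    ON telegrams (user_id, id) WHERE NOT user_deleted;
-- then drop the old single-column indexes on (recipient_id) and (user_id),
-- whatever they are actually named, e.g.:
-- DROP INDEX telegrams_recipient_id_idx;
-- DROP INDEX telegrams_user_id_idx;

With those in place, the paged query can be written as: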

SELECT * FROM (
SELECT * FROM "telegrams" WHERE recipient_id=508933 AND  
recipient_deleted=FALSE ORDER BY id DESC LIMIT 10

UNION ALL
SELECT * FROM "telegrams" WHERE user_id=508933 AND user_deleted=FALSE  
ORDER BY id DESC LIMIT 10

) AS foo ORDER BY id DESC LIMIT 10;

These indexes will also optimize the queries where you only display the  
inbox or the outbox, in which case the index can be used for ordering on  
any page, because there is no UNION.



--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


[PERFORM] significant slow down with various LIMIT

2010-04-12 Thread Helio Campos Mello de Andrade

Andrey,

- Another idea for your problem is the one Kevin gave in the following message:

##


> SELECT * FROM t_route
>   WHERE t_route.route_type_fk = 1
>   limit 4;
  


This one scanned the t_route table until it found four rows that
matched.  It apparently didn't need to look at very many rows to find
the four matches, so it was fast.



> SELECT * FROM t_route
>   WHERE t_route.route_type_fk =
> (SELECT id FROM t_route_type WHERE type = 2)
>   limit 4;
  


This one came up with an id for a route type that didn't have any
matches in the t_route table, so it had to scan the entire t_route
table.  (Based on your next query, the subquery probably returned
NULL, so there might be room for some optimization here.)  If you had
chosen a route type with at least four matches near the  start of the
route table, this query would have completed quickly.



> SELECT * FROM t_route, t_route_type
>   WHERE t_route.route_type_fk = t_route_type.id
> AND type = 2
>   limit 4;
  


Since it didn't find any t_route_type row which matched, it knew
there couldn't be any output from the JOIN, so it skipped the scan of
the t_route table entirely.

-Kevin
## 



Regards

 Original Message 
From: norn
Subject: Re: [PERFORM] significant slow down with various LIMIT
Date: Thu, 8 Apr 2010 18:13:33 -0700 (PDT)
To: pgsql-performance@postgresql.org

Re: [PERFORM] significant slow down with various LIMIT

2010-04-12 Thread Kevin Grittner
[rearranging to put related information together]
 
norn 
 
Since the LIMIT 3 and LIMIT 4 queries generated exactly the same
plan, the increased time for LIMIT 4 suggests that there are 3
matching rows which are near the end of the index it is scanning, but
the fourth one is much farther in.
 
Since what you're showing suggests that the active portion of your
data is heavily cached, you might benefit from decreasing
random_page_cost, and possibly also seq_page_cost.
 
> 8GB RAM
 
> effective_cache_size = 1536MB
 
> Please also note that this hardware isn't dedicated DB server, but
> also serve as web server and file server.
 
Even with those other uses, you're likely to actually be using 6 GB
or 7 GB for cache.  I'd set effective_cache_size in that range.
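 
A postgresql.conf sketch, assuming an 8 GB machine whose active data is
mostly cached (the exact values are only illustrative):

effective_cache_size = 6144MB   # roughly RAM minus what the web/file server and shared_buffers use
random_page_cost = 2.0          # lowered from the default 4.0 when most reads come from cache
seq_page_cost = 1.0             # the default; lower it too if even sequential reads rarely hit disk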
 
> max_connections = 250
> work_mem = 128MB
 
While probably not related to this problem, that's a dangerous
combination.  What if all 250 connections are active with a query
which uses work_mem memory?  A single connection can actually be
using several work_mem allocations at once.
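To put numbers on it: 250 connections x 128MB comes to roughly 32 GB of
potential work_mem allocations, about four times the 8 GB of physical RAM,
before counting shared_buffers or the OS cache.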
 
> 2 SATA 750GB (pg db installed in software RAID 0)
 
You do realize that if either drive dies  you lose all your data on
that pair of drives, right?  I hope the value of the data and well
tested backup procedures keeps the loss to something which is
acceptable.
 
> I have about 3 million rows in core_object, 1.5 million in
> plugin_plugin_addr and 1.5 million in plugins_guide_address.
> When there were 300,000+ objects queries worked perfectly, but as the
> db grows things get worse...
 
With a relational database, it's not unusual for the most efficient
plan to depend on the quantity of data in the tables.  It is
important that your statistics are kept up-to-date so that plans can
adapt to the changing table sizes or data distributions.  The
effective_cache_size and cost parameters are also used to calculate
the costs of various plans, so adjusting those may help the optimizer
make good choices.
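 
For example, a minimal way to refresh the statistics by hand (autovacuum
normally takes care of this) is:

ANALYZE;                -- the whole database
ANALYZE core_object;    -- or just the tables that have grown a lot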
 
-Kevin


-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] function performs differently with different values

2010-04-12 Thread Robert Haas
On Sat, Apr 10, 2010 at 4:47 PM, Ben Chobot  wrote:
> My understanding is that this generally happens because the plan should be 
> different for the different values, but the first time the function is run it 
> caches the plan for one of the values and will never use the appropriate plan 
> for the second value.

No, it plans based on a sort of "generic value", not the first one you
supply.  The way to get at that plan is:

PREPARE foo AS <query>;
EXPLAIN EXECUTE foo (parameters);
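
For example (table, column and parameter are hypothetical, just to show the shape):

PREPARE foo (int) AS
    SELECT * FROM orders WHERE customer_id = $1 ORDER BY id DESC LIMIT 10;
EXPLAIN EXECUTE foo (42);   -- shows the plan built for the generic parameter
DEALLOCATE foo;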

...Robert

-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] How check execution plan of a function

2010-04-12 Thread Kevin Grittner
"Sabin Coanda"  wrote:
 
> I have just a function returning a cursor based on a single complex
> query. When I check the execution plan of that query it takes
> about 3 seconds. Only when it is used inside the function does it
> freeze.
> 
> This is the problem, and this is the reason I cannot imagine what
> is happening. 
> 
> Also I tried to recreate the function as it was before, when it ran
> in 3 seconds, but I cannot make it run properly now. 
 
You've given three somewhat confusing and apparently contradictory
descriptions of the problem or problems you've had with slow queries
-- all on one thread.  You would probably have better luck if you
start with one particular issue and provide more of the information
suggested here:
 
http://wiki.postgresql.org/wiki/SlowQueryQuestions
 
If we can solve one problem, perhaps the resolution to the others
will become apparent; otherwise follow up with the next.
 
-Kevin

-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] significant slow down with various LIMIT

2010-04-12 Thread norn
On Apr 10, 6:48 am, robertmh...@gmail.com (Robert Haas) wrote:
> On Tue, Apr 6, 2010 at 8:42 PM, norn  wrote:
> > I have some mysterious slow downs with ORDER BY and LIMIT. When LIMIT
> > getting greater than some value (greater than 3 in my case), query
> > takes 4-5 secs instead of 0.25ms. All of the necessary indexes are in
> > place. I have no idea what to do, so any advices are welcome!
>
> > Here my queries and explain analyzes;
>
> > First Query with LIMIT 3 (fast)
> > -
> > explain analyze SELECT core_object.id from "core_object" INNER JOIN
> > "plugins_plugin_addr" ON ("core_object"."id" =
> > "plugins_plugin_addr"."oid_id") INNER JOIN "plugins_guide_address" ON
> > ("plugins_plugin_addr"."address_id" = "plugins_guide_address"."id")
> > WHERE "plugins_guide_address"."city_id" = 4535  ORDER BY
> > "core_object"."id" DESC LIMIT 3;
>
> >  Limit  (cost=0.00..9.57 rows=3 width=4) (actual time=0.090..0.138
> > rows=3 loops=1)
> >   ->  Merge Join  (cost=0.00..1098182.56 rows=344125 width=4) (actual
> > time=0.088..0.136 rows=3 loops=1)
> >         Merge Cond: (plugins_plugin_addr.oid_id = core_object.id)
> >         ->  Nested Loop  (cost=0.00..972804.02 rows=344125 width=4)
> > (actual time=0.056..0.095 rows=3 loops=1)
> >               ->  Index Scan Backward using
> > plugins_plugin_addr_oid_id on plugins_plugin_addr
> > (cost=0.00..52043.06 rows=1621103 width=8) (actual time=0.027..0.032
> > rows=3 loops=1)
> >               ->  Index Scan using plugins_guide_address_pkey on
> > plugins_guide_address  (cost=0.00..0.56 rows=1 width=4) (actual
> > time=0.017..0.018 rows=1 loops=3)
> >                     Index Cond: (plugins_guide_address.id =
> > plugins_plugin_addr.address_id)
> >                     Filter: (plugins_guide_address.city_id = 4535)
> >         ->  Index Scan using core_object_pkey_desc on core_object
> > (cost=0.00..113516.08 rows=3091134 width=4) (actual time=0.026..0.028
> > rows=3 loops=1)
> >  Total runtime: 0.244 ms
> > (10 rows)
>
> > Second Query, the same, but with LIMIT 4 (slow)
> > -
> > explain analyze SELECT core_object.id from "core_object" INNER JOIN
> > "plugins_plugin_addr" ON ("core_object"."id" =
> > "plugins_plugin_addr"."oid_id") INNER JOIN "plugins_guide_address" ON
> > ("plugins_plugin_addr"."address_id" = "plugins_guide_address"."id")
> > WHERE "plugins_guide_address"."city_id" = 4535  ORDER BY
> > "core_object"."id" DESC LIMIT 4;
>
> >  Limit  (cost=0.00..12.76 rows=4 width=4) (actual time=0.091..4436.795
> > rows=4 loops=1)
> >   ->  Merge Join  (cost=0.00..1098182.56 rows=344125 width=4) (actual
> > time=0.089..4436.791 rows=4 loops=1)
> >         Merge Cond: (plugins_plugin_addr.oid_id = core_object.id)
> >         ->  Nested Loop  (cost=0.00..972804.02 rows=344125 width=4)
> > (actual time=0.056..3988.249 rows=4 loops=1)
> >               ->  Index Scan Backward using
> > plugins_plugin_addr_oid_id on plugins_plugin_addr
> > (cost=0.00..52043.06 rows=1621103 width=8) (actual time=0.027..329.942
> > rows=1244476 loops=1)
> >               ->  Index Scan using plugins_guide_address_pkey on
> > plugins_guide_address  (cost=0.00..0.56 rows=1 width=4) (actual
> > time=0.003..0.003 rows=0 loops=1244476)
> >                     Index Cond: (plugins_guide_address.id =
> > plugins_plugin_addr.address_id)
> >                     Filter: (plugins_guide_address.city_id = 4535)
> >         ->  Index Scan using core_object_pkey_desc on core_object
> > (cost=0.00..113516.08 rows=3091134 width=4) (actual
> > time=0.027..284.195 rows=1244479 loops=1)
> >  Total runtime: 4436.894 ms
> > (10 rows)
>
> What do you get with no LIMIT at all?
>
> ...Robert
>
> --
> Sent via pgsql-performance mailing list (pgsql-performa...@postgresql.org)
> To make changes to your 
> subscription:http://www.postgresql.org/mailpref/pgsql-performance

Without using LIMIT the query takes 5-6 seconds, but I need to get only a
couple of the last rows at a cost of 200-300 ms.

-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] Some question

2010-04-12 Thread Ľubomír Varga
Hi, here they are:


select * from t_route_type;

ID;description;type
1;"stojim";0
2;"idem";1




explain analyze
SELECT * FROM t_route
WHERE t_route.route_type_fk = 1
limit 4;

"Limit  (cost=0.00..0.88 rows=4 width=2640) (actual time=23.352..23.360 rows=4 
loops=1)"
"  ->  Seq Scan on t_route  (cost=0.00..120497.00 rows=549155 width=2640) 
(actual time=23.350..23.354 rows=4 loops=1)"
"Filter: (route_type_fk = 1)"
"Total runtime: 23.404 ms"




explain analyze
SELECT * FROM t_route
WHERE t_route.route_type_fk = (SELECT id FROM t_route_type WHERE type = 
2)
limit 4;

"Limit  (cost=1.02..1.91 rows=4 width=2640) (actual 
time=267243.019..267243.019 rows=0 loops=1)"
"  InitPlan"
"->  Seq Scan on t_route_type  (cost=0.00..1.02 rows=1 width=8) (actual 
time=0.006..0.006 rows=0 loops=1)"
"  Filter: ("type" = 2)"
"  ->  Seq Scan on t_route  (cost=0.00..120498.12 rows=545885 width=2640) 
(actual time=267243.017..267243.017 rows=0 loops=1)"
"Filter: (route_type_fk = $0)"
"Total runtime: 267243.089 ms"




explain analyze
SELECT * FROM t_route, t_route_type
WHERE t_route.route_type_fk = t_route_type.id AND
type = 2
limit 4;

"Limit  (cost=0.00..0.96 rows=4 width=2661) (actual time=0.013..0.013 rows=0 
loops=1)"
"  ->  Nested Loop  (cost=0.00..131415.62 rows=545880 width=2661) (actual 
time=0.012..0.012 rows=0 loops=1)"
"Join Filter: (t_route.route_type_fk = t_route_type.id)"
"->  Seq Scan on t_route_type  (cost=0.00..1.02 rows=1 width=21) 
(actual time=0.011..0.011 rows=0 loops=1)"
"  Filter: ("type" = 2)"
"->  Seq Scan on t_route  (cost=0.00..117767.60 rows=1091760 
width=2640) (never executed)"
"Total runtime: 0.054 ms"



So I found a solution. It is the third select, where a join is used instead of 
an inner select to get the ID for some constant from t_route_type.

t_route is a table with routes taken by some car. It has some string columns and 
one geometry column with the line of the travelled path. The type of a route is 
in t_route_type and it can be either the "travelling" or the "standing" type. In 
my select I want to select some routes which are of type "travelling" (type = 1, 
id = 2). It is only a sample select.

Please explain to me why the second query took so long to finish.

Have a nice day.


On Wednesday 07 April 2010 00:11:48 Kevin Grittner wrote:
> Ľubomír Varga wrote:
> > Hi, straight to my "problem":
>
> Please show the exact problem query and the results of running it
> with EXPLAIN ANALYZE, along with the other information suggested
> here:
>
> http://wiki.postgresql.org/wiki/SlowQueryQuestions
>
> -Kevin

-- 
An expert on everything is a bad expert. I try to be the exception that proves 
the rule.

-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] planner chooses very bad plan

2010-04-12 Thread Hannu Krosing
On Sun, 2010-04-11 at 23:12 +0200, Corin wrote:
> Hi,
> 
> I'm having a query where the planner chooses a very bad plan.
> 
> explain analyze SELECT * FROM "telegrams" WHERE ((recipient_id=508933 
> AND recipient_deleted=FALSE) OR (user_id=508933 AND user_deleted=FALSE)) 
> ORDER BY id DESC LIMIT 10 OFFSET 0
> 
> "Limit  (cost=0.00..1557.67 rows=10 width=78) (actual 
> time=0.096..2750.058 rows=5 loops=1)"
> "  ->  Index Scan Backward using telegrams_pkey on telegrams  
> (cost=0.00..156545.47 rows=1005 width=78) (actual time=0.093..2750.052 
> rows=5 loops=1)"
> "Filter: (((recipient_id = 508933) AND (NOT recipient_deleted)) 
> OR ((user_id = 508933) AND (NOT user_deleted)))"
> "Total runtime: 2750.124 ms"

You could check if creating special deleted_x indexes helps

do

CREATE INDEX tgrm_deleted_recipent_index ON telegrams(recipient_id)
 WHERE recipient_deleted=FALSE;

CREATE INDEX tgrm_deleted_user_index ON telegrams(user_id) 
 WHERE user_deleted=FALSE;

(if on live system, use "CREATE INDEX CONCURRENTLY ...")

-- 
Hannu Krosing   http://www.2ndQuadrant.com
PostgreSQL Scalability and Availability 
   Services, Consulting and Training



-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


[PERFORM] Re: [PERFORM] About “context-switching issue on Xeon” test case?

2010-04-12 Thread RD黄永卫
Thank you for your reply!

 

“one of the indicators of that is that context switches per second will 
start to jump up and the machine gets sluggish”

 

--> Here are my database server indicators. This is the VMSTAT log of my 
database server:

 

2010-04-07 04:03:15 procs memory swap io system cpu
2010-04-07 04:03:15  r  b swpd    free   buff   cache si so bi   bo   in     cs us sy id wa
2010-04-07 14:04:27  3  0    0 2361272 272684 3096148  0  0  3 1445  973  14230  7  8 84  0
2010-04-07 14:05:27  2  0    0 2361092 272684 3096220  0  0  3 1804 1029  31852  8 10 81  1
2010-04-07 14:06:27  1  0    0 2362236 272684 3096564  0  0  3 1865 1135  19689  9  9 81  0
2010-04-07 14:07:27  1  0    0 2348400 272720 3101836  0  0  3 1582 1182 149461 15 17 67  0
2010-04-07 14:08:27  3  0    0 2392028 272840 3107600  0  0  3 3093 1275 203196 24 23 53  1
2010-04-07 14:09:27  3  1    0 2386224 272916 3107960  0  0  3 2486 1331 193299 26 22 52  0
2010-04-07 14:10:27 34  0    0 2332320 272980 3107944  0  0  3 1692 1082 214309 24 22 54  0
2010-04-07 14:11:27  1  0    0 2407432 273028 3108092  0  0  6 2770 1540  76643 29 13 57  1
2010-04-07 14:12:27  9  0    0 2358968 273104 3108388  0  0  7 2639 1466  10603 22  6 72  1

 

My postgres version: 8.1.3; 

My OS version: Linux version 2.4.21-47.ELsmp (Red Hat Linux 3.2.3-54)

My CPU:

processor   : 7

vendor_id   : GenuineIntel

cpu family  : 15

model   : 6

model name  : Intel(R) Xeon(TM) CPU 3.40GHz

stepping: 8

cpu MHz : 3400.262

cache size  : 1024 KB

physical id : 1

 

 

I don't know what caused the “context-switching” storm.

How should I investigate the real reason?

Could you please give me some advice?

 

Thank you !

 

-----Original Message-----
From: Scott Marlowe [mailto:scott.marl...@gmail.com] 
Sent: April 10, 2010 13:05
To: Greg Smith
Cc: RD黄永卫; pgsql-performance@postgresql.org
Subject: Re: [PERFORM] About “context-switching issue on Xeon” test case?

 

2010/4/9 Greg Smith :
> RD黄永卫 wrote:
>> Anybody have the test case of “context-switching issue on Xeon” from
>> Tom Lane?
>
> That takes me back:
> http://archives.postgresql.org/pgsql-performance/2004-04/msg00280.php
>
> That's a problem seen on 2004 era Xeon processors, and with PostgreSQL
> 7.4. I doubt it has much relevance nowadays, given a) that whole area of
> the code was rewritten for PostgreSQL 8.1, and b) today's Xeons are
> nothing like 2004's Xeons.

It's important to appreciate that all the improvements in scalability for
Xeons, Opterons, and everything else have mostly just moved the point on
the graph where you start doing more context switching than work, and
performance falls off, further to the right.  The same way that
(sometimes) throwing more cores at a problem can help.  For most
office-sized pgsql servers there's still a real possibility of a machine
getting slammed, and one of the indicators of that is that context
switches per second start to jump up and the machine gets sluggish.

For 2 sockets Intel rules the roost.  I'd imagine AMD's much faster bus
architecture for >2 sockets would make them the winner, but I haven't had
a system like that to test, either Intel or AMD.

 



[PERFORM] How to diagnose a “context-switching” storm problem?

2010-04-12 Thread RD黄永卫
Hi,

My database server got sluggish suddenly, so I checked vmstat, as below:

 

2010-04-07 04:03:15 procs memory swap io system cpu
2010-04-07 04:03:15  r  b swpd    free   buff   cache si so bi   bo   in     cs us sy id wa
2010-04-07 14:04:27  3  0    0 2361272 272684 3096148  0  0  3 1445  973  14230  7  8 84  0
2010-04-07 14:05:27  2  0    0 2361092 272684 3096220  0  0  3 1804 1029  31852  8 10 81  1
2010-04-07 14:06:27  1  0    0 2362236 272684 3096564  0  0  3 1865 1135  19689  9  9 81  0
2010-04-07 14:07:27  1  0    0 2348400 272720 3101836  0  0  3 1582 1182 149461 15 17 67  0
2010-04-07 14:08:27  3  0    0 2392028 272840 3107600  0  0  3 3093 1275 203196 24 23 53  1
2010-04-07 14:09:27  3  1    0 2386224 272916 3107960  0  0  3 2486 1331 193299 26 22 52  0
2010-04-07 14:10:27 34  0    0 2332320 272980 3107944  0  0  3 1692 1082 214309 24 22 54  0
2010-04-07 14:11:27  1  0    0 2407432 273028 3108092  0  0  6 2770 1540  76643 29 13 57  1
2010-04-07 14:12:27  9  0    0 2358968 273104 3108388  0  0  7 2639 1466  10603 22  6 72  1

 

   My postgres version: 8.1.3; 

My OS version: Linux version 2.4.21-47.ELsmp (Red Hat Linux 3.2.3-54)

My CPU:

processor   : 7

vendor_id   : GenuineIntel

cpu family  : 15

model   : 6

model name  : Intel(R) Xeon(TM) CPU 3.40GHz

stepping: 8

cpu MHz : 3400.262

cache size  : 1024 KB

physical id : 1

 

 

I don't know what caused the “context-switching” storm.

How should I investigate the real reason?

Could you please give me some advice?

 

Best regards,

Ray Huang

 

 



Re: [PERFORM] significant slow down with various LIMIT

2010-04-12 Thread norn
Kevin,
I appreciate your help very much!

> Since the LIMIT 3 and LIMIT 4 queries generated exactly the same
> plan, the increased time for LIMIT 4 suggests that there are 3
> matching rows which are near the end of the index it is scanning, but
> the fourth one is much farther in.
Yes, you are right. I checked the ids of the rows and found that the problem
occurs when one of the ids is 1377077 and the next one is 132604.
The table has since been modified with several new rows, and the problem now
happens between LIMIT 4 and 5, which is further evidence that you are right!

Following your advice I set the following:
effective_cache_size=6144MB
random_page_cost=0.25
seq_page_cost = 0.25
max_connections = 50 # thanks for pointing that out! I don't really need so
many

> > 2 SATA 750GB (pg db installed in software RAID 0)
>
> You do realize that if either drive dies  you lose all your data on
> that pair of drives, right?  I hope the value of the data and well
> tested backup procedures keeps the loss to something which is
> acceptable.
Thanks for the attention! I already have regular backup procedures in place,
so there is no extra risk related to drive failure...

I restarted PostgreSQL with the new settings and got no performance
improvement in this particular query...
Do you have any idea how much random_page_cost and seq_page_cost should
be decreased?
I should also note that I ran REINDEX DATABASE while trying to solve the
problem by myself... it didn't help...


-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


[PERFORM] Re: [PERFORM] How to diagnose a “context-switching” storm problem?

2010-04-12 Thread Sergey Konoplev
2010/4/12 RD黄永卫 :
> I don't know what caused the "context-switching" storm.
>
> How should I investigate the real reason?
>
> Could you please give me some advice?

It might be because of cascading locks so try to monitor them when it happens.

You may find this query useful:

SELECT
granted,
count(1) AS locks,
pid,
now() - xact_start AS xact_age,
now() - query_start AS query_age,
current_query
FROM
pg_locks AS l
LEFT JOIN pg_stat_activity AS a ON
pid = procpid
GROUP BY 1, 3, 4, 5, 6
ORDER BY 1 DESC, 2 DESC
-- ORDER BY 4 DESC
LIMIT 100;


-- 
Sergey Konoplev

Blog: http://gray-hemp.blogspot.com /
Linkedin: http://ru.linkedin.com/in/grayhemp /
JID/GTalk: gray...@gmail.com / Skype: gray-hemp / ICQ: 29353802

-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] significant slow down with various LIMIT

2010-04-12 Thread Kevin Grittner
norn  wrote:
 
> I restarted PostgreSQL with the new settings and got no performance
> improvement in this particular query...
 
The cost settings help the optimizer make good decisions about plan
choice.  I guess I don't have much reason to believe, at this point,
that there is a better plan for it to choose for this query.  Do you
think you see one?  What would that be?  (We might be able to force
that plan and find out if you're right, which can be a valuable
diagnostic step, even if the way it gets forced isn't a
production-quality solution.)
 
Are you able to share the table descriptions?  (That might help us
suggest an index or some such which might help.)
 
> Do you have ideas how much random_page_cost and seq_page_cost
> should be decreased?
 
It really depends on how much of your active data set is cached.  If
it is effectively fully cached, you might want to go to 0.01 for
both (or even lower).  Many of our databases perform best with
seq_page_cost = 1 and random_page_cost = 2.  With some, either of
those "extremes" causes some queries to optimize poorly, and we've
had luck with 0.3 and 0.5.  This is one worth testing with your
workload, because you can make some queries faster at the expense of
others; sometimes it comes down to which needs better response time
to keep your users happy.
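 
A low-risk way to experiment is to change the costs for a single session and
re-run the problem query, for example (the values are just the ones mentioned
above):

SET seq_page_cost = 0.3;
SET random_page_cost = 0.5;
EXPLAIN ANALYZE <the slow query here>;   -- re-check the chosen plan and timing
RESET seq_page_cost;
RESET random_page_cost;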
 
-Kevin

-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] How to diagnose a “context-switching” storm problem?

2010-04-12 Thread Kevin Grittner
RD黄永卫 wrote: 
 
> My database server got sluggish suddenly
 
> [vmstat output showing over 200,000 context switches per second]
 
>My postgres version: 8.1.3;
 
Upgrading should help.  Later releases are less vulnerable to this.
 
> Could you please give me some advice ?
 
A connection pooler can often help.  (e.g., pgpool or pgbouncer)
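 
A minimal pgbouncer.ini sketch, assuming the pooler runs next to the database
and the application connects to port 6432 instead of 5432 (all names and
numbers are illustrative):

[databases]
mydb = host=127.0.0.1 port=5432 dbname=mydb

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = session
max_client_conn = 500
default_pool_size = 20

Session pooling is the safer default; transaction pooling gives more headroom
but breaks session state such as prepared statements.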
 
-Kevin

-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] Does the psql executable support a "fetch many" approach when dumping large queries to stdout?

2010-04-12 Thread Bruce Momjian
Pierre C wrote:
> > Does the psql executable have any ability to do a "fetch many", using a  
> > server-side named cursor, when returning results? It seems like it tries  
> > to retrieve the query entirely to local memory before printing to  
> > standard out.
> 
> I think it grabs the whole result set to calculate the display column  
> widths. I think there is an option to tweak this but I don't remember which;  
> have a look at the psql commands (\?), formatting section.

See the FETCH_COUNT variable mentioned in the psql manual page.
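
For example, inside psql (the table name is made up):

\set FETCH_COUNT 1000
SELECT * FROM very_big_table;

With FETCH_COUNT set, psql retrieves and prints the result in batches of that
many rows through a cursor, so column widths are computed per batch rather
than for the whole result.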

-- 
  Bruce Momjian  http://momjian.us
  EnterpriseDB http://enterprisedb.com

-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance