[PERFORM] Benchmarking postgres on Solaris/Linux

2004-03-22 Thread Subbiah, Stalin
Has anyone done benchmarking tests with Postgres running on Solaris and Linux
(RedHat), assuming both environments have similar hardware, memory, processing
speed, etc.? From reading a few posts here, I gather Linux would outperform
Solaris because Linux is better at kernel caching, which is a key performance
booster for Postgres. What is the preferred OS for a Postgres deployment,
given the option between Linux and Solaris? Likewise, which filesystem should
be used (xfs, ufs, ext3...)? Any pointer to a source of information is
appreciated.

Thanks,
Stalin



Re: [PERFORM] [ADMIN] Benchmarking postgres on Solaris/Linux

2004-03-23 Thread Subbiah, Stalin
We are looking into a Sun V210 (2 x 1 GHz CPU, 2 GB RAM, Solaris 8) vs. a Dell
1750 (2 x 2.4 GHz Xeon, 2 GB RAM, RedHat 3.0). The database will mostly be
write-intensive, and the disks will be on RAID 10. Wondering if 64-bit at
1 GHz vs. 32-bit at 2.4 GHz makes a big difference here.

Thanks!

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] Behalf Of Andrew
Sullivan
Sent: Tuesday, March 23, 2004 9:37 AM
To: '[EMAIL PROTECTED]'
Subject: Re: [PERFORM] [ADMIN] Benchmarking postgres on Solaris/Linux


On Mon, Mar 22, 2004 at 04:05:45PM -0800, Subbiah, Stalin wrote:
> being the key performance booster for postgres.  what is the preferred OS
> for postgres deployment if given an option between linux and solaris. As

One thing this very much depends on is what you're trying to do. 
Suns have a reputation for greater reliability.  While my own
experience with Sun hardware has been rather shy of sterling, I _can_
say that it stands head and shoulders above a lot of the x86 gear you
can get.

If you're planning to use Solaris on x86, don't bother.  Solaris is a
slow, bloated pig compared to Linux, at least when it comes to
managing the largish number of processes that Postgres requires.

If pure speed is what you're after, I have found that 2-way, 32 bit
Linux on P-IIIs compares very favourably to 4 way 64 bit Ultra SPARC
IIs.

A

-- 
Andrew Sullivan  | [EMAIL PROTECTED]
The fact that technology doesn't work is no bar to success in the
marketplace.
--Philip Greenspun



Re: [PERFORM] [ADMIN] Benchmarking postgres on Solaris/Linux

2004-03-23 Thread Subbiah, Stalin
Has anyone done performance benchmark testing with Solaris/SPARC vs.
Linux/Intel? I once read a post here with benchmark results for different
filesystems (xfs, ext3, ext2, ufs, etc.), but I can't find that link anymore
and Google is failing me. Does anyone have the link handy?

Thanks!

-Original Message-
From: Josh Berkus [mailto:[EMAIL PROTECTED]
Sent: Tuesday, March 23, 2004 12:13 PM
To: Matt Clark; Subbiah, Stalin; 'Andrew Sullivan';
[EMAIL PROTECTED]
Subject: Re: [PERFORM] [ADMIN] Benchmarking postgres on Solaris/Linux


Matt, Stalin,

> As for the compute intensive side (complex joins & sorts etc), the Dell
> will most likely beat the Sun by some distance, although what the Sun
> lacks in CPU power it may make up a bit in memory bandwidth/latency.

Personally, I've been unimpressed by Dell/Xeon; I think the Sun might do
better than you think, comparatively.  On all the Dell servers I've used
so far, I've not seen performance that comes even close to the hardware
specs.

-- 
-Josh Berkus
 Aglio Database Solutions
 San Francisco



Re: [PERFORM] [ADMIN] Benchmarking postgres on Solaris/Linux

2004-03-23 Thread Subbiah, Stalin
Yep. Thanks Bill.

-Original Message-
From: Bill Moran [mailto:[EMAIL PROTECTED]
Sent: Tuesday, March 23, 2004 2:10 PM
To: Subbiah, Stalin
Cc: [EMAIL PROTECTED]
Subject: Re: [PERFORM] [ADMIN] Benchmarking postgres on Solaris/Linux


Subbiah, Stalin wrote:
> Has anyone done performance benchmark testing with Solaris/SPARC vs.
> Linux/Intel? I once read a post here with benchmark results for
> different filesystems (xfs, ext3, ext2, ufs, etc.), but I can't find
> that link anymore and Google is failing me. Does anyone have the link
> handy?

If you're talking about the work I did, it's here:
http://www.potentialtech.com/wmoran/ (then follow the link)

Anyway, that should be easily portable to any platform that will run
Postgres, but I don't know how useful it is in comparing two different
platforms.  See the information in the document.  It was intended only to
test disk access speed, and attempts to flood the HDD system with database
work to do.

 
> Thanks!
> 
> -Original Message-
> From: Josh Berkus [mailto:[EMAIL PROTECTED]
> Sent: Tuesday, March 23, 2004 12:13 PM
> To: Matt Clark; Subbiah, Stalin; 'Andrew Sullivan';
> [EMAIL PROTECTED]
> Subject: Re: [PERFORM] [ADMIN] Benchmarking postgres on Solaris/Linux
> 
> Matt, Stalin,
> 
> > As for the compute intensive side (complex joins & sorts etc), the Dell
> > will most likely beat the Sun by some distance, although what the Sun
> > lacks in CPU power it may make up a bit in memory bandwidth/latency.
> 
> Personally, I've been unimpressed by Dell/Xeon; I think the Sun might do
> better than you think, comparatively.  On all the Dell servers I've used
> so far, I've not seen performance that comes even close to the hardware
> specs.
 


-- 
Bill Moran
Potential Technologies
http://www.potentialtech.com



Re: [PERFORM] [ADMIN] Databases Vs. Schemas

2004-03-23 Thread Subbiah, Stalin
> And we also created rules to allow update, delete, and insert on those
> views so that they looked like tables.  The reason we did this is
> because we ran into issues with too many open files during pg_dump when
> we had thousands of tables instead of about 1 hundred tables and
> thousands of views.

Is it because you had a smaller value set for the maximum allowable number
of open file descriptors? What was ulimit -a set to?
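
For reference, the backend's own per-process file limit can be checked from
SQL; max_files_per_process is a real setting, though how it interacts with
the shell's ulimit is platform-specific, so treat this as a sketch:

-- Each backend limits itself to roughly this many open files; pg_dump
-- touching thousands of tables multiplies the pressure on the OS limit.
SHOW max_files_per_process;    -- default is 1000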

> We, however, did have a need to periodically select data from 2 schemas
> at a time, and it was simpler logic than if we needed 2 database
> connections.
> 
> Adam Ruth
> 
> On Mar 22, 2004, at 2:30 PM, Subbiah, Stalin wrote:

>> --sorry to repost, just subscribed to the list. hopefully it gets to
>> the list this time --
>>
>> Hi All,
>>
>> We are evaluating the options for having multiple databases vs.
>> schemas on a single database cluster for a custom-grown app that we
>> developed. Each app installs the same set of tables for each service,
>> and the services could easily number in the thousands. So is it better
>> to have 1000 databases or 1000 schemas in a database cluster? What is
>> the performance overhead of having multiple databases vs. schemas (if
>> any)? I'm leaning towards schemas rather than databases, but I would
>> like to get others' opinions on this. Appreciate your reply.
>>
>> Thanks,
>> Stalin

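
A minimal sketch of the cross-schema select Adam describes; the schema and
table names here are hypothetical:

-- One connection, two schemas: qualify the table names and join.
SELECT a.id, a.total AS total_a, b.total AS total_b
  FROM svc_a.orders a
  JOIN svc_b.orders b ON b.id = a.id;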




[PERFORM] Databases Vs. Schemas

2004-03-26 Thread Subbiah, Stalin
Hi All,

We are evaluating the options for having multiple databases vs. schemas on a
single database cluster for a custom-grown app that we developed. Each app
installs the same set of tables for each service, and the services could
easily number in the thousands. So is it better to have 1000 databases or
1000 schemas in a database cluster? What is the performance overhead of
having multiple databases vs. schemas (if any)? I'm leaning towards schemas
rather than databases, but I would like to get others' opinions on this.
Appreciate your reply.

Thanks,
Stalin
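
A sketch of the schema-per-service layout being weighed here (hypothetical
names); search_path keeps the application's unqualified table names working
per service:

-- One schema per service inside a single database.
CREATE SCHEMA service_0001;
CREATE TABLE service_0001.events (id serial PRIMARY KEY, payload text);

-- A session acting for one service points its search path there, so a
-- plain "events" resolves to that service's copy of the table.
SET search_path TO service_0001;
SELECT count(*) FROM events;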






[PERFORM] Query tuning

2006-08-22 Thread Subbiah, Stalin
Hello All,

This query runs forever and ever. Given the nature of this table (lots of
inserts/deletes/queries), I vacuum it every half hour to keep the dead space
reusable, and run a nightly VACUUM ANALYZE to update the optimizer
statistics. We've got an index on eventtime only. Running the query for the
current day uses an index range scan, and it runs within acceptable time.
Below is the explain output of the query. Does the ORDER BY sequencenum DESC
prevent the limit optimization from being applied?

explain SELECT *
FROM EVENTLOG
WHERE EVENTTIME > '07/23/06 16:00:00'
AND EVENTTIME < '08/22/06 16:00:00'
AND (OBJDOMAINID = 'tzRh39d0d91luNGT1weIUjLvFIcA'
OR OBJID = 'tzRh39d0d91luNGT1weIUjLvFIcA'
OR USERDOMAINID = 'tzRh39d0d91luNGT1weIUjLvFIcA')
ORDER BY EVENTTIME DESC, SEQUENCENUM DESC LIMIT 500 OFFSET 0;

                                   QUERY PLAN
--------------------------------------------------------------------------------
 Limit  (cost=15546930.29..15546931.54 rows=500 width=327)
   ->  Sort  (cost=15546930.29..15581924.84 rows=13997819 width=327)
         Sort Key: eventtime, sequencenum
         ->  Seq Scan on eventlog  (cost=0.00..2332700.25 rows=13997819 width=327)
               Filter: ((eventtime > '2006-07-23 16:00:00'::timestamp without time zone) AND (eventtime < '2006-08-22 16:00:00'::timestamp without time zone) AND (((objdomainid)::text = 'tzRh39d0d91luNGT1weIUjLvFIcA'::text) OR ((objid)::text = 'tzRh39d0d91luNGT1weIUjLvFIcA'::text) OR ((userdomainid)::text = 'tzRh39d0d91luNGT1weIUjLvFIcA'::text)))
(5 rows)

Thanks,
Stalin
Pg version 8.0.1, suse 64bit.
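
One thing worth testing, offered as a sketch rather than something from the
thread: a composite index matching the ORDER BY would let the planner walk
the index backward and stop after 500 rows instead of sorting ~14M (the
index name is hypothetical):

-- Matches ORDER BY eventtime DESC, sequencenum DESC, so a backward
-- index scan can feed the LIMIT directly, with no big sort.
CREATE INDEX ix_eventlog_time_seq ON eventlog (eventtime, sequencenum);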



Re: [PERFORM] Query tuning

2006-08-22 Thread Subbiah, Stalin
Actually these servers will be upgraded to 8.1.4 in a couple of months.

Here you go with explain analyze.

# explain analyze SELECT *
FROM EVENTLOG
WHERE EVENTTIME > '07/23/06 16:00:00' AND EVENTTIME < '08/22/06 16:00:00'
AND (OBJDOMAINID = 'tzRh39d0d91luNGT1weIUjLvFIcA'
OR OBJID = 'tzRh39d0d91luNGT1weIUjLvFIcA'
OR USERDOMAINID = 'tzRh39d0d91luNGT1weIUjLvFIcA')
ORDER BY EVENTTIME DESC, SEQUENCENUM DESC LIMIT 500 OFFSET 500;

                                   QUERY PLAN
--------------------------------------------------------------------------------
 Limit  (cost=15583110.14..15583111.39 rows=500 width=327) (actual time=427771.568..427772.904 rows=500 loops=1)
   ->  Sort  (cost=15583108.89..15618188.88 rows=14031998 width=327) (actual time=427770.504..427771.894 rows=1000 loops=1)
         Sort Key: eventtime, sequencenum
         ->  Seq Scan on eventlog  (cost=0.00..2334535.17 rows=14031998 width=327) (actual time=10.370..190038.764 rows=7699388 loops=1)
               Filter: ((eventtime > '2006-07-23 16:00:00'::timestamp without time zone) AND (eventtime < '2006-08-22 16:00:00'::timestamp without time zone) AND (((objdomainid)::text = 'tzRh39d0d91luNGT1weIUjLvFIcA'::text) OR ((objid)::text = 'tzRh39d0d91luNGT1weIUjLvFIcA'::text) OR ((userdomainid)::text = 'tzRh39d0d91luNGT1weIUjLvFIcA'::text)))
 Total runtime: 437884.134 ms
(6 rows)

-Original Message-
From: Chris [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, August 22, 2006 6:37 PM
To: Subbiah, Stalin
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Query tuning

Subbiah, Stalin wrote:
> Hello All,
> 
> This query runs forever and ever. Given the nature of this table (lots
> of inserts/deletes/queries), I vacuum it every half hour to keep the
> dead space reusable, and run a nightly VACUUM ANALYZE to update the
> optimizer statistics. We've got an index on eventtime only. Running the
> query for the current day uses an index range scan, and it runs within
> acceptable time. Below is the explain output of the query. Does the
> ORDER BY sequencenum DESC prevent the limit optimization from being
> applied?
> 
> explain SELECT *
> FROM EVENTLOG
> WHERE EVENTTIME > '07/23/06 16:00:00'
> AND EVENTTIME < '08/22/06 16:00:00'
> AND (OBJDOMAINID = 'tzRh39d0d91luNGT1weIUjLvFIcA'
> OR OBJID = 'tzRh39d0d91luNGT1weIUjLvFIcA'
> OR USERDOMAINID = 'tzRh39d0d91luNGT1weIUjLvFIcA')
> ORDER BY EVENTTIME DESC, SEQUENCENUM DESC LIMIT 500 OFFSET 0;
> 
>                                    QUERY PLAN
> --------------------------------------------------------------------------------
>  Limit  (cost=15546930.29..15546931.54 rows=500 width=327)
>    ->  Sort  (cost=15546930.29..15581924.84 rows=13997819 width=327)
>          Sort Key: eventtime, sequencenum
>          ->  Seq Scan on eventlog  (cost=0.00..2332700.25 rows=13997819 width=327)
>                Filter: ((eventtime > '2006-07-23 16:00:00'::timestamp without time zone) AND (eventtime < '2006-08-22 16:00:00'::timestamp without time zone) AND (((objdomainid)::text = 'tzRh39d0d91luNGT1weIUjLvFIcA'::text) OR ((objid)::text = 'tzRh39d0d91luNGT1weIUjLvFIcA'::text) OR ((userdomainid)::text = 'tzRh39d0d91luNGT1weIUjLvFIcA'::text)))
> (5 rows)
> 
> Thanks,
> Stalin
> Pg version 8.0.1, suse 64bit.

Firstly you should update to 8.0.8 - because it's in the same stream you
won't need to do a dump/initdb/reload like a major version change, it
should be a simple upgrade.

Can you send explain analyze instead of just explain?

It sounds like you're not analyzing enough. If you're doing lots of
updates/deletes/inserts, then the statistics PostgreSQL uses to choose
whether to do an index scan or something else will quickly be outdated,
and so it'll fall back to a full table scan every time.

Can you set up autovacuum to handle that for you more regularly?
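
On 8.0, autovacuum was still the contrib pg_autovacuum daemon rather than
built in, so a scheduled manual refresh is a reasonable stopgap; a minimal
sketch for the table in this thread:

-- Refresh planner statistics for the hot table (cheap and non-blocking).
ANALYZE eventlog;
-- Reclaim dead row versions without taking exclusive locks.
VACUUM eventlog;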

--
Postgresql & php tutorials
http://www.designmagick.com/



Re: [PERFORM] Query tuning

2006-08-23 Thread Subbiah, Stalin
I get the same plan after running vacuum analyze. Nope, I don't have an
index on objdomainid, objid, or userdomainid. Only eventtime has one.

-Original Message-
From: Chris [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, August 22, 2006 8:06 PM
To: Subbiah, Stalin
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Query tuning

Subbiah, Stalin wrote:
> Actually these servers will be upgraded to 8.1.4 in a couple of months.

Even so, you could get some bad data in there.
http://www.postgresql.org/docs/8.0/static/release.html . Go through the
old release notes and you'll find various race conditions, crashes etc.

> Here you go with explain analyze.
> 
> # explain analyze SELECT *
> FROM EVENTLOG
> WHERE EVENTTIME > '07/23/06 16:00:00' AND EVENTTIME < '08/22/06 16:00:00'
> AND (OBJDOMAINID = 'tzRh39d0d91luNGT1weIUjLvFIcA'
> OR OBJID = 'tzRh39d0d91luNGT1weIUjLvFIcA'
> OR USERDOMAINID = 'tzRh39d0d91luNGT1weIUjLvFIcA')
> ORDER BY EVENTTIME DESC, SEQUENCENUM DESC LIMIT 500 OFFSET 500;
> 
>                                    QUERY PLAN
> --------------------------------------------------------------------------------
>  Limit  (cost=15583110.14..15583111.39 rows=500 width=327) (actual time=427771.568..427772.904 rows=500 loops=1)
>    ->  Sort  (cost=15583108.89..15618188.88 rows=14031998 width=327) (actual time=427770.504..427771.894 rows=1000 loops=1)
>          Sort Key: eventtime, sequencenum
>          ->  Seq Scan on eventlog  (cost=0.00..2334535.17 rows=14031998 width=327) (actual time=10.370..190038.764 rows=7699388 loops=1)
>                Filter: ((eventtime > '2006-07-23 16:00:00'::timestamp without time zone) AND (eventtime < '2006-08-22 16:00:00'::timestamp without time zone) AND (((objdomainid)::text = 'tzRh39d0d91luNGT1weIUjLvFIcA'::text) OR ((objid)::text = 'tzRh39d0d91luNGT1weIUjLvFIcA'::text) OR ((userdomainid)::text = 'tzRh39d0d91luNGT1weIUjLvFIcA'::text)))
>  Total runtime: 437884.134 ms
> (6 rows)

If you analyze the table then run this again what plan does it come back
with?

I can't read explain output properly but I suspect (and I'm sure I'll be
corrected if need be) that the sort step is way out of whack and so is
the seq scan because the stats aren't up to date enough.

Do you have an index on objdomainid, objid and userdomainid (one index
per field) ? I wonder if that will help much.

--
Postgresql & php tutorials
http://www.designmagick.com/



Re: [PERFORM] Query tuning

2006-08-23 Thread Subbiah, Stalin
Changing the limit or offset to a small number doesn't change the plan.
Likewise setting enable_seqscan to false. The queries still take 8-10
minutes to run.

-Original Message-
From: Dave Dutcher [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, August 23, 2006 4:20 PM
To: Subbiah, Stalin
Cc: pgsql-performance@postgresql.org
Subject: RE: [PERFORM] Query tuning

It seems to me that what would work best is an index scan backward on
the eventtime index.  I don't see why that wouldn't work for you; maybe
the planner is just estimating that the seq scan and sort is faster for
some reason.
What does EXPLAIN say if you use a small limit and offset like 10?  Or
what does EXPLAIN say if you first run set enable_seqscan=false;  (If
you get the same plan, then I wouldn't bother running EXPLAIN ANALYZE,
but if you get a different plan I would run EXPLAIN ANALYZE to see if
the new plan is any faster.)
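
Spelled out as a sketch (session-local, so safe to try; the predicates are
abbreviated from the thread's query):

-- Discourage the seq scan for this session only and compare plans.
SET enable_seqscan = off;
EXPLAIN SELECT * FROM eventlog
 WHERE eventtime > '07/23/06 16:00:00'
   AND eventtime < '08/22/06 16:00:00'
 ORDER BY eventtime DESC, sequencenum DESC
 LIMIT 10;
RESET enable_seqscan;   -- restore the default afterwards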



> -Original Message-
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED] On Behalf Of Subbiah,
> Stalin
> Sent: Wednesday, August 23, 2006 1:03 PM
> To: Chris
> Cc: pgsql-performance@postgresql.org
> Subject: Re: [PERFORM] Query tuning
> 
> I get the same plan after running vacuum analyze. Nope, I don't have an
> index on objdomainid, objid, or userdomainid. Only eventtime has one.
> 
> -Original Message-
> From: Chris [mailto:[EMAIL PROTECTED]
> Sent: Tuesday, August 22, 2006 8:06 PM
> To: Subbiah, Stalin
> Cc: pgsql-performance@postgresql.org
> Subject: Re: [PERFORM] Query tuning
> 
> Subbiah, Stalin wrote:
> > Actually these servers will be upgraded to 8.1.4 in a couple of months.
> 
> Even so, you could get some bad data in there.
> http://www.postgresql.org/docs/8.0/static/release.html . Go through
> the old release notes and you'll find various race conditions, crashes
> etc.
> 
> > Here you go with explain analyze.
> >
> > # explain analyze SELECT *
> > FROM EVENTLOG
> > WHERE EVENTTIME > '07/23/06 16:00:00' AND EVENTTIME < '08/22/06 16:00:00'
> > AND (OBJDOMAINID = 'tzRh39d0d91luNGT1weIUjLvFIcA'
> > OR OBJID = 'tzRh39d0d91luNGT1weIUjLvFIcA'
> > OR USERDOMAINID = 'tzRh39d0d91luNGT1weIUjLvFIcA')
> > ORDER BY EVENTTIME DESC, SEQUENCENUM DESC LIMIT 500 OFFSET 500;
> >
> >                                    QUERY PLAN
> > --------------------------------------------------------------------------------
> >  Limit  (cost=15583110.14..15583111.39 rows=500 width=327) (actual time=427771.568..427772.904 rows=500 loops=1)
> >    ->  Sort  (cost=15583108.89..15618188.88 rows=14031998 width=327) (actual time=427770.504..427771.894 rows=1000 loops=1)
> >          Sort Key: eventtime, sequencenum
> >          ->  Seq Scan on eventlog  (cost=0.00..2334535.17 rows=14031998 width=327) (actual time=10.370..190038.764 rows=7699388 loops=1)
> >                Filter: ((eventtime > '2006-07-23 16:00:00'::timestamp without time zone) AND (eventtime < '2006-08-22 16:00:00'::timestamp without time zone) AND (((objdomainid)::text = 'tzRh39d0d91luNGT1weIUjLvFIcA'::text) OR ((objid)::text = 'tzRh39d0d91luNGT1weIUjLvFIcA'::text) OR ((userdomainid)::text = 'tzRh39d0d91luNGT1weIUjLvFIcA'::text)))
> >  Total runtime: 437884.134 ms
> > (6 rows)
> 
> If you analyze the table then run this again what plan does it come
> back with?
> 
> I can't read explain output properly but I suspect (and I'm sure I'll
> be corrected if need be) that the sort step is way out of whack and so
> is the seq scan because the stats aren't up to date enough.
> 
> Do you have an index on objdomainid, objid and userdomainid (one index
> per field)? I wonder if that will help much.
> 
> --
> Postgresql & php tutorials
> http://www.designmagick.com/




[PERFORM] RESEND:Postgres with Sun Cluster HA/Solaris 10

2007-09-05 Thread Subbiah Stalin-XCGF84
Hello All,

We have a Postgres setup on Solaris 10 with Sun Cluster for HA purposes.
2 nodes are configured in the cluster in active-passive mode, with pg_data
stored on external storage. Everything is working as expected; however, when
we either switch the resource group from one node to the other, or restart
the resource group on the primary node, the app fails with "An I/O error
occurred while sending to the backend" and doesn't recover from the db
failover. All queries to the db give the above error after a resource group
restart.

Our app uses Resin container db pooling, with the following HA parameters
set. With the same settings, the app recovers just fine when the database is
configured in non-cluster mode (i.e., no Sun Cluster setup).

<database>
  <jndi-name>jdbc/nbbsDB</jndi-name>
  <driver type="org.postgresql.Driver">
    <url>jdbc:postgresql://db-vip:5432/appdbname</url>
    <user>appusr</user>
    <password>apppass</password>
  </driver>
  <max-connections>100</max-connections>
  <max-idle-time>5m</max-idle-time>
  <max-active-time>6h</max-active-time>
  <max-pool-time>24h</max-pool-time>
  <connection-wait-time>30s</connection-wait-time>
  <max-overflow-connections>0</max-overflow-connections>
  <ping-table>pingtable</ping-table>
  <ping>true</ping>
  <ping-interval>60s</ping-interval>
  <prepared-statement-cache-size>10</prepared-statement-cache-size>
  <spy>false</spy>
</database>

Any pointers to debug this further are greatly appreciated. We are running
Postgres 8.2.4.

Another thing I noticed in pg_log/server.log is that whenever I restart
Postgres I get the below error, even when there is no other Postgres running
on 5432.

LOG:  could not bind IPv6 socket: Cannot assign requested address
HINT:  Is another postmaster already running on port 5432? If not, wait
a few seconds and retry.

Thanks,
Stalin
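
On the IPv6 bind message, one hedged thing to check is what the postmaster
is being asked to listen on; listen_addresses is a real 8.x setting, but
whether restricting it is appropriate under Sun Cluster VIPs should be
verified locally:

-- Show what the postmaster tries to bind.
SHOW listen_addresses;
-- A value of '*' makes it attempt both IPv4 and IPv6 sockets; on a host
-- without a usable IPv6 address the IPv6 bind fails with the hint above.
-- Limiting it in postgresql.conf is one way to silence the message:
--   listen_addresses = '0.0.0.0'   -- IPv4 only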



[PERFORM] Update performance degrades over time

2008-05-14 Thread Subbiah Stalin-XCGF84
Hi All,
 
We are doing some load tests with our application running Postgres 8.2.4.
At times we see updates on a table taking longer (around 11-16 secs) than
the expected sub-second response time. The table in question is updated
constantly throughout the load tests. In checking the table size including
indexes, they seem to be bloated; we got it confirmed after recreating the
table (stats below). We have autovacuum enabled with default parameters. I
thought autovacuum would avoid bloating issues, but it looks like it's not
aggressive enough. Wondering if table/index bloat is causing the update
slowness over time. Any ideas how to troubleshoot this further?

No I/O waits are seen during the load tests, and CPU usage on the server is
about 85% idle. This is a 4-CPU Sun V445 Solaris 10 box attached to SAN
storage.
 
Here is the update statement and table/index/instance stats.
 
shared_buffers=4000MB
max_fsm_pages = 2048000
maintenance_work_mem = 512MB
checkpoint_segments = 128 
effective_cache_size = 4000MB
 
update tablexy set col2=$1,col9=$2, col10=$3,col11=$4,col3=$5 WHERE
ID=$6;
 
Bloated:

        relname        | relowner | relpages | reltuples
 ----------------------+----------+----------+-----------
  tablexy              |       10 |   207423 |    502627
  ix_tablexy_col1_col2 |       10 |    38043 |    502627
  ix_tablexy_col3      |       10 |    13944 |    502627
  ix_tablexy_col4      |       10 |    17841 |    502627
  ix_tablexy_col5      |       10 |    19669 |    502627
  ix_tablexy_col6      |       10 |     3865 |    502627
  ix_tablexy_col7      |       10 |    12359 |    502627
  ix_tablexy_col8_col7 |       10 |    26965 |    502627
  ct_tablexy_id_u1     |       10 |     6090 |    502627
 
Recreating tablexy (compact):

        relname        | relowner | relpages | reltuples
 ----------------------+----------+----------+-----------
  tablexy              |       10 |    41777 |    501233
  ix_tablexy_col3      |       10 |     2137 |    501233
  ix_tablexy_col8_col7 |       10 |     4157 |    501233
  ix_tablexy_col6      |       10 |     1932 |    501233
  ix_tablexy_col7      |       10 |     1935 |    501233
  ix_tablexy_col1_col2 |       10 |     1933 |    501233
  ix_tablexy_col5      |       10 |     2415 |    501233
  ix_tablexy_col6      |       10 |     1377 |    501233
  ct_tablexy_id_u1     |       10 |     3046 |    501233
 
Thanks,
Stalin


Re: [PERFORM] Update performance degrades over time

2008-05-15 Thread Subbiah Stalin-XCGF84
Yes, we are updating one of the indexed timestamp columns, which gets a
unique value on every update. We tried setting
autovacuum_vacuum_scale_factor = 0.1 (down from the default) to make
autovacuum a bit more aggressive; we still see bloat on both the table and
its indexes, though it's creeping up slowly.

Anyway, even with the slower bloating, I still see update performance
degrade, with 15 sec response times captured by setting
log_min_duration_statement. It looks like bloating isn't what's causing the
slower updates. Any help/ideas to tune this are appreciated.

Explain plan seems reasonable for the update statement.

update tablexy set col2=$1,col9=$2, col10=$3,col11=$4,col3=$5 WHERE
ID=$6;

                              QUERY PLAN
------------------------------------------------------------------------
 Index Scan using ct_tablexy_id_u1 on tablexy  (cost=0.00..8.51 rows=1 width=194) (actual time=0.162..0.166 rows=1 loops=1)
   Index Cond: ((id)::text = '32xka8axki8'::text)

Thanks in advance.

Stalin

-Original Message-
From: Jeffrey Baker [mailto:[EMAIL PROTECTED] 
Sent: Thursday, May 15, 2008 6:56 AM
To: Subbiah Stalin-XCGF84; pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Update performance degrades over time

On Wed, May 14, 2008 at 6:31 PM, Subbiah Stalin-XCGF84
[EMAIL PROTECTED] wrote:
> Hi All,
>
> We are doing some load tests with our application running Postgres
> 8.2.4. At times we see updates on a table taking longer (around
> 11-16 secs) than the expected sub-second response time. The table in
> question is updated constantly throughout the load tests. In checking
> the table size including indexes, they seem to be bloated; we got it
> confirmed after recreating the table (stats below). We have autovacuum
> enabled with default parameters. I thought autovacuum would avoid
> bloating issues, but it looks like it's not aggressive enough.
> Wondering if table/index bloat is causing the update slowness over
> time. Any ideas how to troubleshoot this further?

Sometimes it is necessary to not only VACUUM, but also REINDEX.  If your
update changes an indexed column to a new, distinct value, you can
easily get index bloat.

Also, you should check to see if you have any old, open transactions on
the same instance.  If you do, it's possible that VACUUM will have no
beneficial effect.

-jwb
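
A sketch of both checks Jeffrey suggests, using an index name from the
stats above and pg_stat_activity columns as they exist in 8.2:

-- Rebuild one bloated index (blocks writes to the table while it runs,
-- so schedule it outside the load test).
REINDEX INDEX ix_tablexy_col1_col2;

-- Look for long-open transactions that keep VACUUM from reclaiming rows;
-- sessions showing '<IDLE> in transaction' with an old query_start are
-- the usual suspects.
SELECT procpid, usename, current_query, backend_start, query_start
  FROM pg_stat_activity
 ORDER BY query_start;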



Re: [PERFORM] Update performance degrades over time

2008-05-15 Thread Subbiah Stalin-XCGF84
Are there any system catalog views I can check for wait events causing the
slower response times?
Thanks in advance.
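
There is no Oracle-style wait-event view in 8.2, but lock waits can be seen
by joining pg_locks to pg_stat_activity; a sketch with 8.2-era column names:

-- Backends waiting on a lock, and what they are running.
-- (pg_stat_activity.waiting is also true for a blocked backend.)
SELECT a.procpid, a.usename, l.locktype, l.mode, a.current_query
  FROM pg_locks l
  JOIN pg_stat_activity a ON a.procpid = l.pid
 WHERE NOT l.granted;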

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Subbiah
Stalin
Sent: Thursday, May 15, 2008 9:28 AM
To: Jeffrey Baker; pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Update performance degrades over time

Yes, we are updating one of the indexed timestamp columns, which gets a
unique value on every update. We tried setting
autovacuum_vacuum_scale_factor = 0.1 (down from the default) to make
autovacuum a bit more aggressive; we still see bloat on both the table and
its indexes, though it's creeping up slowly.

Anyway, even with the slower bloating, I still see update performance
degrade, with 15 sec response times captured by setting
log_min_duration_statement. It looks like bloating isn't what's causing the
slower updates. Any help/ideas to tune this are appreciated.

Explain plan seems reasonable for the update statement.

update tablexy set col2=$1,col9=$2, col10=$3,col11=$4,col3=$5 WHERE
ID=$6;

                              QUERY PLAN
------------------------------------------------------------------------
 Index Scan using ct_tablexy_id_u1 on tablexy  (cost=0.00..8.51 rows=1 width=194) (actual time=0.162..0.166 rows=1 loops=1)
   Index Cond: ((id)::text = '32xka8axki8'::text)

Thanks in advance.

Stalin

-Original Message-
From: Jeffrey Baker [mailto:[EMAIL PROTECTED]
Sent: Thursday, May 15, 2008 6:56 AM
To: Subbiah Stalin-XCGF84; pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Update performance degrades over time

On Wed, May 14, 2008 at 6:31 PM, Subbiah Stalin-XCGF84
[EMAIL PROTECTED] wrote:
> Hi All,
>
> We are doing some load tests with our application running Postgres
> 8.2.4. At times we see updates on a table taking longer (around
> 11-16 secs) than the expected sub-second response time. The table in
> question is updated constantly throughout the load tests. In checking
> the table size including indexes, they seem to be bloated; we got it
> confirmed after recreating the table (stats below). We have autovacuum
> enabled with default parameters. I thought autovacuum would avoid
> bloating issues, but it looks like it's not aggressive enough.
> Wondering if table/index bloat is causing the update slowness over
> time. Any ideas how to troubleshoot this further?

Sometimes it is necessary to not only VACUUM, but also REINDEX.  If your
update changes an indexed column to a new, distinct value, you can
easily get index bloat.

Also, you should check to see if you have any old, open transactions on
the same instance.  If you do, it's possible that VACUUM will have no
beneficial effect.

-jwb



[PERFORM] Sort performance

2009-01-29 Thread Subbiah Stalin-XCGF84
Hi All,
 
I'm in the process of tuning a query that does a sort on a huge dataset.
With work_mem set to 2MB, I see the sort operation spilling to disk, writing
up to 430MB, and then returning the first 500 rows. Our query is of the
form:

select col1, col2 ... from table where col1 like 'aa%' order by col1 limit
500;

It took 561 secs to complete. Looking at the execution plan, 95% of the time
is spent on the sort vs. the seq scan on the table.

Now if I set work_mem to 500MB (I did this in a psql session, without making
it global) and run the same query, one would think the sort would happen in
memory and not spill to disk. But I still see 430MB written to disk;
however, the query completion time dropped to 351 secs. So work_mem did have
an impact, but I'm wondering why it still writes to disk when it can do it
all in memory.

I'd appreciate it if anyone could shed some light on this.
 
Thanks,
Stalin
 
Env: Sol 10, Pg 8.2.7 64-bit.
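
One way to watch the sorter's decision directly is the trace_sort developer
option (a real GUC in this era of Postgres, though a developer option); the
table and column names below are placeholders, as in the post:

-- Session-local experiment: log whether the sort stays in memory or
-- switches to external tapes (messages go to the server log).
SET work_mem = '500MB';
SET trace_sort = on;
SELECT col1, col2
  FROM some_table
 WHERE col1 LIKE 'aa%'
 ORDER BY col1
 LIMIT 500;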


Re: [PERFORM] Sort performance

2009-01-29 Thread Subbiah Stalin-XCGF84
Here you go.

 Limit  (cost=502843.44..502844.69 rows=501 width=618) (actual time=561397.940..561429.242 rows=501 loops=1)
   ->  Sort  (cost=502843.44..503923.48 rows=432014 width=618) (actual time=561397.934..561429.062 rows=501 loops=1)
         Sort Key: name
         ->  Seq Scan on objects  (cost=0.00..99157.88 rows=432014 width=618) (actual time=0.172..22267.727 rows=64 loops=1)
               Filter: (((domainid)::text = ANY (('{111,SmWCGiRp}'::character varying[])::text[])) AND ((type)::text = 'cpe'::text) AND (upper((name)::text) ~~ 'CPE1%'::text) AND (upper((name)::text) >= 'CPE1'::text) AND (upper((name)::text) < 'CPE2'::text))
 Total runtime: 561429.915 ms
(6 rows)

I haven't tried setting that high a number. I came up with 500MB by
monitoring pgsql_tmp while sort operations were performed. It never went
beyond 450MB. Once it reaches 450MB it spends some cycles before I see the
output. I guess some sort of merge operation happens to get the first 500
records out.

Thanks,
Stalin 
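
Given that the filter actually matched only 64 rows (versus the 432014 the
planner estimated), a hedged option is an expression index with pattern ops
on upper(name), so the LIKE 'CPE1%' test becomes an index scan instead of a
full scan plus sort; the index name is hypothetical, and it helps the ORDER
BY only if the query orders by the same expression:

-- Supports prefix matches (LIKE / ~~) on upper(name).
CREATE INDEX ix_objects_upper_name
    ON objects (upper(name) varchar_pattern_ops);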

-Original Message-
From: Robert Haas [mailto:robertmh...@gmail.com] 
Sent: Thursday, January 29, 2009 3:21 PM
To: Subbiah Stalin-XCGF84
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Sort performance

On Thu, Jan 29, 2009 at 3:15 PM, Subbiah Stalin-XCGF84
ssubb...@motorola.com wrote:
> I'm in the process of tuning a query that does a sort on a huge dataset.
> With work_mem set to 2MB, I see the sort operation spilling to disk,
> writing up to 430MB, and then returning the first 500 rows. Our query
> is of the form:
>
> select col1, col2 ... from table where col1 like 'aa%' order by col1
> limit 500;
>
> It took 561 secs to complete. Looking at the execution plan, 95% of the
> time is spent on the sort vs. the seq scan on the table.
>
> Now if I set work_mem to 500MB (I did this in a psql session, without
> making it global) and run the same query, one would think the sort
> would happen in memory and not spill to disk. But I still see 430MB
> written to disk; however, the query completion time dropped to
> 351 secs. So work_mem did have an impact, but I'm wondering why it
> still writes to disk when it can do it all in memory.
>
> I'd appreciate it if anyone could shed some light on this.

Can you send the EXPLAIN ANALYZE output?

What happens if you set work_mem to something REALLY big, like 5GB?

...Robert



Re: [PERFORM] Sort performance

2009-01-29 Thread Subbiah Stalin-XCGF84
Thanks Greg. You were right. If I set my work_mem to 1GB (yes, I have loads
of memory, but only for testing purposes), then I don't see anything written
to disk. So the in-memory sort requires more memory than the reported
on-disk storage.

Stalin  

-Original Message-
From: Greg Stark [mailto:greg.st...@enterprisedb.com] On Behalf Of
Gregory Stark
Sent: Thursday, January 29, 2009 3:36 PM
To: Robert Haas
Cc: Subbiah Stalin-XCGF84; pgsql-performance@postgresql.org
Subject: Re: Sort performance

Robert Haas robertmh...@gmail.com writes:

> On Thu, Jan 29, 2009 at 3:15 PM, Subbiah Stalin-XCGF84
> ssubb...@motorola.com wrote:
>
>> I see the sort operation spilling to disk, writing up to 430MB and
>> then returning the first 500 rows.
>
>> Now if I set work_mem to 500MB (I did this in a psql session without
>> making it global) and run the same query, one would think the sort
>> would happen in memory and not spill to disk. But I still see 430MB
>> written to disk; however, the query completion time dropped to
>> 351 secs. So work_mem did have an impact, but I'm wondering why it
>> still writes to disk when it can do it all in memory.

The on-disk storage is more compact than the in-memory storage so you
actually need a larger value than the space reported for on-disk storage
to avoid the disk sort entirely. The accounting also isn't perfect; the
on-disk sort still uses some ram, for example.

> What happens if you set work_mem to something REALLY big, like 5GB?

Don't set it larger than the available RAM though -- or you'll quite
possibly get an out-of-memory error.


--
  Gregory Stark
  EnterpriseDB  http://www.enterprisedb.com
  Ask me about EnterpriseDB's PostGIS support!



[PERFORM] Query help

2009-08-03 Thread Subbiah Stalin-XCGF84
All,
 
Not sure what's wrong with the execution plan below, but at times the query
takes 5+ minutes to complete, and after a while it runs within a second or
two.
 
Here is explain analyze out of the query.
 
SELECT OBJECTS.ID, OBJECTS.NAME, OBJECTS.TYPE, OBJECTS.STATUS,
OBJECTS.ALTNAME, OBJECTS.DOMAINID, OBJECTS.ASSIGNEDTO, OBJECTS.USER1,
OBJECTS.USER2, OBJECTS.KEY1, OBJECTS.KEY2, OBJECTS.KEY3, OBJECTS.OUI,
OBJECTS.PRODCLASS, OBJECTS.STATUS2, OBJECTS.LASTMODIFIED,
OBJECTS.LONGDATA, OBJECTS.DATA0, OBJECTS.DATA1
FROM OBJECTS
WHERE OBJECTS.DOMAINID IN ('HY3XGEzC0E9JxRwoXLOLbjNsghEA','3330')
AND OBJECTS.TYPE IN ('cpe')
ORDER BY OBJECTS.LASTMODIFIED DESC LIMIT 501
 
 
                                   QUERY PLAN
--------------------------------------------------------------------------------
 Limit  (cost=0.00..9235.11 rows=501 width=912) (actual time=0.396..2741.803 rows=501 loops=1)
   ->  Index Scan Backward using ix_objects_type_lastmodified on objects  (cost=0.00..428372.71 rows=23239 width=912) (actual time=0.394..2741.608 rows=501 loops=1)
         Index Cond: ((type)::text = 'cpe'::text)
         Filter: ((domainid)::text = ANY (('{HY3XGEzC0E9JxRwoXLOLbjNsghEA,3330}'::character varying[])::text[]))
 Total runtime: 2742.126 ms

 
The table is auto-vacuumed regularly. I have set log_min_messages to debug2,
but nothing stands out during the times when the query took 5+ minutes. Is a
rebuild of the index necessary here?
 
Thanks in Advance,
 
Stalin
 
Pg 8.2.7, Sol10.
 
 
 


Re: [PERFORM] Query help

2009-08-03 Thread Subbiah Stalin-XCGF84
Sure, I can provide those details. I have seen this query running 5+ minutes
for different values of domainID too. It's just that it happens at random
and fixes itself within a few minutes.

Shared buffers = 8G, effective cache size = 4G. Optimizer/autovacuum
settings are at their defaults.

   relname| relpages | reltuples
--+--+---
 ct_objects_id_u1 |11906 |671919
 ix_objects_altname   |13327 |671919
 ix_objects_domainid_name |24714 |671919
 ix_objects_key3  | 9891 |671919
 ix_objects_name  |11807 |671919
 ix_objects_type_lastmodified |38640 |671919
 ix_objects_user1 |20796 |671919
 ix_objects_user2 |20842 |671919
 objects  |   111873 |671919

This database resides on a RAID 1+0 storage with 10 disks (5+5).

Let me know if you need any other information.

Thanks Kevin.

Stalin

-Original Message-
From: Kevin Grittner [mailto:kevin.gritt...@wicourts.gov] 
Sent: Monday, August 03, 2009 12:48 PM
To: Subbiah Stalin-XCGF84; pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Query help

Subbiah Stalin-XCGF84 ssubb...@motorola.com wrote: 
 
> Not sure what's wrong with the execution plan below, but at times the
> query takes 5+ minutes to complete, and after a while it runs within a
> second or two.
 
The plan doesn't look entirely unreasonable for the given query,
although it's hard to be sure of that without seeing the table
definitions.  Given the plan, the times look to be about what I'd expect
for uncached and cached timings.  (That is, on subsequent runs, the data
is sitting in RAM, so you don't need to access the hard
drives.)
 
If the initial run time is unacceptable for your environment, and
there's no way to have the cache primed when it matters, please give
more details on your table layouts, and perhaps someone can make a
useful suggestion.
 
> Pg 8.2.7, Sol10.
 
One quick suggestion -- upgrade your PostgreSQL version if at all
possible.  The latest bug-fix version of 8.2 is currently 8.2.13, and
there are significant performance improvements in 8.3 and the newly
released 8.4.
 
-Kevin



Re: [PERFORM] Query help

2009-08-03 Thread Subbiah Stalin-XCGF84
The server has 32G memory, and it's dedicated to running PG; no other
application is sharing this database. I have checked checkpoints, and they
don't occur during those slow query runtimes. checkpoint_segments is set to
128. Here is a quick snap from vmstat.

 # vmstat 5 5
 kthr  memorypagedisk  faults
cpu
 r b w   swap  free  re  mf pi po fr de sr 1m 1m 1m m1   in   sy   cs us
sy id
 0 0 0 56466032 25908072 59 94 516 13 13 0 0 10 3 59 1  480  443  500  1
1 98
 0 0 0 51377520 20294328 6 8 0 32 32  0  0  0  4  1  0  368  185  361  0
1 99
 0 0 0 56466032 25908064 59 94 516 13 13 0 0 1 10 3 59  480  443  500  1
1 98
 0 0 0 51376984 20294168 57 427 0 16 16 0 0 0  0  1  0  380  781  396  1
1 98
 0 0 0 51376792 20294208 112 1131 2 50 50 0 0 0 0 5  2  398 2210  541  4
3 92 

\d output --

 Table public.objects
Column|Type | Modifiers
--+-+---
 id   | character varying(28)   | not null
 name | character varying(50)   | not null
 altname  | character varying(50)   |
 type | character varying(3)|
 domainid | character varying(28)   | not null
 status   | smallint|
 dbver| integer |
 created  | timestamp without time zone |
 lastmodified | timestamp without time zone |
 assignedto   | character varying(28)   |
 status2  | smallint|
 key1 | character varying(25)   |
 key2 | character varying(25)   |
 key3 | character varying(64)   |
 oui  | character varying(6)|
 prodclass| character varying(64)   |
 user1| character varying(50)   |
 user2| character varying(50)   |
 data0| character varying(2000) |
 data1| character varying(2000) |
 longdata | character varying(1)|
Indexes:
    "ct_objects_id_u1" PRIMARY KEY, btree (id), tablespace "nbbs_index_data"
    "ix_objects_altname" btree (altname), tablespace "nbbs_index_data"
    "ix_objects_domainid_name" btree (domainid, upper(name::text)), tablespace "nbbs_index_data"
    "ix_objects_key3" btree (upper(key3::text)), tablespace "nbbs_index_data"
    "ix_objects_name" btree (upper(name::text) varchar_pattern_ops), tablespace "nbbs_index_data"
    "ix_objects_type_lastmodified" btree (type, lastmodified), tablespace "nbbs_index_data"
    "ix_objects_user1" btree (upper(user1::text)), tablespace "nbbs_index_data"
    "ix_objects_user2" btree (upper(user2::text)), tablespace "nbbs_index_data"

work_mem = 64MB and random_page_cost = 2 on the session gave a similar
execution plan, except that the costs differ due to the changed
random_page_cost.

                                   QUERY PLAN
--------------------------------------------------------------------------------
 Limit  (cost=0.00..5456.11 rows=501 width=912)
   ->  Index Scan Backward using ix_objects_type_lastmodified on objects  (cost=0.00..253083.03 rows=23239 width=912)
         Index Cond: ((type)::text = 'cpe'::text)
         Filter: ((domainid)::text = ANY (('{HY3XGEzC0E9JxRwoXLOLbjNsghEA,3330}'::character varying[])::text[]))
(4 rows)


Given the nature of the ix_objects_type_lastmodified index, I'm wondering if
the index requires a rebuild. I tested rebuilding it in another db, and it
came to 2500 pages as opposed to 38640 pages.

The puzzle is why the same query with the same filters runs fast most of the
time, but at times runs 5+ minutes and then switches back to fast mode. If
it had used a different execution plan than the above, how do I list all
execution plans executed for a given SQL?

Thanks,
Stalin

-Original Message-
From: Kevin Grittner [mailto:kevin.gritt...@wicourts.gov] 
Sent: Monday, August 03, 2009 1:45 PM
To: Subbiah Stalin-XCGF84; pgsql-performance@postgresql.org
Subject: RE: [PERFORM] Query help

Subbiah Stalin-XCGF84 ssubb...@motorola.com wrote:
 
> Shared buffers = 8G, effective cache size = 4G.
 
That is odd; if your shared buffers are at 8G, you must have more than
4G of cache.  How much RAM is used for cache at the OS level? 
Normally you would add that to the shared buffers to get your effective
cache size, or at least take the larger of the two.
 
How much RAM is on this machine in total?  Do you have any other
processes which use a lot of RAM or might access a lot of disk from time
to time?
 
> Let me know if you need any other information.
 
The \d output for the object table, or the CREATE for it and its
indexes, would be good.  Since it's getting through the random reads by
the current plan at the rate of about one every 5ms, I'd say your drive
array is OK.  If you want to make this query faster you've either got to
have the data in cache or it has to have reason to believe that a
different plan is faster.
 
One thing which might help is to boost your work_mem setting

Re: [PERFORM] Query help

2009-08-04 Thread Subbiah Stalin-XCGF84
Thanks for the response kevin.

The DB size is about 30G. The bloat could have been due to recent load
testing that was done; autovacuum wasn't aggressive enough to keep up with
the load testing. I will rebuild those indexes and, if possible, reload the
table itself, as it is bloated too.

Sure, I will collect the necessary stats on the next occurrence of the slow
query.

Stalin

-Original Message-
From: Kevin Grittner [mailto:kevin.gritt...@wicourts.gov] 
Sent: Tuesday, August 04, 2009 8:57 AM
To: Subbiah Stalin-XCGF84; pgsql-performance@postgresql.org
Subject: RE: [PERFORM] Query help

Subbiah Stalin-XCGF84 ssubb...@motorola.com wrote:
 
> The server has 32G memory, and it's dedicated to running PG; no other
> application is sharing this database.
 
It's not likely to help with this particular problem, but it's generally
best to start from a position of letting the optimizer know what it's
really got for resources.  An effective cache size of somewhere around
30GB would probably be best here.
 
> Given the nature of the ix_objects_type_lastmodified index, I'm
> wondering if the index requires a rebuild. I tested rebuilding it in
> another db, and it came to 2500 pages as opposed to 38640 pages.
 
That's pretty serious bloat.  Any idea how that happened?  Have you had
long running database transaction which might have prevented normal
maintenance from working?  If not, you may need more aggressive settings
for autovacuum.  Anyway, sure, try this with the index rebuilt.  If you
don't want downtime, use CREATE INDEX CONCURRENTLY and then drop the old
index.  (You could then rename the new index to match the old, if
needed.)
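
Spelled out, the no-downtime rebuild Kevin describes looks like this (8.2
syntax; the temporary index name is made up):

-- Build a replacement without blocking writers.
CREATE INDEX CONCURRENTLY ix_objects_type_lastmodified_new
    ON objects (type, lastmodified);
-- Swap it in.
DROP INDEX ix_objects_type_lastmodified;
ALTER INDEX ix_objects_type_lastmodified_new
    RENAME TO ix_objects_type_lastmodified;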
 
> The puzzle is why the same query with the same filters runs fast most
> of the time, but at times runs 5+ minutes and then switches back to
> fast mode.
 
It is likely either that something has pushed the relevant data out of
cache before the slow runs, or there is blocking.  How big is this
database?  Can you get a list of pg_stat_activity and pg_locks during an
episode of slow run time?
 
> If it had used a different execution plan than the above, how do I
> list all execution plans executed for a given SQL?
 
It's unlikely that the slow runs are because of a different plan being
chosen.  I was wondering if a better plan might be available, but this
one looks pretty good with your current indexes.  I can think of an
indexing change or two which *might* cause the optimizer to pick a
different plan, but that is far from certain, and without knowing the
cause of the occasional slow runs, it's hard to be sure that the new
plan wouldn't get stalled for the same reasons.
 
If it's possible to gather more data during an episode of a slow run,
particularly the pg_stat_activity and pg_locks lists, run as the
database superuser, it would help pin down the cause.  A vmstat during
such an episode, to compare to a normal one, might also be
instructive.
 
-Kevin



Re: [PERFORM] Query help

2009-08-05 Thread Subbiah Stalin-XCGF84
We have found the problem. Apparently there was a query doing a count on a
45 million row table, run prior to the episode of the slow query, which
definitely pushed the cached data out of memory. Is there a way to assign a
portion of memory to recycling purposes, like in Oracle, so the cached data
doesn't get affected by queries like these?

Stalin
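
A hedged aside on the cache-flushing count(*): when an approximate number is
acceptable, the planner's estimate avoids touching the 45-million-row table
at all (the table name is hypothetical; reltuples is only as fresh as the
last VACUUM/ANALYZE):

-- Approximate row count from the statistics instead of a full scan.
SELECT reltuples::bigint AS approx_rows
  FROM pg_class
 WHERE relname = 'big_audit_table';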

-Original Message-
From: Kevin Grittner [mailto:kevin.gritt...@wicourts.gov] 
Sent: Tuesday, August 04, 2009 8:57 AM
To: Subbiah Stalin-XCGF84; pgsql-performance@postgresql.org
Subject: RE: [PERFORM] Query help

Subbiah Stalin-XCGF84 ssubb...@motorola.com wrote:
 
> The server has 32G memory, and it's dedicated to running PG; no other
> application is sharing this database.
 
It's not likely to help with this particular problem, but it's generally
best to start from a position of letting the optimizer know what it's
really got for resources.  An effective cache size of somewhere around
30GB would probably be best here.
 
> Given the nature of the ix_objects_type_lastmodified index, I'm
> wondering if the index requires a rebuild. I tested rebuilding it in
> another db, and it came to 2500 pages as opposed to 38640 pages.
 
That's pretty serious bloat.  Any idea how that happened?  Have you had
long running database transaction which might have prevented normal
maintenance from working?  If not, you may need more aggressive settings
for autovacuum.  Anyway, sure, try this with the index rebuilt.  If you
don't want downtime, use CREATE INDEX CONCURRENTLY and then drop the old
index.  (You could then rename the new index to match the old, if
needed.)
 
> The puzzle is why the same query with the same filters runs fast most
> of the time, but at times runs 5+ minutes and then switches back to
> fast mode.
 
It is likely either that something has pushed the relevant data out of
cache before the slow runs, or there is blocking.  How big is this
database?  Can you get a list of pg_stat_activity and pg_locks during an
episode of slow run time?
 
> If it had used a different execution plan than the above, how do I
> list all execution plans executed for a given SQL?
 
It's unlikely that the slow runs are because of a different plan being
chosen.  I was wondering if a better plan might be available, but this
one looks pretty good with your current indexes.  I can think of an
indexing change or two which *might* cause the optimizer to pick a
different plan, but that is far from certain, and without knowing the
cause of the occasional slow runs, it's hard to be sure that the new
plan wouldn't get stalled for the same reasons.
 
If it's possible to gather more data during an episode of a slow run,
particularly the pg_stat_activity and pg_locks lists, run as the
database superuser, it would help pin down the cause.  A vmstat during
such an episode, to compare to a normal one, might also be
instructive.
 
-Kevin



[PERFORM] Free memory usage Sol10, 8.2.9

2009-11-03 Thread Subbiah Stalin-XCGF84
All,
 
I'm trying to understand the free memory usage: why it sometimes falls below
17G, and what could be causing it. Any pointers would be appreciated.
 
r...@prod1 # prtconf
System Configuration:  Sun Microsystems  sun4u
Memory size: 32768 Megabytes
 
[postg...@prod1 ~]$ vmstat 5 10
 kthr  memorypagedisk  faults
cpu
 r b w   swap  free  re  mf pi po fr de sr 1m 1m 1m m1   in   sy   cs us
sy id
 0 0 0 51713480 21130304 58 185 325 104 104 0 0 23 3 7 1 488 604  573  1
2 97
 0 0 0 51048768 18523456 6 10 0 192 192 0 0 4  0  3  0  527  753  807  2
1 97
 0 0 0 51713480 21130304 58 185 325 104 104 0 0 1 23 3 7 488 604  573  1
2 97
 0 0 0 51067112 18538472 0 1 0 171 171 0 0  4  8  0  4  522  573  740  2
1 97
 0 0 0 51072744 18542992 0 0 0 187 187 0 0  0 22  0  7  532  657  780  2
1 97
 0 0 0 51069944 18540736 146 1729 3 174 174 0 0 0 9 0 3 526 3227  944  4
5 91
 0 0 0 51065728 18537360 32 33 0 192 192 0 0 0 20 0  3  522 1147  927  3
2 95
 0 0 0 51065728 18537336 0 0 0 190 190 0 0  0 26  0  3  517  628  789  2
1 97
 0 0 0 51065728 18537336 0 0 0 168 168 0 0  0 25  0 11  517  668  810  2
2 96
 0 0 0 51062960 18535152 0 165 2 190 190 0 0 14 29 0 4  552  732  808  2
1 97
 
prstat -am
 
 NPROC USERNAME  SWAP   RSS MEMORY  TIME  CPU
21 postgres 8312M 8300M25% 112:24:15 2.1%
53 root  347M  236M   0.7% 130:52:02 0.1%
 7 daemon708M  714M   2.2%  21:53:05 0.0%
 4 mot  5552K   15M   0.0%   0:00:00 0.0%
 1 smmsp1384K 5480K   0.0%   0:00:59 0.0%


[postg...@prod1]$ ps -eaf | grep postgres | wc -l
24
 
max_connections = 600
shared_buffers = 8000MB 
temp_buffers = 8MB
work_mem = 2MB  
maintenance_work_mem = 256MB
max_fsm_pages = 2048000
max_fsm_relations = 2000
effective_cache_size = 4000MB
 
Thanks,
Stalin
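
To see how much of the 8 GB of shared_buffers is actually occupied (and so
how much of the missing free memory is Postgres rather than the Solaris
filesystem cache), the contrib module pg_buffercache can help, assuming it
is installed:

-- Buffers allocated vs. buffers holding a page; each buffer is 8 kB by
-- default, so buffers_used * 8 kB approximates resident shared cache.
SELECT count(*) AS buffers_total,
       count(relfilenode) AS buffers_used
  FROM pg_buffercache;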