Re: [PERFORM] How to ENABLE SQL capturing???

2007-08-10 Thread smiley2211

Hello all,

I have ENABLED 'log_min_duration_statement = 100' but I can't figure
out WHERE it's writing the commands to ... I have it set to 'syslogs' but
this file is 0 bytes :confused:

Should I set other parameters in my postgresql.conf file???

Thanks...Michelle


Bryan Murphy-3 wrote:
 
 we currently have logging enabled for all queries over 100ms, and keep
 the last 24 hours of logs before we rotate them.  I've found this tool
 very helpful in diagnosing new performance problems that crop up:
 
 http://pgfouine.projects.postgresql.org/
 
 Bryan
 
 On 8/8/07, Steinar H. Gunderson [EMAIL PROTECTED] wrote:
 On Wed, Aug 08, 2007 at 01:02:24PM -0700, smiley2211 wrote:
  I am trying to enable capturing of the submitted code via an
  application...how do I do this in Postgres?  Performance is SLOW on my
  server and I have autovacuum enabled as well as rebuilt indexes...what
  else should be looked at?

 Try log_min_duration_statement = 100 in postgresql.conf; it will show all
 statements that take more than 100ms. Set to 0 to log _all_ statements, or
 -1 to turn the logging back off.

 /* Steinar */
 --
 Homepage: http://www.sesse.net/


 
 
 



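For anyone hitting the same problem, a minimal sketch of the postgresql.conf
settings involved (8.x parameter names; the facility and ident values here
are only illustrative):

    log_min_duration_statement = 100   # log statements taking >= 100 ms; 0 = all, -1 = off
    log_destination = 'syslog'         # note: 'syslog', not 'syslogs'
    syslog_facility = 'LOCAL0'         # which syslog facility to write under
    syslog_ident = 'postgres'          # tag to grep for in the syslog output

With a stock Linux syslogd configuration these messages typically end up in
/var/log/messages (see Jeff Frost's reply later in this digest); reload the
server (pg_ctl reload) after editing.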


Re: [PERFORM] CLOG Patch

2007-08-10 Thread Jignesh K. Shah
I tried with CLOG at 24 also and I got linear performance up to 1250 users,
after which it started to tank. 32 got us to 1350 users before some
other bottleneck overtook it.



Based on what Tom said earlier, it might then make sense to make it a
tunable with a default of 8, but something one can change for a high
number of users.



Thanks.
Regards,
Jignesh


Simon Riggs wrote:

On Fri, 2007-08-03 at 16:09 -0400, Jignesh K. Shah wrote:

  
This patch seems to work well (with both the 32 and 64 values, but not
with 16 or the default 8).



Could you test at 24 please also? Tom has pointed out the additional
cost of setting this higher, even in workloads that don't benefit from
the I/O-induced contention reduction.

  

Is there a way we can integrate this in 8.3?



I just replied to Josh's thread on -hackers about this.

  
This will improve out-of-the-box performance quite a bit for a high number
of users (at least 30% in my OLTP test).



Yes, that's good. Will this have a dramatic effect on a particular
benchmark, or for what reason might we need this? Tom has questioned the
use case here, so I think it would be good to explain a little more for
everyone. Thanks.

  

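For readers without the patch at hand, the number being varied here is a
compile-time constant in the backend, not a GUC; a rough sketch of the
8.2-era definition (exact header location and comment vary by branch):

    /* src/include/access/clog.h (8.2-era; illustrative) */
    #define NUM_CLOG_BUFFERS  8   /* the patch under test raises this to 16/24/32/64 */

Making it a tunable, as suggested above, would mean replacing the constant
with a setting read when shared memory is sized at server start.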



Re: [PERFORM] Help optimize view

2007-08-10 Thread Relyea, Mike
Oops.  Realized I posted the wrong SQL and EXPLAIN ANALYZE results.
Also forgot to mention that my server has 1.5 GB memory.


SELECT PrintSamples.MachineID, PrintSamples.PrintCopyID,
       tblColors.ColorID, avg(ParameterValues.ParameterValue) AS Mottle_NMF
  FROM AnalysisModules
  JOIN (tblColors
        JOIN (tblTPNamesAndColors
              JOIN PrintSamples
                ON tblTPNamesAndColors.TestPatternName::text = PrintSamples.TestPatternName::text
              JOIN (DigitalImages
                    JOIN PrintSampleAnalyses
                      ON DigitalImages.ImageID = PrintSampleAnalyses.ImageID
                    JOIN (ParameterNames
                          JOIN (Measurements
                                JOIN ParameterValues
                                  ON Measurements.MeasurementID = ParameterValues.MeasurementID)
                            ON ParameterNames.ParameterID = ParameterValues.ParameterID)
                      ON PrintSampleAnalyses.psaID = Measurements.psaID)
                ON PrintSamples.PrintSampleID = DigitalImages.PrintSampleID)
          ON tblColors.ColorID = tblTPNamesAndColors.ColorID)
    ON AnalysisModules.MetricID = Measurements.MetricID
 GROUP BY PrintSamples.MachineID, PrintSamples.PrintCopyID,
          tblColors.ColorID, AnalysisModules.AnalysisModuleName,
          ParameterNames.ParameterName, PrintSamples.TestPatternName
HAVING AnalysisModules.AnalysisModuleName::text = 'NMF'::text
   AND ParameterNames.ParameterName::text = 'NMF'::text
   AND tblColors.ColorID <> 3
   AND PrintSamples.TestPatternName::text ~~ 'IQAF-TP8%'::text;


QUERY PLAN
HashAggregate  (cost=519801.96..519898.00 rows=7683 width=70) (actual time=106219.710..106249.456 rows=14853 loops=1)
  ->  Hash Join  (cost=286101.76..519667.51 rows=7683 width=70) (actual time=50466.513..106111.635 rows=15123 loops=1)
        Hash Cond: (Measurements.MetricID = AnalysisModules.MetricID)
        ->  Hash Join  (cost=286099.98..519260.45 rows=87588 width=61) (actual time=50466.417..106055.182 rows=15123 loops=1)
              Hash Cond: (ParameterValues.MeasurementID = Measurements.MeasurementID)
              ->  Nested Loop  (cost=8054.81..238636.75 rows=454040 width=21) (actual time=143.017..55178.583 rows=289724 loops=1)
                    ->  Seq Scan on ParameterNames  (cost=0.00..1.94 rows=1 width=17) (actual time=0.012..0.027 rows=1 loops=1)
                          Filter: ((ParameterName)::text = 'NMF'::text)
                    ->  Bitmap Heap Scan on ParameterValues  (cost=8054.81..231033.70 rows=608089 width=12) (actual time=142.986..54432.650 rows=289724 loops=1)
                          Recheck Cond: (ParameterNames.ParameterID = ParameterValues.ParameterID)
                          ->  Bitmap Index Scan on PVParameterID_idx  (cost=0.00..7902.79 rows=608089 width=0) (actual time=109.178..109.178 rows=289724 loops=1)
                                Index Cond: (ParameterNames.ParameterID = ParameterValues.ParameterID)
              ->  Hash  (cost=259861.12..259861.12 rows=1454724 width=48) (actual time=50306.950..50306.950 rows=961097 loops=1)
                    ->  Hash Join  (cost=8139.75..259861.12 rows=1454724 width=48) (actual time=971.910..48649.190 rows=961097 loops=1)
                          Hash Cond: (Measurements.psaID = PrintSampleAnalyses.psaID)
                          ->  Seq Scan on Measurements  (cost=0.00..199469.09 rows=7541009 width=12) (actual time=0.047..35628.599 rows=7539838 loops=1)
                          ->  Hash  (cost=7949.67..7949.67 rows=15206 width=44) (actual time=971.734..971.734 rows=18901 loops=1)
                                ->  Hash Join  (cost=5069.24..7949.67 rows=15206 width=44) (actual time=590.003..938.744 rows=18901 loops=1)
                                      Hash Cond: (PrintSampleAnalyses.ImageID = DigitalImages.ImageID)
                                      ->  Seq Scan on PrintSampleAnalyses  (cost=0.00..2334.25 rows=78825 width=8) (actual time=0.021..130.335 rows=78859 loops=1)
                                      ->  Hash  (cost=4879.10..4879.10 rows=15211 width=44) (actual time=589.940..589.940 rows=18901 loops=1)
                                            ->  Hash Join  (cost=2220.11..4879.10 rows=15211 width=44) (actual time=168.307..557.675 rows=18901 loops=1)
                                                  Hash Cond: (DigitalImages.PrintSampleID = PrintSamples.PrintSampleID)
                                                  ->  Seq Scan on DigitalImages  (cost=0.00..1915.50 rows=78850 width=8) (actual time=16.126..194.911 rows=78859 loops=1)
                                                  ->  Hash  (cost=2029.98..2029.98 rows=15211 width=44) (actual time=152.128..152.128 rows=18645 loops=1)
                                                        ->  Hash Join  (cost=564.39..2029.98 rows=15211 width=44) (actual time=13.951..121.903 rows=18645 loops=1)
                                                              Hash Cond: ((PrintSamples.TestPatternName)::text = (tblTPNamesAndColors.TestPatternName)::text)
                                                              ->  Bitmap Heap Scan on PrintSamples  (cost=561.39..1781.53 rows=24891 width=40) (actual time=13.680..59.919 rows=24914 loops=1)
                                                                    Filter: ((TestPatternName)::text ~~ 'IQAF-TP8%'::text)
                                                                    ->  Bitmap [remainder of the plan truncated in the original message]

[PERFORM] Bitmap Index Scan optimization opportunity

2007-08-10 Thread Kevin Grittner
These query times are the fully cached times for both, from doing a previous 
run of the same query.  (The first one took 193.772 ms on its first run; I 
don't have a good uncached timing for the second one at this point.)
 
It seems like the first query could move the searchName filter to the Bitmap
Index Scan phase, and save 97.5% of the page retrievals in the Bitmap Heap
Scan (the index scan returns 4492 rows, but only 112 of them survive the
searchName filter).
 
-Kevin
 
 
cc=> explain analyze select * from Warrant where soundex = 'S530' and
searchName like '%,G%' and countyNo = 40;
   QUERY PLAN

 Bitmap Heap Scan on Warrant  (cost=55.37..1202.35 rows=841 width=123) (actual time=2.625..8.602 rows=112 loops=1)
   Recheck Cond: (((soundex)::text = 'S530'::text) AND ((countyNo)::smallint = 40))
   Filter: ((searchName)::text ~~ '%,G%'::text)
   ->  Bitmap Index Scan on Warrant_WarrantSoundex  (cost=0.00..55.16 rows=4240 width=0) (actual time=1.911..1.911 rows=4492 loops=1)
         Index Cond: (((soundex)::text = 'S530'::text) AND ((countyNo)::smallint = 40))
 Total runtime: 8.739 ms
(6 rows)

cc=> explain analyze select * from Warrant where soundex = 'S530' and
searchName like 'SMITH,G%' and countyNo = 40;
 
QUERY PLAN

 Index Scan using Warrant_WarrantName on Warrant  (cost=0.00..1.28 rows=1 width=123) (actual time=0.099..0.397 rows=112 loops=1)
   Index Cond: (((searchName)::text >= 'SMITH,G'::character varying) AND ((searchName)::text < 'SMITH,H'::character varying) AND ((countyNo)::smallint = 40))
   Filter: (((soundex)::text = 'S530'::text) AND ((searchName)::text ~~ 'SMITH,G%'::text))
 Total runtime: 0.510 ms
(4 rows)

cc=> \d Warrant
Table "public.Warrant"
 Column |  Type   | Modifiers
+-+---
 warrantSeqNo   | WarrantSeqNoT | not null
 countyNo   | CountyNoT | not null
 caseNo | CaseNoT   | not null
 nameL  | LastNameT | not null
 partyNo| PartyNoT  | not null
 searchName | SearchNameT   | not null
 soundex| SoundexT  | not null
 authSeqNo  | HistSeqNoT|
 dateAuthorized | DateT |
 dateDisposed   | DateT |
 dateIssued | DateT |
 dispoMethod| EventTypeT|
 dispSeqNo  | HistSeqNoT|
 histSeqNo  | HistSeqNoT|
 nameF  | FirstNameT|
 nameM  | MiddleNameT   |
 stayDate   | DateT |
 stayTime   | TimeT |
 suffix | NameSuffixT   |
 warrantDob | DateT |
Indexes:
Warrant_pkey PRIMARY KEY, btree (warrantSeqNo, countyNo)
Warrant_HistSeqNo UNIQUE, btree (caseNo, histSeqNo, countyNo, 
warrantSeqNo)
Warrant_AuthSeqNo btree (caseNo, authSeqNo, countyNo)
Warrant_CaseNo btree (caseNo, partyNo, countyNo)
Warrant_DispSeqNo btree (caseNo, dispSeqNo, countyNo)
Warrant_WarrantName btree (searchName, countyNo)
Warrant_WarrantSoundex btree (soundex, searchName, countyNo)

cc=> select version();
   version
-
 PostgreSQL 8.2.4 on i686-pc-linux-gnu, compiled by GCC gcc (GCC) 3.3.3 (SuSE 
Linux)
(1 row)





Re: [PERFORM] select count(*) performance

2007-08-10 Thread Brian Hurt

runic wrote:


Hello Group,

I'm new to PostgreSQL, therefore please forgive me a newbie
question. I have a table with ca. 1.250.000 records. When I execute
a select count(*) from table (with pgAdmin III) it takes about 40
secs. I think that takes much too long. Can you please give me hints
on where I can search for improvements?

TIA, Det
 



1) VACUUM FULL the table, maybe the whole database.
2) Buy more/faster hard disks

The problem is that count(*) on a table has to scan the whole table, due 
to the fact that Postgres uses MVCC for its concurrency control.  This 
is normally a huge win - but one of the few places where it's a loss is 
doing count(*) over a whole table.  In this case, Postgres has no choice 
but to inspect each and every row to see if it's live or not, and thus 
has no choice but to read in the whole table.


If you've been doing a lot of inserts, updates, and/or deletes to the 
table, and your autovacuum is either turned off or not aggressive 
enough, the table can be littered with dead rows that haven't 
been deleted yet.  Postgres still has to read those rows to make sure 
they're dead, so it can easily end up reading many multiples of the 
number of live rows in the table.  What vacuum does is go through 
and delete those dead rows.
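As a quick check of whether dead rows are the issue here, something like the
following (table name is a placeholder) reports how many removable and
nonremovable row versions the table holds:

    VACUUM VERBOSE your_table;

If the dead-row count dwarfs the live count, vacuuming more aggressively is
the first thing to try.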


If that isn't the problem, then it's just that you have to read the 
whole table.  If the rows are large enough, and the disk subsystem is 
slow enough, this can just take a while.  My advice in this case is to 
buy more and/or faster disks, to speed up the reading of the table.


Brian




Re: [PERFORM] Dell Hardware Recommendations

2007-08-10 Thread Merlin Moncure
On 8/10/07, Arjen van der Meijden [EMAIL PROTECTED] wrote:
 On 9-8-2007 23:50 Merlin Moncure wrote:
  Where the extra controller especially pays off is if you have to
  expand to a second tray.  It's easy to add trays but installing
  controllers on a production server is scary.

 For connectivity's sake that's not a necessity. You can either connect
 (two?) extra MD1000's to your first MD1000 or you can use the second
 external SAS port on your controller. Obviously it depends on the
 controller whether it's good enough to just add the disks to it, rather
 than adding another controller for the second tray. Whether the perc5/e
 is good enough for that, I don't know; we've only equipped ours with a
 single MD1000 holding 15x 15k rpm drives, but in our benchmarks it
 scaled pretty well going from a few to all 14 disks (+1 hotspare).

Completely correct ... I was suggesting this on performance
terms. I've never done it with the Perc/5, but have done it with some
active/active SANs and it works really well.

merlin



Re: [PERFORM] Dell Hardware Recommendations

2007-08-10 Thread Merlin Moncure
On 8/9/07, Arjen van der Meijden [EMAIL PROTECTED] wrote:
 On 9-8-2007 23:50 Merlin Moncure wrote:
  Where the extra controller especially pays off is if you have to
  expand to a second tray.  It's easy to add trays but installing
  controllers on a production server is scary.

 For connectivity's sake that's not a necessity. You can either connect
 (two?) extra MD1000's to your first MD1000 or you can use the second
 external SAS port on your controller. Obviously it depends on the
 controller whether it's good enough to just add the disks to it, rather
 than adding another controller for the second tray. Whether the perc5/e
 is good enough for that, I don't know; we've only equipped ours with a
 single MD1000 holding 15x 15k rpm drives, but in our benchmarks it
 scaled pretty well going from a few to all 14 disks (+1 hotspare).

As it happens I will have an opportunity to test the dual controller
theory.   In about a week we are picking up another md1000 and will
attach it in an active/active configuration with various
hardware/software RAID configurations, and run a battery of database
centric tests.  Results will follow.

By the way, the recent Dell servers I have seen are well built in my
opinion...better and cheaper than comparable IBM servers.  I've also
tested the IBM EXP3000, and the MD1000 is cheaper and comes standard
with a second ESM.  In my opinion, the Dell 1U 1950 is extremely well
organized in terms of layout and cooling...dual power supplies, dual
PCI-E (one low profile), plus a third custom slot for the optional
PERC 5/i which drives the backplane.

merlin



Re: [PERFORM] CLOG Patch

2007-08-10 Thread Simon Riggs
On Fri, 2007-08-10 at 13:54 -0400, Jignesh K. Shah wrote:
 I tried with CLOG at 24 also and I got linear performance up to 1250 users,
 after which it started to tank. 32 got us to 1350 users before some
 other bottleneck overtook it.

Jignesh,

Thanks for testing that.

It's not very clear to everybody why an extra 100 users is useful and it
would certainly help your case if you can explain.

-- 
  Simon Riggs
  EnterpriseDB  http://www.enterprisedb.com




Re: [PERFORM] Bitmap Index Scan optimization opportunity

2007-08-10 Thread Heikki Linnakangas
Kevin Grittner wrote:
 These query times are the fully cached times for both, from doing a 
 previous run of the same query.  (The first one took 193.772 ms on its first 
 run; I don't have a good uncached timing for the second one at this point.)
  
 It seems like the first query could move the searchName filter to the Bitmap 
 Index Scan phase, and save 97.5% of the page retrievals in the Bitmap Heap 
 Scan.

Yes it could in theory, but unfortunately the planner/executor doesn't
have the capability to do that. An indexed value is never handed back
from the index; the indexed values are only used to satisfy index
conditions, not filters. It's been discussed before (see
http://archives.postgresql.org/pgsql-performance/2006-09/msg00080.php),
but it's not easy to implement so no one's done it yet.

-- 
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com



Re: [PERFORM] Dell Hardware Recommendations

2007-08-10 Thread Merlin Moncure
On 8/10/07, Decibel! [EMAIL PROTECTED] wrote:
 On Thu, Aug 09, 2007 at 05:50:10PM -0400, Merlin Moncure wrote:
  Raid 10 is usually better for databases but in my experience it's a
  roll of the dice.  If you factor cost into the matrix a SAS raid 05
  might outperform a SATA raid 10 because you are getting better storage
  utilization out of the drives (n - 2 vs. n / 2).  Then again, you
  might not.

 It's going to depend heavily on the controller and the workload.
 Theoretically, if most of your writes are to stripes that the controller
 already has cached then you could actually out-perform RAID10. But
 that's a really, really big IF, because if the stripe isn't in cache you
 have to read the entire thing in before you can do the write... and that
 costs *a lot*.

 Also, a good RAID controller can spread reads out across both drives in
 each mirror on a RAID10. Though, there is an argument for not doing
 that... it makes it much less likely that both drives in a mirror will
 fail close enough to each other that you'd lose that chunk of data.

 Speaking of failures, keep in mind that a normal RAID5 puts you only 2
 drive failures away from data loss, while with RAID10 you can
 potentially lose half the array without losing any data. If you do RAID5
 with multiple parity copies that does change things; I'm not sure which
 is better at that point (I suspect it matters how many drives are
 involved).

When making hardware recommendations I always suggest buying two
servers and rigging PITR with a warm standby.  This allows you to adjust
the system a little bit for performance over fault tolerance.
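As a minimal sketch of the primary side of such a setup (8.2-era; the
archive destination is illustrative -- %p expands to the WAL segment's path
and %f to its file name):

    archive_command = 'cp %p /mnt/standby/archive/%f'

The standby then replays those segments continuously via a restore_command
in its recovery.conf.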

Regarding RAID controllers, I've found performance to be quite
variable as stated, especially with regard to RAID 5.  I've also,
unfortunately, found bonnie++ to not be very reflective of actual
performance in high-stress environments.  We have an IBM DS4200 that
bangs out some pretty impressive numbers with our app using SATA, while
the bonnie++ numbers fairly suck.

merlin



Re: [PERFORM] Dell Hardware Recommendations

2007-08-10 Thread Merlin Moncure
On 8/10/07, Vivek Khera [EMAIL PROTECTED] wrote:

 On Aug 9, 2007, at 3:47 PM, Joe Uhl wrote:

  PowerEdge 1950 paired with a PowerVault MD1000
  2 x Quad Core Xeon E5310
  16 GB 667MHz RAM (4 x 4GB leaving room to expand if we need to)
  PERC 5/E Raid Adapter
  2 x 146 GB SAS in Raid 1 for OS + logs.
  A bunch of disks in the MD1000 configured in Raid 10 for Postgres
  data.

 I'd avoid Dell disk systems if at all possible.  I know, I've been
 through the pain. You really want someone else providing your RAID
 card and disk array, especially if the 5/E card is based on the
 Adaptec devices.

I'm not so sure I agree.  They are using LSI firmware now (and so is
everyone else).  The servers are well built (highly subjective, I
admit) and configurable.  I have had some bad experiences with IBM
gear (Adaptec controller though), and white-box parts, 3ware, etc.  I
can tell you that Dell got us the storage and the server in record
time.

I do agree on Adaptec, however.

merlin



Re: [PERFORM] Dell Hardware Recommendations

2007-08-10 Thread Joel Fradkin
I know we bought the 4-proc Opteron unit with the SAS JBOD from Dell and it
has been extremely excellent in terms of performance.

It was like 3 times faster than our old Dell 4-proc, which had Xeon
processors.

The newer one has had a few issues (I am running RedHat AS4 since Dell
supports it). I have had one kernel failure, but it has been up for like a
year. Other than that, no issues: a reboot fixed whatever caused the failure,
I have not seen it happen again, and it's been a few months.

I am definitely going Dell for any other server needs; their pricing is so
competitive now, and the machines I bought, both the 1U 2-proc and the larger
4-proc, have been very good.

Joel Fradkin

 

Wazagua, Inc.
2520 Trailmate Dr
Sarasota, Florida 34243
Tel.  941-753-7111 ext 305

 

[EMAIL PROTECTED]
www.wazagua.com
Powered by Wazagua
Providing you with the latest Web-based technology & advanced tools.
(C) 2004. WAZAGUA, Inc. All rights reserved. WAZAGUA, Inc
 This email message is for the use of the intended recipient(s) and may
contain confidential and privileged information.  Any unauthorized review,
use, disclosure or distribution is prohibited.  If you are not the intended
recipient, please contact the sender by reply email and delete and destroy
all copies of the original message, including attachments.


-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Merlin Moncure
Sent: Friday, August 10, 2007 1:31 PM
To: Arjen van der Meijden
Cc: Joe Uhl; pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Dell Hardware Recommendations

On 8/9/07, Arjen van der Meijden [EMAIL PROTECTED] wrote:
 On 9-8-2007 23:50 Merlin Moncure wrote:
  Where the extra controller especially pays off is if you have to
  expand to a second tray.  It's easy to add trays but installing
  controllers on a production server is scary.

 For connectivity's sake that's not a necessity. You can either connect
 (two?) extra MD1000's to your first MD1000 or you can use the second
 external SAS port on your controller. Obviously it depends on the
 controller whether it's good enough to just add the disks to it, rather
 than adding another controller for the second tray. Whether the perc5/e
 is good enough for that, I don't know; we've only equipped ours with a
 single MD1000 holding 15x 15k rpm drives, but in our benchmarks it
 scaled pretty well going from a few to all 14 disks (+1 hotspare).

As it happens I will have an opportunity to test the dual controller
theory.   In about a week we are picking up another md1000 and will
attach it in an active/active configuration with various
hardware/software RAID configurations, and run a battery of database
centric tests.  Results will follow.

By the way, the recent Dell servers I have seen are well built in my
opinion...better and cheaper than comparable IBM servers.  I've also
tested the IBM EXP3000, and the MD1000 is cheaper and comes standard
with a second ESM.  In my opinion, the Dell 1U 1950 is extremely well
organized in terms of layout and cooling...dual power supplies, dual
PCI-E (one low profile), plus a third custom slot for the optional
PERC 5/i which drives the backplane.

merlin





Re: [PERFORM] How to ENABLE SQL capturing???

2007-08-10 Thread Jeff Frost

Michelle,

What platform are you on?  If you're on Linux, then logging to syslog will 
likely show up in the /var/log/messages file.


On Fri, 10 Aug 2007, smiley2211 wrote:



Hello all,

I have ENABLED 'log_min_duration_statement = 100' but I can't figure
out WHERE it's writing the commands to ... I have it set to 'syslogs' but
this file is 0 bytes :confused:

Should I set other parameters in my postgresql.conf file???

Thanks...Michelle


Bryan Murphy-3 wrote:


we currently have logging enabled for all queries over 100ms, and keep
the last 24 hours of logs before we rotate them.  I've found this tool
very helpful in diagnosing new performance problems that crop up:

http://pgfouine.projects.postgresql.org/

Bryan

On 8/8/07, Steinar H. Gunderson [EMAIL PROTECTED] wrote:

On Wed, Aug 08, 2007 at 01:02:24PM -0700, smiley2211 wrote:

I am trying to enable capturing of the submitted code via an
application...how do I do this in Postgres?  Performance is SLOW on my
server and I have autovacuum enabled as well as rebuilt

indexes...what else

should be looked at?


Try log_min_duration_statement = 100 in postgresql.conf; it will show all
statements that take more than 100ms. Set to 0 to log _all_ statements, or
-1 to turn the logging back off.

/* Steinar */
--
Homepage: http://www.sesse.net/











--
Jeff Frost, Owner   [EMAIL PROTECTED]
Frost Consulting, LLC   http://www.frostconsultingllc.com/
Phone: 650-780-7908 FAX: 650-649-1954



Re: [PERFORM] Dell Hardware Recommendations

2007-08-10 Thread Vivek Khera


On Aug 9, 2007, at 3:47 PM, Joe Uhl wrote:


PowerEdge 1950 paired with a PowerVault MD1000
2 x Quad Core Xeon E5310
16 GB 667MHz RAM (4 x 4GB leaving room to expand if we need to)
PERC 5/E Raid Adapter
2 x 146 GB SAS in Raid 1 for OS + logs.
A bunch of disks in the MD1000 configured in Raid 10 for Postgres  
data.


I'd avoid Dell disk systems if at all possible.  I know, I've been  
through the pain. You really want someone else providing your RAID  
card and disk array, especially if the 5/E card is based on the  
Adaptec devices.





Re: [PERFORM] select count(*) performance

2007-08-10 Thread Merlin Moncure
On 8/8/07, runic [EMAIL PROTECTED] wrote:
 Hello Group,

 I'm new to PostgreSQL, therefore please forgive me a newbie
 question. I have a table with ca. 1.250.000 records. When I execute
 a select count(*) from table (with pgAdmin III) it takes about 40
 secs. I think that takes much too long. Can you please give me hints
 on where I can search for improvements?

This is a FAQ.  This operation is optimized in some other database
engines but not in PostgreSQL, due to the way the locking engine works.
There are many workarounds; maybe the easiest is to get an approximate
count using
select reltuples from pg_class where relname = 'your_table' and relkind = 'r';
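Note that reltuples is only an estimate, refreshed by VACUUM and ANALYZE, so
a sketch of getting a reasonably current number would be (table name is a
placeholder):

    ANALYZE your_table;  -- refresh the statistics first
    SELECT reltuples::bigint FROM pg_class
     WHERE relname = 'your_table' AND relkind = 'r';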

merlin



Re: [PERFORM] How to ENABLE SQL capturing???

2007-08-10 Thread smiley2211

Jeff,

You are CORRECT...my queries were going to /var/log/messages...had to get
the Linux Admin to grant me READ access to the file...

Thanks for your reply.
Michelle.


Jeff Frost wrote:
 
 Michelle,
 
 What platform are you on?  If you're on Linux, then logging to syslog will 
 likely show up in the /var/log/messages file.
 
 On Fri, 10 Aug 2007, smiley2211 wrote:
 

 Hello all,

 I have ENABLED 'log_min_duration_statement = 100' but I can't figure
 out WHERE it's writing the commands to ... I have it set to 'syslogs' but
 this file is 0 bytes :confused:

 Should I set other parameters in my postgresql.conf file???

 Thanks...Michelle


 Bryan Murphy-3 wrote:

 we currently have logging enabled for all queries over 100ms, and keep
 the last 24 hours of logs before we rotate them.  I've found this tool
 very helpful in diagnosing new performance problems that crop up:

 http://pgfouine.projects.postgresql.org/

 Bryan

 On 8/8/07, Steinar H. Gunderson [EMAIL PROTECTED] wrote:
 On Wed, Aug 08, 2007 at 01:02:24PM -0700, smiley2211 wrote:
 I am trying to enable capturing of the submitted code via an
 application...how do I do this in Postgres?  Performance is SLOW on my
 server and I have autovacuum enabled as well as rebuilt
 indexes...what else
 should be looked at?

 Try log_min_duration_statement = 100 in postgresql.conf; it will show all
 statements that take more than 100ms. Set to 0 to log _all_ statements, or
 -1 to turn the logging back off.

 /* Steinar */
 --
 Homepage: http://www.sesse.net/







 
 -- 
 Jeff Frost, Owner [EMAIL PROTECTED]
 Frost Consulting, LLC http://www.frostconsultingllc.com/
 Phone: 650-780-7908   FAX: 650-649-1954
 
 
 





Re: [PERFORM] Help optimize view

2007-08-10 Thread Tom Lane
Relyea, Mike [EMAIL PROTECTED] writes:
 SELECT PrintSamples.MachineID, PrintSamples.PrintCopyID,
 tblColors.ColorID, avg(ParameterValues.ParameterValue) AS
 Mottle_NMF
    FROM AnalysisModules
    JOIN (tblColors
    JOIN (tblTPNamesAndColors
    JOIN PrintSamples ON tblTPNamesAndColors.TestPatternName::text
 = PrintSamples.TestPatternName::text
    JOIN (DigitalImages
    JOIN PrintSampleAnalyses ON DigitalImages.ImageID =
 PrintSampleAnalyses.ImageID
    JOIN (ParameterNames
    JOIN (Measurements
    JOIN ParameterValues ON Measurements.MeasurementID =
 ParameterValues.MeasurementID) ON ParameterNames.ParameterID =
 ParameterValues.ParameterID) ON PrintSampleAnalyses.psaID =
 Measurements.psaID) ON PrintSamples.PrintSampleID =
 DigitalImages.PrintSampleID) ON tblColors.ColorID =
 tblTPNamesAndColors.ColorID) ON AnalysisModules.MetricID =
 Measurements.MetricID

Try increasing join_collapse_limit --- you have just enough tables here
that the planner isn't going to consider all possible join orders.
And it sorta looks like it's picking a bad one.
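For example (a sketch: the value just needs to exceed the number of
relations being joined, and the default is 8):

    SET join_collapse_limit = 12;
    SET from_collapse_limit = 12;  -- often raised in tandem for queries over views
    -- then re-run the EXPLAIN ANALYZE above and compare plans

Both settings can be changed per-session, so this is cheap to experiment
with before touching postgresql.conf.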

regards, tom lane



Re: [PERFORM] How to ENABLE SQL capturing???

2007-08-10 Thread Joshua D. Drake

smiley2211 wrote:
 Jeff,
 
 You are CORRECT...my queries were going to /var/log/messages...had to get
 the Linux Admin to grant me READ access to the file...

You may want to actually get that to stop. Syslog is a notorious
performance bottleneck for PostgreSQL.
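A minimal sketch of that alternative -- having PostgreSQL write its own log
files instead of going through syslog (8.2-era parameter names; 8.3 renames
redirect_stderr to logging_collector; values illustrative):

    log_destination = 'stderr'
    redirect_stderr = on              # capture stderr into rotated log files
    log_directory = 'pg_log'          # relative to the data directory
    log_filename = 'postgresql-%Y-%m-%d_%H%M%S.log'
    log_min_duration_statement = 100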

 
 Thanks for your reply.
 Michelle.
 
 
 Jeff Frost wrote:
 Michelle,

 What platform are you on?  If you're on Linux, then logging to syslog will 
 likely show up in the /var/log/messages file.

 On Fri, 10 Aug 2007, smiley2211 wrote:

 Hello all,

 I have ENABLED 'log_min_duration_statement = 100' but I can't figure
 out WHERE it's writing the commands to ... I have it set to 'syslogs' but
 this file is 0 bytes :confused:

 Should I set other parameters in my postgresql.conf file???

 Thanks...Michelle


 Bryan Murphy-3 wrote:
 we currently have logging enabled for all queries over 100ms, and keep
 the last 24 hours of logs before we rotate them.  I've found this tool
 very helpful in diagnosing new performance problems that crop up:

 http://pgfouine.projects.postgresql.org/

 Bryan

 On 8/8/07, Steinar H. Gunderson [EMAIL PROTECTED] wrote:
 On Wed, Aug 08, 2007 at 01:02:24PM -0700, smiley2211 wrote:
 I am trying to enable capturing of the submitted code via an
 application...how do I do this in Postgres?  Performance is SLOW on my
 server and I have autovacuum enabled as well as rebuilt
 indexes...what else
 should be looked at?
 Try log_min_duration_statement = 100 in postgresql.conf; it will show all
 statements that take more than 100ms. Set to 0 to log _all_ statements, or
 -1 to turn the logging back off.

 /* Steinar */
 --
 Homepage: http://www.sesse.net/





 -- 
 Jeff Frost, Owner      [EMAIL PROTECTED]
 Frost Consulting, LLC  http://www.frostconsultingllc.com/
 Phone: 650-780-7908    FAX: 650-649-1954

 ---(end of broadcast)---
 TIP 9: In versions below 8.0, the planner will ignore your desire to
choose an index scan if your joining column's datatypes do not
match


 


--

  === The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564   24x7/Emergency: +1.800.492.2240
PostgreSQL solutions since 1997  http://www.commandprompt.com/
UNIQUE NOT NULL
Donate to the PostgreSQL Project: http://www.postgresql.org/about/donate
PostgreSQL Replication: http://www.commandprompt.com/products/

