Re: [PERFORM] Opteron vs Xeon (Was: What to do with 6 disks?)

2005-04-20 Thread J. Andrew Rogers
I've seen quite a few folks touting the Opteron as 2.5x 
faster with postgres than a Xeon box.  What makes the 
Opteron so quick?  Is it that Postgres really prefers to 
run in 64-bit mode?

I don't know about 2.5x faster (perhaps on specific types 
of loads), but the reason Opterons rock for database 
applications is their insanely good memory bandwidth and 
latency that scales much better than the Xeon.  Opterons 
also have a ccNUMA-esque I/O fabric and two dedicated 
on-die memory channels *per processor* -- no shared bus 
there, closer to real UNIX server iron than a glorified 
PC.

We run a large Postgres database on a dual Opteron in 
32-bit mode that crushes Xeons running at higher clock 
speeds.  It has little to do with bitness or theoretical 
instruction dispatch, and everything to do with the 
superior memory controller and I/O fabric.  Databases are 
all about moving chunks of data around and the Opteron 
systems were engineered to do this very well and in a very 
scalable fashion.  For the money, it is hard to argue with 
the price/performance of Opteron based servers.  We 
started with one dual Opteron postgres server just over a 
year ago (with an equivalent uptime) and have considered 
nothing but Opterons for database servers since.  Opterons 
really are clearly superior to Xeons for this application. 
I don't work for AMD, just a satisfied customer. :-)

re: 6 disks.  Unless you are tight on disk space, a hot 
spare might be nice as well depending on your needs.

Cheers,
J. Andrew Rogers
---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [PERFORM] Opteron vs Xeon (Was: What to do with 6 disks?)

2005-04-20 Thread Jeff Frost
On Tue, 19 Apr 2005, J. Andrew Rogers wrote:
I don't know about 2.5x faster (perhaps on specific types of loads), but the 
reason Opterons rock for database applications is their insanely good memory 
bandwidth and latency that scales much better than the Xeon.  Opterons also 
have a ccNUMA-esque I/O fabric and two dedicated on-die memory channels *per 
processor* -- no shared bus there, closer to real UNIX server iron than a 
glorified PC.
Thanks J!  That's exactly what I was suspecting it might be.  Actually, I 
found an anandtech benchmark that shows the Opteron coming in at close to 2.0x 
performance:

http://www.anandtech.com/linux/showdoc.aspx?i=2163&p=2
It's an Opteron 150 (2.4ghz) vs. Xeon 3.6ghz from August.  I wonder if the 
differences are more pronounced with the newer Opterons.

-Jeff
---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
 joining column's datatypes do not match


Re: [PERFORM] immutable functions vs. join for lookups ?

2005-04-20 Thread Dawid Kuroczko
On 4/20/05, Jim C. Nasby [EMAIL PROTECTED] wrote:
 You should re-run the function test using SQL as the function language
 instead of plpgsql. There might be some performance to be had there.

Yay!  You're right!  I wonder why I had forgotten about LANGUAGE SQL. :)
It's 30 seconds vs 5 seconds for CASE ... END inside PL/pgSQL vs CASE ... END
LANGUAGE SQL. :)  I.e. it's almost the same as in-place entered SQL.
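
For illustration, the two variants look roughly like this (a sketch with a
hypothetical lookup function, not the actual schema from this thread; 7.4-era
quoting, so the function body uses doubled single quotes):

CREATE FUNCTION status_name_plpgsql(integer) RETURNS text AS '
BEGIN
    RETURN CASE $1 WHEN 1 THEN ''active''
                   WHEN 2 THEN ''locked''
                   ELSE ''unknown'' END;
END;
' LANGUAGE plpgsql IMMUTABLE;   -- interpreted function call per output row

CREATE FUNCTION status_name_sql(integer) RETURNS text AS '
    SELECT CASE $1 WHEN 1 THEN ''active''
                   WHEN 2 THEN ''locked''
                   ELSE ''unknown'' END;
' LANGUAGE sql IMMUTABLE;       -- simple enough to be inlined into the query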

  Regards,
  Dawid

---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
  joining column's datatypes do not match


Re: [PERFORM] immutable functions vs. join for lookups ?

2005-04-20 Thread Christopher Kings-Lynne
Yay!  You're right!  I wonder why I had forgotten about LANGUAGE SQL. :)
It's 30 seconds vs 5 seconds for CASE ... END inside PL/pgSQL vs CASE ... END
LANGUAGE SQL. :)  I.e. it's almost the same as in-place entered SQL.
Probably because simple SQL functions get inlined by the optimiser.
Chris
---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [PERFORM] How to improve db performance with $7K?

2005-04-20 Thread Dawid Kuroczko
On 4/19/05, Mohan, Ross [EMAIL PROTECTED] wrote:
 Clustered file systems is the first/best example that
 comes to mind. Host A and Host B can both request from diskfarm, eg.

Something like a Global File System?

http://www.redhat.com/software/rha/gfs/

(I believe some other company did develop it some time in the past;
hmm, probably the guys doing LVM stuff?).

Anyway the idea is that two machines have the same filesystem mounted and
they share it. The locking I believe is handled by communication
between computers using host-to-host SCSI commands.

I never used it, I've only heard about it from a friend who used to
work with it at CERN.

   Regards,
   Dawid

---(end of broadcast)---
TIP 8: explain analyze is your friend


[PERFORM] When are index scans used over seq scans?

2005-04-20 Thread Richard van den Berg
We have a table with 1M rows that contain sessions with start and
finish timestamps. When joining this table with a 10k table with rounded
timestamps, explain shows me sequential scans are used, and the join
takes about 6 hours (2s per seq scan on session table * 10080):

 Nested Loop  (cost=252.80..233025873.16 rows=1035480320 width=97)
   Join Filter: ((outer.starttime <= inner.ts) AND (outer.finishtime >= inner.ts))
   ->  Seq Scan on sessions us  (cost=0.00..42548.36 rows=924536 width=105)
   ->  Materialize  (cost=252.80..353.60 rows=10080 width=8)
         ->  Seq Scan on duration du  (cost=0.00..252.80 rows=10080 width=8)

However, during the initial loading of the data (we first load into text
tables, then convert to tables using timestamps etc, then run this
query) the same query took only 12 minutes. While debugging, I increased
cpu_tuple_cost to 0.1 (from 0.01). Now the explain shows an index scan,
and the run time comes down to 11 minutes:

 Nested Loop  (cost=0.00..667700310.42 rows=1035480320 width=97)
   ->  Seq Scan on sessions us  (cost=0.00..125756.60 rows=924536 width=105)
   ->  Index Scan using ix_du_ts on duration du  (cost=0.00..604.46 rows=1120 width=8)
         Index Cond: ((outer.starttime <= du.ts) AND (outer.finishtime >= du.ts))
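
For reference, the query is of roughly this shape (reconstructed from the
join filter above; the actual statement and schema were not posted):

SELECT *
FROM sessions us, duration du
WHERE du.ts BETWEEN us.starttime AND us.finishtime;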

I am glad that I found a way to force the use of the index, but still
can't explain why in the initial run the planner made the right choice,
but now I need to give it a hand. Could this have to do with the
statistics of the tables? I make very sure (during the initial load and
while testing) that I vacuum analyze all tables after I fill them.

I'm running postgres 7.4.7.

Any help is appreciated.

-- 
Richard van den Berg, CISSP
---
Trust Factory B.V. | www.dna-portal.net
Bazarstraat 44a|  www.trust-factory.com
2518AK The Hague   |  Phone: +31 70 3620684
The Netherlands|  Fax  : +31 70 3603009
---

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [PERFORM] When are index scans used over seq scans?

2005-04-20 Thread John A Meinel
Richard van den Berg wrote:
We have a table with 1M rows that contain sessions with start and
finish timestamps. When joining this table with a 10k table with rounded
timestamps, explain shows me sequential scans are used, and the join
takes about 6 hours (2s per seq scan on session table * 10080):
Nested Loop  (cost=252.80..233025873.16 rows=1035480320 width=97)
  Join Filter: ((outer.starttime <= inner.ts) AND (outer.finishtime >= inner.ts))
  ->  Seq Scan on sessions us  (cost=0.00..42548.36 rows=924536 width=105)
  ->  Materialize  (cost=252.80..353.60 rows=10080 width=8)
        ->  Seq Scan on duration du  (cost=0.00..252.80 rows=10080 width=8)
However, during the initial loading of the data (we first load into text
tables, then convert to tables using timestamps etc, then run this
query) the same query took only 12 minutes. While debugging, I increased
cpu_tuple_cost to 0.1 (from 0.01). Now the explain shows an index scan,
and the run time comes down to 11 minutes:
Nested Loop  (cost=0.00..667700310.42 rows=1035480320 width=97)
  ->  Seq Scan on sessions us  (cost=0.00..125756.60 rows=924536 width=105)
  ->  Index Scan using ix_du_ts on duration du  (cost=0.00..604.46 rows=1120 width=8)
        Index Cond: ((outer.starttime <= du.ts) AND (outer.finishtime >= du.ts))
I am glad that I found a way to force the use of the index, but still
can't explain why in the initial run the planner made the right choice,
but now I need to give it a hand. Could this have to do with the
statistics of the tables? I make very sure (during the initial load and
while testing) that I vacuum analyze all tables after I fill them.
I'm running postgres 7.4.7.
Any help is appreciated.

I believe the problem is that postgres doesn't recognize how restrictive
a date-range is unless it uses constants.
So saying:
select blah from du WHERE time between '2004-10-10' and '2004-10-15';
Will properly use the index, because it realizes it only returns a few rows.
However
select blah from du, us where du.ts between us.starttime and us.finishtime;
Doesn't know how selective that BETWEEN is.
This has been discussed as a future improvement to the planner (in
8.*).  I don't know the current status.
Also, in the future, you really should post your table schema, and
explain analyze instead of just explain. (I realize that with a 6hr
query it is a little painful.)
Notice that in the above plans, the expected number of rows drops from
10k down to 1k (which is probably where the planner decides to switch).
And if you actually ran the EXPLAIN ANALYZE, the actual number of rows
would probably be much lower still.
Probably you should try to find out the status of multi-table
selectivity. It was discussed in the last couple of months.
John
=:->




Re: [PERFORM] When are index scans used over seq scans?

2005-04-20 Thread Tom Lane
Richard van den Berg [EMAIL PROTECTED] writes:
 We have a table with 1M rows that contain sessions with start and
 finish timestamps. When joining this table with a 10k table with rounded
 timestamps, explain shows me sequential scans are used, and the join
 takes about 6 hours (2s per seq scan on session table * 10080):

  Nested Loop  (cost=252.80..233025873.16 rows=1035480320 width=97)
    Join Filter: ((outer.starttime <= inner.ts) AND (outer.finishtime >= inner.ts))
    ->  Seq Scan on sessions us  (cost=0.00..42548.36 rows=924536 width=105)
    ->  Materialize  (cost=252.80..353.60 rows=10080 width=8)
          ->  Seq Scan on duration du  (cost=0.00..252.80 rows=10080 width=8)

The explain shows no such thing.  What is the *actual* runtime of
each plan per EXPLAIN ANALYZE, please?

(In general, any time you are complaining about planner misbehavior,
it is utterly pointless to give only planner estimates and not reality.
By definition, you don't think the estimates are right.)

regards, tom lane

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [PERFORM] Opteron vs Xeon (Was: What to do with 6 disks?)

2005-04-20 Thread William Yu
I posted this link a few months ago and there was some surprise over the 
difference in postgresql compared to other DBs. (Not much surprise in 
Opteron stomping on Xeon in pgsql as most people here have had that 
experience -- the surprise was in how much smaller the difference was in 
other DBs.) If it was across the board +100% in MS-SQL, MySQL, etc. -- 
you could chalk it up to overall better CPU architecture. Most of the time 
though, the numbers I've seen show +0-30% for [insert DB here] and a 
huge whopping gain for pgsql. Why the pronounced preference for 
postgresql? I'm not sure it was ever explained fully.

BTW, the Anandtech test compares single CPU systems w/ 1GB of RAM. Go to 
dual/quad and SMP Xeon will suffer even more since it has to share a 
fixed amount of FSB/memory bandwidth amongst all CPUs. Xeons also seem 
to suffer more from context-switch storms. Go > 4GB of RAM and the Xeon 
suffers another hit due to the lack of a 64-bit IOMMU. Devices cannot 
map to addresses > 4GB which means the OS has to do extra work in 
copying data from/to > 4GB anytime you have IO. (Although this penalty 
might exist all the time in 64-bit mode for Xeon if Linux/Windows took 
the expedient and less-buggy route of using a single method versus 
checking whether target addresses are < or > 4GB.)


Jeff Frost wrote:
On Tue, 19 Apr 2005, J. Andrew Rogers wrote:
I don't know about 2.5x faster (perhaps on specific types of loads), 
but the reason Opterons rock for database applications is their 
insanely good memory bandwidth and latency that scales much better 
than the Xeon.  Opterons also have a ccNUMA-esque I/O fabric and two 
dedicated on-die memory channels *per processor* -- no shared bus 
there, closer to real UNIX server iron than a glorified PC.

Thanks J!  That's exactly what I was suspecting it might be.  Actually, 
I found an anandtech benchmark that shows the Opteron coming in at close 
to 2.0x performance:

http://www.anandtech.com/linux/showdoc.aspx?i=2163&p=2
It's an Opteron 150 (2.4ghz) vs. Xeon 3.6ghz from August.  I wonder if 
the differences are more pronounced with the newer Opterons.

-Jeff
---(end of broadcast)---
TIP 6: Have you searched our list archives?
  http://archives.postgresql.org


Re: [PERFORM] When are index scans used over seq scans?

2005-04-20 Thread Richard van den Berg
John A Meinel wrote:
 I believe the problem is that postgres doesn't recognize how restrictive
 a date-range is unless it uses constants.

And it does when using BETWEEN with int for example? Impressive. :-)

 select blah from du WHERE time between '2004-10-10' and '2004-10-15';
 Will properly use the index, because it realizes it only returns a few
 rows.

Correct, it does.

 Probably you should try to find out the status of multi-table
 selectivity. It was discussed in the last couple of months.

I can't find the posts you are referring to. What is the principle of
multi-table selectivity?

Your explanation sounds very plausible. I don't mind changing the
cpu_tuple_cost before running BETWEEN with timestamps; they are easy
enough to spot.
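
That can be done per session, since cpu_tuple_cost is an ordinary run-time
setting (a sketch; no postgresql.conf change needed):

SET cpu_tuple_cost = 0.1;
-- run the timestamp BETWEEN join here
RESET cpu_tuple_cost;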

Thanks,

-- 
Richard van den Berg, CISSP
---
Trust Factory B.V. | www.dna-portal.net
Bazarstraat 44a|  www.trust-factory.com
2518AK The Hague   |  Phone: +31 70 3620684
The Netherlands|  Fax  : +31 70 3603009
---

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [PERFORM] Foreign key slows down copy/insert

2005-04-20 Thread Vivek Khera
On Apr 14, 2005, at 7:59 AM, Richard van den Berg wrote:
How do I explain why test cases 2 and 3 do not come close to case 1? 
Am I missing something obvious?
there's cost involved with enforcing the FK: if your indexes can't be 
used then you're doing a boatload of sequential scans to find and lock 
the referenced rows in the parent tables.

Make sure you have indexes on your FK columns (on *both* tables), and 
that the data type on both tables is the same.
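
A minimal sketch of that advice, with hypothetical parent/child tables (the
referenced key already has an index from its PRIMARY KEY or UNIQUE
constraint, but the referencing column gets none automatically):

CREATE INDEX child_parent_id_idx ON child (parent_id);
-- and keep child.parent_id the same type as the referenced parent.id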

Vivek Khera, Ph.D.
+1-301-869-4449 x806




Re: [PERFORM] Spend 7K *WHERE*? WAS Intel SRCS16 SATA raid? and How

2005-04-20 Thread Vivek Khera
On Apr 15, 2005, at 8:10 PM, Ron Mayer wrote:
For example, I didn't see many other $7000 proposals have
have nearly 10GB of ram, or over a dozen CPUs (even counting
the raid controllers), or over half a terabyte of storage,
or capable of 5-10 Gbit/sec of network traffic...  The extra
And how much are you spending on the switch that will carry 10Gb/sec 
traffic?

capacity would allow me to have redundancy that would somewhat
make up for the flakier hardware, no raid, etc.
it would work for some class of applications which are pretty much 
read-only.  and don't forget to factor in the overhead of the 
replication...

Thoughts?  Over the next couple months I'll be evaluating
a cluster of 4 systems almost exactly as I described (but
with cheaper dual hard drives in each system), for a GIS
system that does lend itself well to application-level
partitioning.
I'd go with fewer bigger boxes with RAID so i can sleep better at night 
:-)

Vivek Khera, Ph.D.
+1-301-869-4449 x806




Re: [PERFORM] What to do with 6 disks?

2005-04-20 Thread Vivek Khera
On Apr 19, 2005, at 11:07 PM, Josh Berkus wrote:
RAID1 2 disks OS, pg_xlog
RAID 1+0 4 disks pgdata
This is my preferred setup, but I do it with 6 disks on RAID10 for 
data, and since I have craploads of disk space I set checkpoint 
segments to 256 (and checkpoint timeout to 5 minutes)
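
In postgresql.conf terms that is (checkpoint_timeout takes seconds):

checkpoint_segments = 256
checkpoint_timeout = 300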


Vivek Khera, Ph.D.
+1-301-869-4449 x806




Re: [PERFORM] Opteron vs Xeon (Was: What to do with 6 disks?)

2005-04-20 Thread Vivek Khera
On Apr 20, 2005, at 12:40 AM, Jeff Frost wrote:
I've seen quite a few folks touting the Opteron as 2.5x faster with 
postgres than a Xeon box.  What makes the Opteron so quick?  Is it 
that Postgres really prefers to run in 64-bit mode?

The I/O path on the opterons seems to be much faster, and having 64-bit 
all the way to the disk controller helps...  just be sure to run a 
64-bit version of your OS.

Vivek Khera, Ph.D.
+1-301-869-4449 x806




Re: [PERFORM] When are index scans used over seq scans?

2005-04-20 Thread John A Meinel
Richard van den Berg wrote:
John A Meinel wrote:

I believe the problem is that postgres doesn't recognize how restrictive
a date-range is unless it uses constants.

And it does when using BETWEEN with int for example? Impressive. :-)

select blah from du WHERE time between '2004-10-10' and '2004-10-15';
Will properly use the index, because it realizes it only returns a few
rows.

Correct, it does.

Probably you should try to find out the status of multi-table
selectivity. It was discussed in the last couple of months.

I can't find the posts you are referring to. What is the principle of
multi-table selectivity?
Your explanation sounds very plausible. I don't mind changing the
cpu_tuple_cost before running BETWEEN with timestamps; they are easy
enough to spot.
Thanks,

Well, there was a thread titled "date - range".
There is also "recognizing range constraints", which started with "plan
for relatively simple query seems to be very inefficient".
Sorry that I gave you poor search terms.
Anyway, "date - range" gives an interesting workaround. Basically you
store date ranges with a different structure, which allows fast index
lookups.
The other threads are just discussing the possibility of improving the
planner so that it recognizes that WHERE a > b AND a < c is generally more
restrictive.
There was a discussion about how to estimate selectivity, but I think it
mostly boils down to: except for pathological cases, a > b AND a < c is
always more restrictive than just a > b, or a < c.
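
A quick illustration of that blind spot (my sketch, not from the threads
mentioned; table names follow the plans earlier in this thread):

-- constant bounds: the planner can consult the column histogram
EXPLAIN SELECT * FROM duration du
WHERE du.ts > '2005-01-01' AND du.ts < '2005-01-05';

-- bounds taken from the joined row: each inequality is costed
-- independently, so the combined restriction is badly overestimated
EXPLAIN SELECT * FROM sessions us, duration du
WHERE du.ts > us.starttime AND du.ts < us.finishtime;
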
Some of it may also be found in pgsql-hackers, rather than
pgsql-performance, but I'm not subscribed to -hackers, so most of it
should be in -performance.
John
=:->
caveat, I'm not a developer, I just read a lot of the list.




Re: [PERFORM] Opteron vs Xeon (Was: What to do with 6 disks?)

2005-04-20 Thread Anjan Dave
In terms of vendor specific models -

Does anyone have any good/bad experiences/recommendations for a 4-way
Opteron from Sun (v40z, 6 internal drives) or HP (DL585 5 internal
drives) models?

This is in comparison with the new Dell 6850 (it has PCIexpress, faster
FSB 667MHz, which doesn't match up with AMD's total IO bandwidth, but
much better than previous 6650s).

Thanks,
Anjan


-Original Message-
From: William Yu [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, April 20, 2005 11:10 AM
To: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Opteron vs Xeon (Was: What to do with 6 disks?)

I posted this link a few months ago and there was some surprise over the 
difference in postgresql compared to other DBs. (Not much surprise in 
Opteron stomping on Xeon in pgsql as most people here have had that 
experience -- the surprise was in how much smaller the difference was in 
other DBs.) If it was across the board +100% in MS-SQL, MySQL, etc. -- 
you could chalk it up to overall better CPU architecture. Most of the time 
though, the numbers I've seen show +0-30% for [insert DB here] and a 
huge whopping gain for pgsql. Why the pronounced preference for 
postgresql? I'm not sure it was ever explained fully.

BTW, the Anandtech test compares single CPU systems w/ 1GB of RAM. Go to 
dual/quad and SMP Xeon will suffer even more since it has to share a 
fixed amount of FSB/memory bandwidth amongst all CPUs. Xeons also seem 
to suffer more from context-switch storms. Go > 4GB of RAM and the Xeon 
suffers another hit due to the lack of a 64-bit IOMMU. Devices cannot 
map to addresses > 4GB which means the OS has to do extra work in 
copying data from/to > 4GB anytime you have IO. (Although this penalty 
might exist all the time in 64-bit mode for Xeon if Linux/Windows took 
the expedient and less-buggy route of using a single method versus 
checking whether target addresses are < or > 4GB.)



Jeff Frost wrote:
 On Tue, 19 Apr 2005, J. Andrew Rogers wrote:
 
 I don't know about 2.5x faster (perhaps on specific types of loads), 
 but the reason Opterons rock for database applications is their 
 insanely good memory bandwidth and latency that scales much better 
 than the Xeon.  Opterons also have a ccNUMA-esque I/O fabric and two 
 dedicated on-die memory channels *per processor* -- no shared bus 
 there, closer to real UNIX server iron than a glorified PC.
 
 
 Thanks J!  That's exactly what I was suspecting it might be.
Actually, 
 I found an anandtech benchmark that shows the Opteron coming in at
close 
 to 2.0x performance:
 
  http://www.anandtech.com/linux/showdoc.aspx?i=2163&p=2
 
 It's an Opteron 150 (2.4ghz) vs. Xeon 3.6ghz from August.  I wonder if

 the differences are more pronounced with the newer Opterons.
 
 -Jeff
 


---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [PERFORM] Opteron vs Xeon (Was: What to do with 6 disks?)

2005-04-20 Thread Bruce Momjian
Anjan Dave wrote:
 In terms of vendor specific models -
 
 Does anyone have any good/bad experiences/recommendations for a 4-way
 Opteron from Sun (v40z, 6 internal drives) or HP (DL585 5 internal
 drives) models?
 
 This is in comparison with the new Dell 6850 (it has PCIexpress, faster
 FSB 667MHz, which doesn't match up with AMD's total IO bandwidth, but
 much better than previous 6650s).

Dell cuts too many corners to be a good server.

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  pgman@candle.pha.pa.us   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])


Re: [PERFORM] When are index scans used over seq scans?

2005-04-20 Thread Richard van den Berg
Tom Lane wrote:
 The explain shows no such thing.  What is the *actual* runtime of
 each plan per EXPLAIN ANALYZE, please?

I took a simplified version of the problem (the actual query that took 6
hours joins 3 tables). With cpu_tuple_cost = 0.1:

 Nested Loop  (cost=0.00..667700310.42 rows=1035480320 width=97) (actual
time=31.468..42629.629 rows=6171334 loops=1)
   -  Seq Scan on sessions us  (cost=0.00..125756.60 rows=924536
width=105) (actual time=31.366..3293.523 rows=924536 loops=1)
   -  Index Scan using ix_du_ts on duration du  (cost=0.00..604.46
rows=1120 width=8) (actual time=0.004..0.011 rows=7 loops=924536)
 Index Cond: ((outer.starttimetrunc = du.ts) AND
(outer.finishtimetrunc = du.ts))
 Total runtime: 44337.937 ms

The explain analyze for cpu_tuple_cost = 0.01 is running now. If it
takes hours, I'll send it to the list tomorrow.

-- 
Richard van den Berg, CISSP
---
Trust Factory B.V. | www.dna-portal.net
Bazarstraat 44a|  www.trust-factory.com
2518AK The Hague   |  Phone: +31 70 3620684
The Netherlands|  Fax  : +31 70 3603009
---

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [PERFORM] Opteron vs Xeon (Was: What to do with 6 disks?)

2005-04-20 Thread Christian Sander Røsnes
On Wednesday 20 April 2005 17:50, Bruce Momjian wrote:
 Anjan Dave wrote:
  In terms of vendor specific models -
 
  Does anyone have any good/bad experiences/recommendations for a 4-way
  Opteron from Sun (v40z, 6 internal drives) or HP (DL585 5 internal
  drives) models?
 
  This is in comparison with the new Dell 6850 (it has PCIexpress, faster
  FSB 667MHz, which doesn't match up with AMD's total IO bandwidth, but
  much better than previous 6650s).

 Dell cuts too many corners to be a good server.

Hi

Which corners do Dell cut compared to the competition?

Thanks

Christian

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])


Re: [PERFORM] How to improve db performance with $7K?

2005-04-20 Thread Mohan, Ross
kewl. 

Well, an 8k request out of the PG kernel might turn into an X-KB request at
the disk/OS level, but duly noted.

Did you scan the code for this, or are you pulling this recollection from
the cognitive archives? :-)



-Original Message-
From: Jim C. Nasby [mailto:[EMAIL PROTECTED] 
Sent: Tuesday, April 19, 2005 8:12 PM
To: Mohan, Ross
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] How to improve db performance with $7K?


On Mon, Apr 18, 2005 at 06:41:37PM -, Mohan, Ross wrote:
 Don't you think optimal stripe width would be
 a good question to research the binaries for? I'd
 think that drives the answer, largely.  (uh oh, pun alert)
 
 EG, oracle issues IO requests (this may have changed _just_
 recently) in 64KB chunks, regardless of what you ask for. 
 So when I did my striping (many moons ago, when the Earth 
 was young...) I did it in 128KB widths, and set the oracle 
 multiblock read count according. For oracle, any stripe size
 under 64KB=stupid, anything much over 128K/256K=wasteful. 
 
 I am eager to find out how PG handles all this.

AFAIK PostgreSQL requests data one database page at a time (normally 8k). Of 
course the OS might do something different.
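
On servers recent enough to expose it, the compiled-in page size can be
checked directly (block_size is a read-only parameter; 8192 unless the
server was built with a custom BLCKSZ):

SHOW block_size;
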
-- 
Jim C. Nasby, Database Consultant   [EMAIL PROTECTED] 
Give your computer some brain candy! www.distributed.net Team #1828

Windows: Where do you want to go today?
Linux: Where do you want to go tomorrow?
FreeBSD: Are you guys coming, or what?

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [PERFORM] Opteron vs Xeon (Was: What to do with 6 disks?)

2005-04-20 Thread Anjan Dave
There have been some discussions on this list and others in general about 
Dell's version of RAID cards, and server support, mainly linux support.

Before I venture into having another vendor in the shop I want to know if there 
are any dos/don'ts about 4-way Opteron offerings from Sun and HP.

Don't want to put the topic on a different tangent, but I would be interested 
in the discussion of AMD Vs. XEON in terms of actual products available today.

Thanks,
Anjan

-Original Message-
From: Christian Sander Røsnes [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, April 20, 2005 12:14 PM
To: Bruce Momjian
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Opteron vs Xeon (Was: What to do with 6 disks?)

On Wednesday 20 April 2005 17:50, Bruce Momjian wrote:
 Anjan Dave wrote:
  In terms of vendor specific models -
 
  Does anyone have any good/bad experiences/recommendations for a 4-way
  Opteron from Sun (v40z, 6 internal drives) or HP (DL585 5 internal
  drives) models?
 
  This is in comparison with the new Dell 6850 (it has PCIexpress, faster
  FSB 667MHz, which doesn't match up with AMD's total IO bandwidth, but
  much better than previous 6650s).

 Dell cuts too many corners to be a good server.

Hi

Which corners do Dell cut compared to the competition?

Thanks

Christian



---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [PERFORM] Opteron vs Xeon (Was: What to do with 6 disks?)

2005-04-20 Thread Josh Berkus
Anjan,

 Does anyone have any good/bad experiences/recommendations for a 4-way
 Opteron from Sun (v40z, 6 internal drives) or HP (DL585 5 internal
 drives) models?

Last I checked, the v40z only takes 5 drives, unless you yank the cd-rom and 
get an extra disk tray.   That's the main defect of the model, the second 
being its truly phenomenal noise level.   Other than that (and price) an 
excellent Opteron machine.

The HPs are at root pretty good machines -- and take 6 drives, so I expect 
you're mixed up there.  However, they use HP's proprietary RAID controller 
which is seriously defective.   So you need to factor replacing the RAID 
controller into the cost.

 This is in comparison with the new Dell 6850 (it has PCIexpress, faster
 FSB 667MHz, which doesn't match up with AMD's total IO bandwidth, but
 much better than previous 6650s).

Yes, but you can still expect the 6650 to have 1/2 the performance ... or 
less ... of the above-named models.   It:
1) is Xeon 32-bit
2) uses a cheap northbridge which makes the Xeon's cache contention even worse
3) depending on the model and options, may ship with a cheap Adaptec raid card 
instead of an LSI or other good card

If all you *need* is 1/2 the performance of an Opteron box, and you can get a 
good deal, then go for it.  But don't be under the illusion that Dell is 
competitive with Sun, IBM, HP, Penguin or Microway on servers.


-- 
--Josh

Josh Berkus
Aglio Database Solutions
San Francisco

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [PERFORM] How to improve db performance with $7K?

2005-04-20 Thread Mohan, Ross
right, the oracle system uses a second low latency bus to 
manage locking information (at the block level) via a
distributed lock manager.  (but this is slightly different
albeit related to a clustered file system and OS-managed
locking, eg) 

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Dawid Kuroczko
Sent: Wednesday, April 20, 2005 4:56 AM
To: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] How to improve db performance with $7K?


On 4/19/05, Mohan, Ross [EMAIL PROTECTED] wrote:
 Clustered file systems is the first/best example that
 comes to mind. Host A and Host B can both request from diskfarm, eg.

Something like a Global File System?

http://www.redhat.com/software/rha/gfs/

(I believe some other company did develop it some time in the past; hmm, 
probably the guys doing LVM stuff?).

Anyway the idea is that two machines have the same filesystem mounted and they 
share it. The locking I believe is handled by communication between computers 
using host-to-host SCSI commands.

I never used it, I've only heard about it from a friend who used to work with 
it at CERN.

   Regards,
   Dawid


---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [PERFORM] How to improve db performance with $7K?

2005-04-20 Thread Alex Turner
I wonder if that's something to think about adding to Postgresql? A
setting for multiblock read count like Oracle (although having said
that I believe that Oracle natively caches pages much more
aggressively than postgresql, which allows the OS to do the file
caching).

Alex Turner
netEconomist

P.S. Oracle changed this with 9i, you can change the Database block
size on a tablespace by tablespace basis making it smaller for OLTP
tablespaces and larger for Warehousing tablespaces (at least I think
it's on a tablespace, might be on a whole DB).
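
From memory, the 9i form is indeed per tablespace and needs a matching
buffer cache configured first; the exact parameter and clause names should
be checked against the Oracle docs:

ALTER SYSTEM SET db_16k_cache_size = 64M;

CREATE TABLESPACE warehouse_ts
    DATAFILE '/u01/oradata/warehouse01.dbf' SIZE 500M
    BLOCKSIZE 16K;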

On 4/19/05, Jim C. Nasby [EMAIL PROTECTED] wrote:
 On Mon, Apr 18, 2005 at 06:41:37PM -, Mohan, Ross wrote:
  Don't you think optimal stripe width would be
  a good question to research the binaries for? I'd
  think that drives the answer, largely.  (uh oh, pun alert)
 
  EG, oracle issues IO requests (this may have changed _just_
  recently) in 64KB chunks, regardless of what you ask for.
  So when I did my striping (many moons ago, when the Earth
  was young...) I did it in 128KB widths, and set the oracle
  multiblock read count according. For oracle, any stripe size
  under 64KB=stupid, anything much over 128K/256K=wasteful.
 
  I am eager to find out how PG handles all this.
 
 AFAIK PostgreSQL requests data one database page at a time (normally
 8k). Of course the OS might do something different.
 --
 Jim C. Nasby, Database Consultant   [EMAIL PROTECTED]
 Give your computer some brain candy! www.distributed.net Team #1828
 
 Windows: Where do you want to go today?
 Linux: Where do you want to go tomorrow?
 FreeBSD: Are you guys coming, or what?
 


---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [PERFORM] Bad n_distinct estimation; hacks suggested?

2005-04-20 Thread Josh Berkus
Tom,

Any thoughts?   This is really messing up query execution all across the 
database ...

--Josh

 Here is the stats = 100 version.   Notice that n_distinct has gone down.

  schemaname |      tablename       |  attname   | null_frac | avg_width | n_distinct | correlation
 ------------+----------------------+------------+-----------+-----------+------------+-------------
  public     | web_site_activity_fa | session_id |         0 |         8 |      96107 |     0.41744

 [The row's 100-entry most_common_vals, most_common_freqs, and
 histogram_bounds arrays followed here; they are line-wrapped beyond
 recovery in the archive.]
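
For reference, the stats = 100 row above comes from commands like these (my
reconstruction; the exact statements were not shown in the post):

ALTER TABLE web_site_activity_fa ALTER COLUMN session_id SET STATISTICS 100;
ANALYZE web_site_activity_fa (session_id);

SELECT schemaname, tablename, attname, null_frac, avg_width, n_distinct,
       most_common_vals, most_common_freqs, histogram_bounds, correlation
FROM pg_stats
WHERE tablename = 'web_site_activity_fa' AND attname = 'session_id';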

-- 
--Josh

Josh Berkus
Aglio Database Solutions
San Francisco

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [PERFORM] Opteron vs Xeon (Was: What to do with 6 disks?)

2005-04-20 Thread Rod Taylor
 The HPs are at root pretty good machines -- and take 6 drives, so I expect 
 you're mixed up there.  However, they use HP's proprietary RAID controller 
 which is seriously defective.   So you need to factor replacing the RAID 
 controller into the cost.

Do you have any additional materials on what is defective with their
raid controllers?
-- 


---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [PERFORM] Opteron vs Xeon (Was: What to do with 6 disks?)

2005-04-20 Thread Mike Rylander
On 4/20/05, Anjan Dave [EMAIL PROTECTED] wrote:
 In terms of vendor specific models -
 
 Does anyone have any good/bad experiences/recommendations for a 4-way
 Opteron from Sun (v40z, 6 internal drives) or HP (DL585 5 internal
 drives) models?

We are going with the 90nm HPs for production.  They feel like
beefier boxes than the Suns, but the Suns cost a LOT less, IIRC. 
We're only using the internal drives for the OS.  PG gets access to a
fibre-channel array, HP StorageWorks 3000.  I _can't wait_ to get this
in.

Our dev box is a 130nm DL585 with 16G of RAM and an HP SCSI array, and
I have absolutely zero complaints.  :)

-- 
Mike Rylander
[EMAIL PROTECTED]
GPLS -- PINES Development
Database Developer
http://open-ils.org

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [PERFORM] Opteron vs Xeon (Was: What to do with 6 disks?)

2005-04-20 Thread Joel Fradkin
There have been some discussions on this list and others in general about
Dell's version of RAID cards, and server support, mainly linux support.

I was pretty impressed with the Dell guy. He spent the day with me remotely
and went through my system (a 6650 with PowerVault). Changed my drives from
ext3 to ext2 with no journaling, and checked all the drivers and such.

I did not see any marked improvement, but I don’t think my issues are
related to the hardware.

I am giving up on postgres and three developers two months of work and
trying MYSQL.

I have posted several items and not got a response (not that I expect folks
to drop everything). I want to thank everyone who has been of help and there
are several.

It just is running way slow on several of my views. I tried them today in
MYSQL and found that the MYSQL was beating out my MSSQL.

On certain items I could get PG to work ok, but it never was faster than
MSSQL. On certain items it is taking several minutes compared to a few
seconds on MYSQL. 

I really like the environment and feel I have learned a lot in the past few
months, but bottom line for me is speed. We bought a 30K Dell 6650 to get
better performance. I chose PG because MSSQL was 70K to license. I believe
the MYSQL will be 250.00 to license for us, but I may choose the 4k platinum
support just to feel safe about having someone to touch base with in the
event of an issue.

Again thanks to everyone who has answered my newb questions and helped me
get it on the 3 spindles and tweak the install. CommandPrompt.com was a big
help and if I wanted to budget a bunch more $ and mostly if I was at liberty
to share my database with them they may have helped me get through all the
issues. I am not sure I am walking away feeling real good about postgres,
because it just should not take a rocket scientist to get it to work, and I
used to think I was fairly smart and could figure stuff out and I hate
admitting defeat (especially since we have everything working with postgres
now).


---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


Re: [PERFORM] Joel's Performance Issues WAS : Opteron vs Xeon

2005-04-20 Thread Josh Berkus
Joel,

 I did not see any marked improvement, but I don’t think my issues are
 related to the hardware.

If you won't believe it, then we certainly can't convince you.  AFAIK your bad 
view is a bad query plan made worse by the Dell's hardware problems.

 I am giving up on postgres and three developers two months of work and
 trying MYSQL.

I'd suggest testing your *whole* application and not just this one query.  And 
remember that you need to test InnoDB tables if you want transactions.


 I have posted several items and not got a response (not that I expect folks
 to drop everything). I want to thank everyone who has been of help and
 there are several.

Hmmm ... I see about 25 responses to some of your posts on this list.   
Including ones by some of our head developers.   That's more than you'd get 
out of a paid MSSQL support contract, I know from experience.

If you want anything more, then you'll need a do-or-die contract with a 
support company. If your frustration is because you can't find this kind of 
help than I completely understand ... I have a waiting list for performance 
contracts myself.  (and, if you hired me the first thing I'd tell you is to 
junk the Dell)

 I really like the environment and feel I have learned a lot in the past few
 months, but bottom line for me is speed. We bought a 30K Dell 6650 to get
 better performance. 

Would have been smart to ask on this list *before* buying the Dell, hey?  Even 
a Google of this mailing list would have been informative.

 I chose PG because MSSQL was 70K to license. I believe 
 the MYSQL will be 250.00 to license for us, but I may choose the 4k
 platinum support just to feel safe about having some one to touch base with
 in the event of an issue.

Hmmm ... you're willing to pay MySQL $4k but expect the PG community to solve 
all your problems with free advice and a couple $100 with CMD?   I sense an 
apples vs. barca loungers comparison here ...

 I am not sure I am walking away feeling real good about
 postgres, because it just should not take a rocket scientist to get it to
 work, and I used to think I was fairly smart and could figure stuff out and
 I hate admitting defeat (especially since we have everything working with
 postgres now).

While I understand your frustration (I've been frustrated more than a few 
times with issues that stump me on Linux, for example) it's extremely unfair 
to lash out at a community that has provided you a lot of free advice because 
the advice hasn't fixed everything yet.  By my reading, you first raised your 
query issue 6 days ago.  6 days is not a lot of time for getting *free* 
troubleshooting help by e-mail. Certainly it's going to take more than 6 days 
to port to MySQL.  

-- 
--Josh

Josh Berkus
Aglio Database Solutions
San Francisco

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [PERFORM] How to improve db performance with $7K?

2005-04-20 Thread Alex Turner
Whilst I admire your purist approach, I would say that if it is
beneficial to performance that a kernel understand drive geometry,
then it is worth investigating teaching it how to deal with that!

I was referring less to the kernel than to the controller.

Lets say we invented a new protocol that included the drive telling
the controller how it was laid out at initialization time so that the
controller could make better decisions about re-ordering seeks.  It
would be more cost effective to have that set of electronics just once
in the controller, than 8 times on each drive in an array, which would
yield a better performance to cost ratio.  Therefore I would suggest it
is something that should be investigated.  After all, why implement
TCQ on each drive, if it can be handled more efficiently at the other
end by the controller for less money?!

Alex Turner
netEconomist

On 4/19/05, Dave Held [EMAIL PROTECTED] wrote:
  -Original Message-
  From: Alex Turner [mailto:[EMAIL PROTECTED]
  Sent: Monday, April 18, 2005 5:50 PM
  To: Bruce Momjian
  Cc: Kevin Brown; pgsql-performance@postgresql.org
  Subject: Re: [PERFORM] How to improve db performance with $7K?
 
  Does it really matter at which end of the cable the queueing is done
  (Assuming both ends know as much about drive geometry etc..)?
  [...]
 
 The parenthetical is an assumption I'd rather not make.  If my
 performance depends on my kernel knowing how my drive is laid
 out, I would always be wondering if a new drive is going to
 break any of the kernel's geometry assumptions.  Drive geometry
 doesn't seem like a kernel's business any more than a kernel
 should be able to decode the ccd signal of an optical mouse.
 The kernel should queue requests at a level of abstraction that
 doesn't depend on intimate knowledge of drive geometry, and the
 drive should queue requests on the concrete level where geometry
 matters.  A drive shouldn't guess whether a process is trying to
 read a file sequentially, and a kernel shouldn't guess whether
 sector 30 is contiguous with sector 31 or not.
 
 __
 David B. Held
 Software Engineer/Array Services Group
 200 14th Ave. East,  Sartell, MN 56377
 320.534.3637 320.253.7800 800.752.8129
 


---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [PERFORM] How to improve db performance with $7K?

2005-04-20 Thread Dave Held
 -Original Message-
 From: Alex Turner [mailto:[EMAIL PROTECTED]
 Sent: Wednesday, April 20, 2005 12:04 PM
 To: Dave Held
 Cc: pgsql-performance@postgresql.org
 Subject: Re: [PERFORM] How to improve db performance with $7K?
 
 [...]
  Lets say we invented a new protocol that included the drive telling
  the controller how it was laid out at initialization time so that the
  controller could make better decisions about re-ordering seeks.  It
  would be more cost effective to have that set of electronics just once
  in the controller, than 8 times on each drive in an array, which would
  yield a better performance to cost ratio. 

Assuming that a single controller would be able to service 8 drives 
without delays.  The fact that you want the controller to have fairly
intimate knowledge of the drives implies that this is a semi-soft 
solution requiring some fairly fat hardware compared to firmware that is
hard-wired for one drive.  Note that your controller has to be 8x as fast
as the on-board drive firmware.  There's definitely a balance there, and 
it's not entirely clear to me where the break-even point is.

 Therefore I would suggest it is something that should be investigated. 
 After all, why implement TCQ on each drive, if it can be handled more
 efficiently at the other end by the controller for less money?!

Because it might not cost less. ;)  However, I can see where you might 
want the controller to drive the actual hardware when you have a RAID
setup that requires synchronized seeks, etc.  But in that case, it's 
doing one computation for multiple drives, so there really is a win.

__
David B. Held
Software Engineer/Array Services Group
200 14th Ave. East,  Sartell, MN 56377
320.534.3637 320.253.7800 800.752.8129

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster


Re: [PERFORM] Opteron vs Xeon (Was: What to do with 6 disks?)

2005-04-20 Thread Greg Stark
Josh Berkus josh@agliodbs.com writes:

 Last I checked, the v40z only takes 5 drives, unless you yank the cd-rom and 
 get an extra disk tray.   That's the main defect of the model, the second 
 being its truly phenomenal noise level.   Other than that (and price) an 
 excellent Opteron machine.

Incidentally, Sun sells a bunch of v20z and v40z machines on Ebay as some kind
of marketing strategy. You can pick one up for only a slightly absurd price if
you're happy with the configurations listed there. (And if you're in the US).

-- 
greg


---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [PERFORM] Sort and index

2005-04-20 Thread Andrei Gaspar
Michael Fuhr wrote:
On Mon, Apr 18, 2005 at 10:44:43AM -0500, Dave Held wrote:
 

I thought that an index can be used for sorting.
I'm a little confused about the following result:
create index OperationsName on Operations(cOperationName);
explain SELECT * FROM Operations ORDER BY cOperationName;
 QUERY PLAN
---------------------------------------------------------------------
Sort  (cost=185.37..189.20 rows=1532 width=498)
  Sort Key: coperationname
  ->  Seq Scan on operations  (cost=0.00..104.32 rows=1532 width=498)
(3 rows)
Is this supposed to be so?
 

Since you are fetching the entire table, you are touching all the rows.
If the query were to fetch the rows in index order, it would be seeking
all over the table's tracks.  By fetching in sequence order, it has a
much better chance of fetching rows in a way that minimizes head seeks.
Since disk I/O is generally 10-100x slower than RAM, the in-memory sort 
can be surprisingly slow and still beat indexed disk access.  Of course,
this is only true if the table can fit and be sorted entirely in memory
(which, with 1500 rows, probably can).
   

Out of curiosity, what are the results of the following queries?
(Queries run twice to make sure time differences aren't due to
caching.)
SET enable_seqscan TO on;
SET enable_indexscan TO off;
EXPLAIN ANALYZE SELECT * FROM Operations ORDER BY cOperationName;
EXPLAIN ANALYZE SELECT * FROM Operations ORDER BY cOperationName;
SET enable_seqscan TO off;
SET enable_indexscan TO on;
EXPLAIN ANALYZE SELECT * FROM Operations ORDER BY cOperationName;
EXPLAIN ANALYZE SELECT * FROM Operations ORDER BY cOperationName;
SELECT version();
With 1500 rows of random data, I consistently see better performance
with an index scan (about twice as fast as a sequence scan), and
the planner uses an index scan if it has a choice (i.e., when
enable_seqscan and enable_indexscan are both on).  But my test case
and postgresql.conf settings might be different enough from yours
to account for different behavior.
 

Here is the output from the statements above. I know the times seem too 
small to care about, but what triggered my question is the fact that in the 
logs there are a lot of lines like the following (I replaced the list of 43 
fields with *).
I use ODBC (8.0.1.1), and changing the application to cache the table 
isn't feasible.

2005-04-19 10:07:05 LOG:  duration: 937.000 ms  statement: PREPARE 
_PLAN35b0068 as SELECT * FROM Operations ORDER BY 
cOperationName;EXECUTE _PLAN35b0068
2005-04-19 10:07:09 LOG:  duration: 1344.000 ms  statement: PREPARE 
_PLAN35b0068 as SELECT * FROM Operations ORDER BY 
cOperationName;EXECUTE _PLAN35b0068
2005-04-19 10:07:15 LOG:  duration: 1031.000 ms  statement: PREPARE 
_PLAN35b0068 as SELECT * FROM Operations ORDER BY 
cOperationName;EXECUTE _PLAN35b0068
2005-04-19 10:07:19 LOG:  duration: 734.000 ms  statement: PREPARE 
_PLAN35b0068 as SELECT * FROM Operations ORDER BY 
cOperationName;EXECUTE _PLAN35b0068

The times reported by explain analyze are quite small though; the intervals 
reported in pg_log are closer to reality.

tkp=# SET enable_seqscan TO on;
SET
tkp=# SET enable_indexscan TO off;
SET
tkp=# EXPLAIN ANALYZE SELECT * FROM Operations ORDER BY cOperationName;
 QUERY PLAN
--
Sort  (cost=185.37..189.20 rows=1532 width=498) (actual 
time=235.000..235.000 rows=1532 loops=1)
  Sort Key: coperationname
  ->  Seq Scan on operations  (cost=0.00..104.32 rows=1532 width=498) 
(actual time=0.000..124.000 rows=1532 loops=1)
Total runtime: 267.000 ms
(4 rows)

tkp=# EXPLAIN ANALYZE SELECT * FROM Operations ORDER BY cOperationName;
QUERY PLAN

Sort  (cost=185.37..189.20 rows=1532 width=498) (actual 
time=16.000..16.000 rows=1532 loops=1)
  Sort Key: coperationname
  ->  Seq Scan on operations  (cost=0.00..104.32 rows=1532 width=498) 
(actual time=0.000..0.000 rows=1532 loops=1)
Total runtime: 31.000 ms
(4 rows)

tkp=#
tkp=# SET enable_seqscan TO off;
SET
tkp=# SET enable_indexscan TO on;
SET
tkp=# EXPLAIN ANALYZE SELECT * FROM Operations ORDER BY cOperationName;
 QUERY PLAN
---
Index Scan using operationsname on operations  (cost=0.00..350.01 
rows=1532 width=498) (actual time=16.000..62.000 rows=1532 loops=1)
Total runtime: 62.000 ms
(2 rows)

tkp=# EXPLAIN ANALYZE SELECT * FROM Operations ORDER BY cOperationName;
 QUERY PLAN

Re: [PERFORM] How to improve db performance with $7K?

2005-04-20 Thread Mohan, Ross
Alex et al., 


I wonder if that's something to think about adding to Postgresql? A setting for 
multiblock read count like Oracle (although 

||  I would think so, yea. GMTA: I was just having this micro-chat with Mr. Jim 
Nasby. 

having said that I believe that Oracle natively caches pages much more 
aggressively than postgresql, which allows the OS to do the file caching).

||  Yea...and it can rely on what is likely a lot more robust and nuanced 
caching algorithm, but...I don't know enough (read: anything) about PG's 
to back that comment up. 


Alex Turner
netEconomist

P.S. Oracle changed this with 9i, you can change the Database block size on a 
tablespace by tablespace basis making it smaller for OLTP tablespaces and 
larger for Warehousing tablespaces (at least I think it's on a tablespace, 
might be on a whole DB).

||Yes, it's tspace level. 



On 4/19/05, Jim C. Nasby [EMAIL PROTECTED] wrote:
 On Mon, Apr 18, 2005 at 06:41:37PM -, Mohan, Ross wrote:
  Don't you think optimal stripe width would be
  a good question to research the binaries for? I'd
  think that drives the answer, largely.  (uh oh, pun alert)
 
  EG, oracle issues IO requests (this may have changed _just_
  recently) in 64KB chunks, regardless of what you ask for. So when I 
  did my striping (many moons ago, when the Earth was young...) I did 
  it in 128KB widths, and set the oracle multiblock read count 
  according. For oracle, any stripe size under 64KB=stupid, anything 
  much over 128K/256K=wasteful.
 
  I am eager to find out how PG handles all this.
 
 AFAIK PostgreSQL requests data one database page at a time (normally 
 8k). Of course the OS might do something different.
 --
 Jim C. Nasby, Database Consultant   [EMAIL PROTECTED]
 Give your computer some brain candy! www.distributed.net Team #1828
 
 Windows: Where do you want to go today?
 Linux: Where do you want to go tomorrow?
 FreeBSD: Are you guys coming, or what?
 


---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


[PERFORM] postgres slowdown question

2005-04-20 Thread Shachindra Agarwal
Dear Postgres Masters:

We are using postgres 7.4 in our java application on RedHat linux. The
Java application connects to Postgres via JDBC. The application goes
through a discovery phase, wherein it adds a large amount of data into
postgres. Typically, we are adding about a million records in various
tables. The application also issues multiple queries to the database at
the same time. We do not delete any records during the discovery phase.
Both the java application and the postgres are installed on the same
machine.

At the beginning, the application is able to add on the order of 100
records per minute. Gradually (after several hours), it slows down to
less than 10 records per minute. At this time, postgres processes take
between 80-99% of CPU. When we reindex the database, the speed bumps up
to about 30 records per minute. Now, postgres server takes between
50-70% CPU.

We have the following in the postgresql.conf:

max_fsm_pages = 50
fsync = false

We certainly can not live with this kind of performance. I believe
postgres should be able to handle much larger datasets but I can not
point my finger as to what we are doing wrong. Can somebody please point
me in the right direction.

With kind regards,

-- Shachindra Agarwal.


Re: [PERFORM] How to tell what your postgresql server is doing

2005-04-20 Thread Tambet Matiisen
Stats are updated only after transaction ends. In case you have a really
long transaction you need something else. 

To help myself I made a little Perl utility to parse strace output. It
recognizes read/write calls, extracts file handle, finds the file name
using information in /proc filesystem, then uses oid2name utility to
translate file name to PostgreSQL relation name. See attachment.

It works well enough for me, but I didn't take time to polish it.
Basically it works with Linux /proc filesystem layout, expects
PostgreSQL data directory to be /home/postgres/data and oid2name in
/usr/lib/postgresql/bin. Usage is pgtrace pid.

  Tambet
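
(An aside for the archives: the numeric part of a data file's name is the
relation's relfilenode, so a read/write seen by strace can also be mapped
back to a table with a plain catalog query. The filenode below is made up.)

SELECT relname, relkind
FROM pg_class
WHERE relfilenode = 16384;   -- the number from a path like base/17142/16384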

 -Original Message-
 From: Jeff Frost [mailto:[EMAIL PROTECTED] 
 Sent: Wednesday, April 20, 2005 7:45 AM
 To: pgsql-performance@postgresql.org
 Subject: How to tell what your postgresql server is doing
 
 
 Is there a way to look at the stats tables and tell what is 
 jamming up your 
 postgres server the most?  Other than seeing long running 
 queries and watch 
 top, atop, iostat, vmstat in separate xterms...I'm wondering 
 if postgres keeps 
 some stats on what it spends the most time doing or if 
 there's a way to 
 extract that sort of info from other metrics it keeps in the 
 stats table?
 
 Maybe a script which polls the stats table and correlates the 
 info with stats 
 about the system in /proc?
 
 -- 
 Jeff Frost, Owner [EMAIL PROTECTED]
 Frost Consulting, LLC http://www.frostconsultingllc.com/
 Phone: 650-780-7908   FAX: 650-649-1954
 


pgtrace
Description: pgtrace

---(end of broadcast)---
TIP 6: Have you searched our list archives?

   http://archives.postgresql.org


Re: [PERFORM] Disk filling, CPU filling, renegade inserts and

2005-04-20 Thread Rod Taylor
 I'm having a pretty serious problem with postgresql's performance.  
 Currently, I have a cron task that is set to restart and vacuumdb -faz 
 every six hours.  If that doesn't happen, the disk goes from 10% full 
 to 95% full within 2 days (and it's a 90GB disk...with the database 
 being a 2MB download after dump), and the CPU goes from running at 
 around a 2% load to a 99+% load right away (the stats look like a 
 square wave).

Are you running frequent queries which use temporary tables?


-- 


---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [PERFORM] Joel's Performance Issues WAS : Opteron vs Xeon

2005-04-20 Thread Joel Fradkin
Sorry if you feel I am lashing out at a community.
Just to say it again, I am very appreciative of all the help everyone has
supplied.

I am running on more than just the 4 proc Dell (in fact my tests have been
mostly on desktops).

I have MSSQL running on a 2 proc dell which, until my load increased
(over approx. 2 years), was just fine. I totally agree that there are better
solutions based on this list's comments, but I have all Dell hardware now and
resist trying different vendors just to suit Postgres. I was under the
impression there were still issues with 64bit postgres and Linux (or at
least there were when I purchased). I believed I could make my next acquisition
Opteron-based hardware.

Again I am not at all trying to criticize anyone, so please accept my
apology if I somehow came across with that attitude. I am very disappointed
at this point. My views may not be that great (although I am not saying that
either), but they run ok on MSSQL and appear to run ok on MYSQL.

I wish I did understand what I am doing wrong because I do not wish to
revisit engineering our application for MYSQL.

I would have spent more $ with Command, but he does need my database to help
me and I am not able to do that.

I agree testing the whole app is the only way to see, and unfortunately it is
a time-consuming bit. I do not have to spend 4k on MYSQL; that is only if I want
to have their premium support. I can spend $250.00 a server for the
commercial license if I find the whole app does run well. I just loaded the
data last night and only had time to convert one view this morning. I am
sure it is something I do not understand and not a problem with postgres. I
also am willing to take time to get more knowledgeable, but my time is
running out and I feel honestly stupid.

I have been in the process of converting for over two months and have said
several times these lists are a godsend. 

It was never my intention to make you feel like I was flaming anyone
involved. On the contrary, I feel many have taken time to look at my
questions and given excellent advice. I know I check the archives so
hopefully that time will help others after me. 

I may yet find that MYSQL is not a good fit as well. I have my whole app
converted at this point and find pg works well for a lot of my usage.  

There are some key reporting views that need to retrieve many rows with many
joins that just take too long to pull the data. I told my boss just now that
if I de-normalize many of these data sets (like 6 main groups of data)
the reporting may work, but as is, many of my web pages are timing out
(these are pages that still work on MSSQL and the 2 proc machine).

Thanks again for all the help and know I truly appreciate what time every
one has spent on my issues.

I may find that revisiting the datasets is a way to make PG work, or as you
mentioned maybe I can get someone with more knowledge to step in locally. I
did ask Tom if he knew of anyone; maybe someone else on the list is aware
of a professional in the Tampa FL area.

Realistically I don't think a 30k$ Dell is something that needs to be
junked. I am pretty sure if I got MSSQL running on it, it would outperform
my two proc box. I can agree it may not have been the optimal platform. My
decision is not based solely on the performance on the 4 proc box.

Joel Fradkin
 

-Original Message-
From: Josh Berkus [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, April 20, 2005 1:54 PM
To: Joel Fradkin
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Joel's Performance Issues WAS : Opteron vs Xeon

Joel,

 I did not see any marked improvement, but I don't think my issues are
 related to the hardware.

If you won't believe it, then we certainly can't convince you.  AFAIK your
bad 
view is a bad query plan made worse by the Dell's hardware problems.

 I am giving up on postgres and three developers' two months of work, and
 trying MYSQL.

I'd suggest testing your *whole* application and not just this one query.
And 
remember that you need to test InnoDB tables if you want transactions.


 I have posted several items and not got a response (not that I expect
folks
 to drop everything). I want to thank everyone who has been of help and
 there are several.

Hmmm ... I see about 25 responses to some of your posts on this list.   
Including ones by some of our head developers.   That's more than you'd get 
out of a paid MSSQL support contract, I know from experience.

If you want anything more, then you'll need a do-or-die contract with a 
support company. If your frustration is because you can't find this kind of 
help than I completely understand ... I have a waiting list for performance 
contracts myself.  (and, if you hired me the first thing I'd tell you is to 
junk the Dell)

 I really like the environment and feel I have learned a lot in the past
few
 months, but bottom line for me is speed. We bought a 30K Dell 6650 to get
 better performance. 

Would have been smart to ask on this 

Re: [PERFORM] Joel's Performance Issues WAS : Opteron vs Xeon

2005-04-20 Thread Josh Berkus
Joel,

 I have MSSQL running on a 2 proc dell which, until my load increased
 (over approx. 2 years), was just fine. I totally agree that there are better
 solutions based on this list's comments, but I have all Dell hardware now
 and resist trying different vendors just to suit Postgres. I was under the
 impression there were still issues with 64bit postgres and Linux (or at
 least there were when I purchased). I believed I could make my next
 acquisition Opteron-based hardware.

Yeah, sorry, the Dell stuff is a sore point with me.   You can't imagine the 
number of conversations I have that go like this:
We're having a severe performance problem with PostgreSQL
What hardware/OS are you using?
Dell *650 with RHAS 3.0 

BTW, which Update version is your RHAS?   If you're on Update3, you can grab 
more performance right there by upgrading to Update4.

 Again I am not at all trying to criticize anyone, so please accept my
 apology if I somehow came across with that attitude. I am very
 disappointed at this point. My views may not be that great (although I am
 not saying that either), but they run ok on MSSQL and appear to run ok on
 MYSQL.

Yeah.  I think you'll find a few things that are vice-versa.   For that 
matter, I can point to a number of queries we run better than Oracle, and a 
number we don't.

Your particular query problem seems to stem from some bad estimates.   Can you 
post an EXPLAIN ANALYZE based on all the advice people have given you so far?

 I wish I did understand what I am doing wrong because I do not wish to
 revisit engineering our application for MYSQL.

I can imagine.  

 I would have spent more $ with Command, but he does need my database to help
 me and I am not able to do that.

Yes.  For that matter, it'll take longer to troubleshoot on this list because 
of your security concerns.

 I agree testing the whole app is the only way to see, and unfortunately it
 is a time-consuming bit. I do not have to spend 4k on MYSQL; that is only if
 I want to have their premium support. I can spend $250.00 a server for the
 commercial license if I find the whole app does run well. I just loaded the
 data last night and only had time to convert one view this morning. I am
 sure it is something I do not understand and not a problem with postgres. I
 also am willing to take time to get more knowledgeable, but my time is
 running out and I feel honestly stupid.

You're not.  You have a real query problem and it will require further 
troubleshooting to solve.  Some of us make a pretty handsome living solving 
these kinds of problems; it takes a lot of expert knowledge.

 It was never my intention to make you feel like I was flaming anyone
 involved. On the contrary, I feel many have taken time to look at my
 questions and given excellent advice. I know I check the archives so
 hopefully that time will help others after me.

Well, I overreacted too.   Sorry!

 I may find that revisiting the datasets is a way to make PG work, or as you
 mentioned maybe I can get someone with more knowledge to step in locally.
 I did ask Tom if he knew of anyone; maybe someone else on the list is
 aware of a professional in the Tampa FL area.

Well, Robert Treat is in Florida but I'm pretty sure he's busy full-time.

 Realistically I don't think a 30k$ Dell is something that needs to be
 junked. I am pretty sure if I got MSSQL running on it, it would outperform
 my two proc box. I can agree it may not have been the optimal platform. My
 decision is not based solely on the performance on the 4 proc box.

Oh, certainly it's too late to buy a Sunfire or eServer instead.   You just 
could have gotten far more bang for the buck with some expert advice, that's 
all.   But don't bother with Dell support any further, they don't really have 
the knowledge to help you.

So ... new EXPLAIN ANALYZE ?

-- 
--Josh

Josh Berkus
Aglio Database Solutions
San Francisco

---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [PERFORM] Joel's Performance Issues WAS : Opteron vs Xeon

2005-04-20 Thread Anjan Dave
He is running RHAS4, which is the latest 2.6.x kernel from RH. I believe
it should have done away with the RHAS3.0 Update 3 IO issue.

anjan


---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]


Re: [PERFORM] Disk filling, CPU filling, renegade inserts and

2005-04-20 Thread Richard Plotkin
No, I don't think so. I don't think there are any temp table queries 
(and I'll check), but even if there are, site traffic is very low, and 
queries would be very infrequent.

On Apr 20, 2005, at 12:36 PM, Rod Taylor wrote:
I'm having a pretty serious problem with postgresql's performance.
Currently, I have a cron task that is set to restart and vacuumdb -faz
every six hours.  If that doesn't happen, the disk goes from 10% full
to 95% full within 2 days (and it's a 90GB disk...with the database
being a 2MB download after dump), and the CPU goes from running at
around a 2% load to a 99+% load right away (the stats look like a
square wave).
Are you running frequent queries which use temporary tables?

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?
  http://www.postgresql.org/docs/faq


Re: [PERFORM] Joel's Performance Issues WAS : Opteron vs Xeon

2005-04-20 Thread John A Meinel
Joel Fradkin wrote:
...
I would have spent more $ with Command, but he does need my database to help
me and I am not able to do that.

...
What if someone were to write an anonymization script. Something that
changes any of the data of the database, but leaves all of the
relational information. It could turn all strings into some sort of
hashed version, so you don't give out any identifiable information.
It could even modify relational entries, as long as it updated both
ends, and this didn't affect the actual performance at all.
I don't think this would be very hard to write. Especially if you can
give a list of the tables, and what columns need to be modified.
Probably this would generally be a useful script to have for cases like
this where databases are confidential, but need to be tuned by someone else.
Would that be reasonable?
I would think that by renaming columns, hashing the data in the columns,
and renaming tables, most of the proprietary information is removed,
without removing the database information.
John
=:-
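
A minimal sketch of that hashing idea, using the built-in md5() function
(present since 7.4); the table and column names here are invented:

-- md5() is deterministic, so equal values stay equal and joins on the
-- hashed column still match; only the readable content is destroyed
UPDATE customers SET name = md5(name), city = md5(city);
-- text columns used as join keys must be hashed on both ends
UPDATE orders SET customer_name = md5(customer_name);

Row counts, numeric keys and the number of distinct values all survive,
which is what the planner cares about.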




Re: [PERFORM] Disk filling, CPU filling, renegade inserts and deletes?

2005-04-20 Thread Richard Plotkin
As a follow-up, I've found a function that used the following code:
CREATE TEMPORARY TABLE results
(nOrder integer,
page_id integer,
name text)
WITHOUT OIDS
ON COMMIT DROP;
I would assume that the WITHOUT OIDS would be part of the source of 
the problem, so I've commented it out.
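
(An aside for anyone finding this in the archives: heavy temp-table churn
can also bloat the system catalogs themselves. A generic probe -- not
something diagnosed in this thread -- would be:)

-- relpages is in 8KB blocks; watch these grow between vacuums
SELECT relname, relpages
FROM pg_class
WHERE relname IN ('pg_class', 'pg_attribute', 'pg_depend')
ORDER BY relpages DESC;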

On Apr 20, 2005, at 12:36 PM, Rod Taylor wrote:
I'm having a pretty serious problem with postgresql's performance.
Currently, I have a cron task that is set to restart and vacuumdb -faz
every six hours.  If that doesn't happen, the disk goes from 10% full
to 95% full within 2 days (and it's a 90GB disk...with the database
being a 2MB download after dump), and the CPU goes from running at
around a 2% load to a 99+% load right away (the stats look like a
square wave).
Are you running frequent queries which use temporary tables?
--

---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
 joining column's datatypes do not match


Re: [PERFORM] postgres slowdown question

2005-04-20 Thread John A Meinel
Shachindra Agarwal wrote:
Dear Postgres Masters:
We are using postgres 7.4 in our java application on RedHat linux. The 
Java application connects to Postgres via JDBC. The application goes 
through a discovery phase, wherein it adds a large amount of data into 
postgres. Typically, we are adding about a million records in various 
tables. The application also issues multiple queries to the database 
at the same time. We do not delete any records during the discovery 
phase. Both the java application and postgres are installed on the 
same machine.

At the beginning, the application is able to add on the order of 100 
records per minute. Gradually (after several hours), it slows down to 
less than 10 records per minute. At this time, postgres processes take 
between 80-99% of CPU. When we reindex the database, the speed bumps 
up to about 30 records per minute. Now, the postgres server takes between 
50-70% CPU.

We have the following in the postgresql.conf :
max_fsm_pages = 50
fsync = false
We certainly can not live with this kind of performance. I believe 
postgres should be able to handle much larger datasets but I can not 
point my finger as to what are we doing wrong. Can somebody please 
point me to the right direction.

With kind regards,
-- Shachindra Agarwal.
A few questions first. How are you loading the data? Are you using 
INSERT or COPY? Are you using a transaction, or are you autocommitting 
each row?

You really need a transaction, and preferably use COPY. Both can help 
performance a lot. (In some of the tests, single row inserts can be 
10-100x slower than doing it in bulk.)
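
To make that concrete, a rough sketch of both shapes (the table name is
invented; COPY FROM a file runs on the server and historically needs
superuser, while psql's \copy reads a client-side file instead):

-- many inserts, one commit
BEGIN;
INSERT INTO discovery_log (id, msg) VALUES (1, 'found host a');
INSERT INTO discovery_log (id, msg) VALUES (2, 'found host b');
-- ... the rest of the batch ...
COMMIT;

-- or, usually much faster for bulk loads, a single COPY
COPY discovery_log (id, msg) FROM '/tmp/discovery_log.dat';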

Also, it sounds like you have a foreign key issue. That as things fill 
up, the foreign key reference checks are slowing you down.
Are you using ANALYZE as you go? A lot of times when you only have 1000 
rows a sequential scan is faster than using an index, and if you don't 
inform postgres that you have more rows, it might still use the old seqscan.
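
Something along these lines, run periodically during the load (the table
names are placeholders):

-- plain ANALYZE only samples rows, so it is cheap enough to run from
-- cron while the load is going
ANALYZE parent_table;
ANALYZE child_table;
-- or simply the whole database
ANALYZE;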

There are other possibilities, but it would be nice to know about your 
table layout, and possibly an EXPLAIN ANALYZE of the inserts that are 
going slow.

John
=:-
PS I don't know if JDBC supports COPY, but it certainly should support 
transactions.






Re: [PERFORM] Disk filling, CPU filling, renegade inserts and deletes?

2005-04-20 Thread Tom Lane
Richard Plotkin [EMAIL PROTECTED] writes:
 I'm having a pretty serious problem with postgresql's performance.  
 Currently, I have a cron task that is set to restart and vacuumdb -faz 
 every six hours.  If that doesn't happen, the disk goes from 10% full 
 to 95% full within 2 days (and it's a 90GB disk...with the database 
 being a 2MB download after dump), and the CPU goes from running at 
 around a 2% load to a 99+% load right away (the stats look like a 
 square wave).

Q: what have you got the FSM parameters set to?

Q: what exactly is bloating?  Without knowing which tables or indexes
are growing, it's hard to speculate about the exact causes.  Use du and
oid2name, or look at pg_class.relpages after a plain VACUUM.
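
For instance (only the LIMIT is arbitrary):

SELECT relname, relkind, relpages
FROM pg_class
ORDER BY relpages DESC
LIMIT 10;

relkind shows whether a bloated object is a table ('r') or an index ('i').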

It's likely that the real answer is you need to vacuum more often
than every six hours, but I'm trying not to jump to conclusions.

regards, tom lane

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [PERFORM] Joel's Performance Issues WAS : Opteron vs Xeon

2005-04-20 Thread Vivek Khera
On Apr 20, 2005, at 4:22 PM, Josh Berkus wrote:
Realistically I don't think a 30k$ Dell is a something that needs to 
be
junked. I am pretty sure if I got MSSQL running on it, it would 
outperform
my two proc box. I can agree it may not have been the optimal 
platform. My
decision is not based solely on the performance on the 4 proc box.
Oh, certainly it's too late to buy a Sunfire or eServer instead.   You 
just
could have gotten far more bang for the buck with some expert advice, 
that's
all.   But don't bother with Dell support any further, they don't 
really have
the knowledge to help you.

FWIW, I have a $20k Dell box (PE2650 with 14-disk external PowerVault 
RAID enclosure) which I'm phasing out for a dual opteron box because it 
can't handle the load.  It will be re-purposed as a backup system.  
Damn waste of money, but complaining customers can cost more...

Trust me, it is likely your Dell hardware, as moving to the Opteron 
system has improved performance tremendously with fewer disks.  Same 
amount of RAM and other basic configurations.  Both have LSI based RAID 
cards, even.

---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
 joining column's datatypes do not match


Re: [PERFORM] Disk filling, CPU filling, renegade inserts and deletes?

2005-04-20 Thread Richard Plotkin
Hi Tom,
Q: what have you got the FSM parameters set to?
Here's what's in postgresql.conf -- FSM at default settings.
# - Memory -
shared_buffers = 30400  # min 16, at least max_connections*2, 8KB each
work_mem = 32168# min 64, size in KB
#maintenance_work_mem = 16384   # min 1024, size in KB
#max_stack_depth = 2048 # min 100, size in KB

# - Free Space Map -
#max_fsm_pages = 20000  # min max_fsm_relations*16, 6 bytes each
#max_fsm_relations = 1000   # min 100, ~50 bytes each
# - Kernel Resource Usage -
max_files_per_process = 750 #1000   # min 25
#preload_libraries = ''

Q: what exactly is bloating?  Without knowing which tables or indexes
are growing, it's hard to speculate about the exact causes.  Use du and
oid2name, or look at pg_class.relpages after a plain VACUUM.
This I do not know.  I've disabled the cron jobs and will let the 
system bloat, then I will gather statistics (I'll give it 12-24 hours).

It's likely that the real answer is you need to vacuum more often
than every six hours, but I'm trying not to jump to conclusions.
That could be it, except that I would expect the problem to then look 
more like a gradual increase in CPU usage and a gradual increase in use 
of disk space.  Mine could be an invalid assumption, but the system 
here looks like it goes from no problem to 100% problem within a 
minute.

Thanks again!
Richard
---(end of broadcast)---
TIP 9: the planner will ignore your desire to choose an index scan if your
 joining column's datatypes do not match


Re: [PERFORM] Slow copy with little CPU/disk usage

2005-04-20 Thread Jim C. Nasby
No, this is a single process. And there are known issues with context-switch
storms on Xeons, so that might be what you're seeing.

On Tue, Apr 19, 2005 at 09:37:21PM -0700, Mischa Sandberg wrote:
 Quoting Tom Lane [EMAIL PROTECTED]: 
  
  Jim C. Nasby [EMAIL PROTECTED] writes: 
   A friend of mine has an application where he's copying in 4000 rows at a 
   time into a table that has about 4M rows. Each row is 40-50 bytes. This 
   is taking 25 seconds on a dual PIII-1GHz with 1G of RAM and a 2 disk 
   SATA mirror, running FBSD 4.10-stable. There's one index on the table. 
   
  If there's no hidden costs such as foreign key checks, that does seem 
  pretty dang slow. 
   
   What's really odd is that neither the CPU or the disk are being 
   hammered. The box appears to be pretty idle; the postgresql proces is 
   using 4-5% CPU. 
 --  
 This sounds EXACTLY like my problem, if you make the box a Xeon 2.4GHz, 2GB
 RAM ... with two SCSI drives (xlog and base); loading 10K rows of about 200
 bytes each; takes about 20 secs at the best, and much longer at the worst. By
 any chance does your friend have several client machines/processes trying to
 mass-load rows at the same time? Or at least some other processes updating
 that table in a bulkish way? What I get is low diskio, low cpu, even low
 context-switches ... and I'm betting he should take a look at pg_locks. For my
 own problem, I gather that an exclusive lock is necessary while updating
 indexes and heap, and the multiple processes doing the update can make that
 pathological.
 Anyway, have your friend check pg_locks. 
  
  
 Dreams come true, not free. -- S.Sondheim, ITW 
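
(For reference, checking for ungranted lock requests is a one-liner
against the standard pg_locks view:)

SELECT pid, relation, mode, granted
FROM pg_locks
WHERE NOT granted;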
 

-- 
Jim C. Nasby, Database Consultant   [EMAIL PROTECTED] 
Give your computer some brain candy! www.distributed.net Team #1828

Windows: Where do you want to go today?
Linux: Where do you want to go tomorrow?
FreeBSD: Are you guys coming, or what?

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [PERFORM] Slow copy with little CPU/disk usage

2005-04-20 Thread Jim C. Nasby
No, he's using either COPY or \COPY.

On Wed, Apr 20, 2005 at 12:34:27AM -0400, Greg Stark wrote:
 Jim C. Nasby [EMAIL PROTECTED] writes:
 
  What's really odd is that neither the CPU or the disk are being
  hammered. The box appears to be pretty idle; the postgresql proces is
  using 4-5% CPU.
 
 Is he committing every row? In that case you would see fairly low i/o
 bandwidth usage because most of the time is being spent seeking and waiting
 for rotational latency.
 
 -- 
 greg
 

-- 
Jim C. Nasby, Database Consultant   [EMAIL PROTECTED] 
Give your computer some brain candy! www.distributed.net Team #1828

Windows: Where do you want to go today?
Linux: Where do you want to go tomorrow?
FreeBSD: Are you guys coming, or what?

---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [PERFORM] postgres slowdown question

2005-04-20 Thread Shachindra Agarwal
Thanks for the note. Please see my responses below:

-Original Message-
From: John A Meinel [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, April 20, 2005 3:48 PM
To: Shachindra Agarwal
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] postgres slowdown question

Shachindra Agarwal wrote:

 Dear Postgres Masters:

 We are using postgres 7.4 in our java application on RedHat linux. The
 Java application connects to Postgres via JDBC. The application goes
 through a 'discovery' phase, wherein it adds a large amount of data into
 postgres. Typically, we are adding about a million records in various
 tables. The application also issues multiple queries to the database
 at the same time. We do not delete any records during the discovery
 phase. Both the java application and postgres are installed on the
 same machine.

 At the beginning, the application is able to add on the order of 100
 records per minute. Gradually (after several hours), it slows down to
 less than 10 records per minute. At this time, postgres processes take
 between 80-99% of CPU. When we reindex the database, the speed bumps
 up to about 30 records per minute. Now, the postgres server takes between
 50-70% CPU.

 We have the following in the postgresql.conf:

 max_fsm_pages = 50

 fsync = false

 We certainly can not live with this kind of performance. I believe
 postgres should be able to handle much larger datasets but I can not
 point my finger as to what we are doing wrong. Can somebody please
 point me in the right direction.

 With kind regards,

 -- Shachindra Agarwal.

A few questions first. How are you loading the data? Are you using 
INSERT or COPY? Are you using a transaction, or are you autocommitting 
each row?

You really need a transaction, and preferably use COPY. Both can help 
performance a lot. (In some of the tests, single row inserts can be 
10-100x slower than doing it in bulk.)

 We are using JDBC which supports 'inserts' and 'transactions'. We are
using both. The business logic adds one business object at a time. Each
object is added within its own transaction. Each object add results in 5
records in various tables in the database. So, a commit is performed
after every 5 inserts.

Also, it sounds like you have a foreign key issue. That as things fill 
up, the foreign key reference checks are slowing you down.
Are you using ANALYZE as you go? A lot of times when you only have 1000
rows a sequential scan is faster than using an index, and if you don't
inform postgres that you have more rows, it might still use the old
seqscan.

 This could be the issue. I will start 'analyze' in a cron job. I will
update you with the results.

There are other possibilities, but it would be nice to know about your 
table layout, and possibly an EXPLAIN ANALYZE of the inserts that are 
going slow.

John
=:-

PS I don't know if JDBC supports COPY, but it certainly should support 
transactions.



---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [PERFORM] postgres slowdown question

2005-04-20 Thread John A Meinel
Shachindra Agarwal wrote:
Thanks for the note. Please see my responses below:

...

We are using JDBC which supports 'inserts' and 'transactions'. We are
using both. The business logic adds one business object at a time. Each
object is added within its own transaction. Each object add results in 5
records in various tables in the database. So, a commit is performed
after every 5 inserts.

Well, 5 inserts per commit is pretty low. It would be nice to see more
like 100 inserts per commit. Would it be possible during the discovery
phase to put the begin/commit logic a little bit higher?
Remember, each COMMIT requires at least one fsync. (I realize you have
fsync off for now). But commit is pretty expensive.
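
In SQL terms the change is just widening the transaction; the table names
below are invented, and through JDBC the equivalent is
conn.setAutoCommit(false) plus one commit() per batch:

BEGIN;
-- rows for business object 1
INSERT INTO objects (id, name) VALUES (1, 'obj-1');
INSERT INTO object_parts (object_id, part) VALUES (1, 'part-a');
-- ... rows for objects 2 through 20 ...
COMMIT;   -- one commit for ~20 objects instead of 20 separate commits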
Also, it sounds like you have a foreign key issue. That as things fill
up, the foreign key reference checks are slowing you down.
Are you using ANALYZE as you go? A lot of times when you only have 1000
rows a sequential scan is faster than using an index, and if you don't
inform postgres that you have more rows, it might still use the old
seqscan.

This could be the issue. I will start 'analyze' in a cron job. I will
update you with the results.
There are other possibilities, but it would be nice to know about your
table layout, and possibly an EXPLAIN ANALYZE of the inserts that are
going slow.
John
=:-
PS I don't know if JDBC supports COPY, but it certainly should support
transactions.

Let us know if ANALYZE helps. If you are not deleting or updating
anything, you probably don't need to do VACUUM ANALYZE, but you might
think about it. It is a little more expensive since it has to go to
every tuple, rather than just a random sampling.
John
=:-




Re: [PERFORM] Joel's Performance Issues WAS : Opteron vs Xeon

2005-04-20 Thread Joel Fradkin
I did think of something similar just loading the data tables with junk
records and I may visit that idea with Josh.

I did just do some comparisons on timing of a plain select * from tbl where
indexed column = x, and it was considerably slower than both MSSQL and MYSQL,
so I am still a bit confused. This still might be a configuration issue (I ran
on my 2 gig desktop and the 8 gig Linux box; comparisons were all run on the
same machines for MSSQL, MYSQL, and Postgres).
I turned off postgres when running MYSQL and turned off MYSQL when running
postgres; MSSQL had one of the two running while I tested it.

For the 360,000 records returned MYSQL did it in 40 seconds first run and 17
seconds second run.

MSSQL did it in 56 seconds first run and 16 seconds second run.

Postgres, on the second run:
Total query runtime: 17109 ms.
Data retrieval runtime: 72188 ms.
331640 rows retrieved.

So about 89 seconds total on the second run.
The first run was 147 secs all told.

These are all on my 2 gig desktop running XP.
I can post the config. I noticed that postgres was using 70% of the CPU while
MSSQL was at 100%.

Joel Fradkin
 

I would have spent more $ with Command, but he does need my database to help
me and I am not able to do that.


...

What if someone were to write an anonymization script. Something that
changes any of the data of the database, but leaves all of the
relational information. It could turn all strings into some sort of
hashed version, so you don't give out any identifiable information.
It could even modify relational entries, as long as it updated both
ends, and this didn't affect the actual performance at all.

I don't think this would be very hard to write. Especially if you can
give a list of the tables, and what columns need to be modified.

Probably this would generally be a useful script to have for cases like
this where databases are confidential, but need to be tuned by someone else.

Would that be reasonable?
I would think that by renaming columns, hashing the data in the columns,
and renaming tables, most of the proprietary information is removed,
without removing the database information.

John
=:-



---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [PERFORM] Joel's Performance Issues WAS : Opteron vs Xeon

2005-04-20 Thread John A Meinel
Joel Fradkin wrote:
I did think of something similar just loading the data tables with junk
records and I may visit that idea with Josh.
I did just do some comparisons on timing of a plain select * from tbl where
indexed column = x, and it was considerably slower than both MSSQL and MYSQL,
so I am still a bit confused. This still might be a configuration issue (I ran
on my 2 gig desktop and the 8 gig Linux box; comparisons were all run on the
same machines for MSSQL, MYSQL, and Postgres).
I turned off postgres when running MYSQL and turned off MYSQL when running
postgres; MSSQL had one of the two running while I tested it.
For the 360,000 records returned MYSQL did it in 40 seconds first run and 17
seconds second run.
MSSQL did it in 56 seconds first run and 16 seconds second run.
Postgres, on the second run:
Total query runtime: 17109 ms.
Data retrieval runtime: 72188 ms.
331640 rows retrieved.
So about 89 seconds total on the second run.
The first run was 147 secs all told.
These are all on my 2 gig desktop running XP.
I can post the config. I noticed that postgres was using 70% of the CPU while
MSSQL was at 100%.
Joel Fradkin

Why is MYSQL returning 360,000 rows, while Postgres is only returning
330,000? This may not be important at all, though.
I also assume you are selecting from a plain table, not a view.
I suppose knowing your work_mem, and shared_buffers settings would be
useful.
How were you measuring data retrieval time? And how does this compare
to what you were measuring on the other machines? It might be possible
that what you are really measuring is just the time it takes psql to
load up all the data into memory, and then print it out. And since psql
defaults to measuring entry lengths for each column, this may not be
truly comparable.
It *looks* like it only takes 18s for postgres to get the data, but then
it is taking 72s to transfer the data to you. That would be network
latency, or something like that, not database latency.
And I would say that 18s is very close to 16 or 17 seconds.
I don't know what commands you were issuing, or how you measured,
though. You might be using some other interface (like ODBC), which I
can't say too much about.
John
=:-
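
One low-tech way to separate server time from client rendering, if the
test can be re-run in psql (a suggestion only; the thread doesn't say
which client was used):

\timing
SELECT count(*) FROM tblresponseheader;            -- server-side scan, one row back
EXPLAIN ANALYZE SELECT * FROM tblresponseheader;   -- runs the query, discards the rows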





Re: [PERFORM] immutable functions vs. join for lookups ?

2005-04-20 Thread Christopher Kings-Lynne
Yay!  You're right!  I wonder why have I forgotten about LANGUAGE SQL. :)
It's 30 seconds vs 5 seconds for CASE ... END inside PLpgsql vs CASE...END
LANGUAGE SQL. :)  I.e. it's almost the same as in-place entered SQL.
  Regards,
  Dawid
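
For the archives, a rough sketch of the two variants being compared (the
function name and labels are invented; the bodies use pre-8.0 string
quoting, hence the doubled quotes):

-- plpgsql: every call goes through the procedural interpreter
CREATE FUNCTION status_label_plpgsql(integer) RETURNS text AS '
BEGIN
  RETURN CASE $1 WHEN 1 THEN ''active'' WHEN 2 THEN ''closed'' ELSE ''other'' END;
END;
' LANGUAGE plpgsql IMMUTABLE;

-- sql: the same CASE as a plain SQL expression
CREATE FUNCTION status_label_sql(integer) RETURNS text AS '
  SELECT CASE $1 WHEN 1 THEN ''active'' WHEN 2 THEN ''closed'' ELSE ''other'' END;
' LANGUAGE sql IMMUTABLE;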

---(end of broadcast)---
TIP 8: explain analyze is your friend


Re: [PERFORM] Opteron vs Xeon

2005-04-20 Thread Christopher Browne
Quoth [EMAIL PROTECTED] (Christian Sander Røsnes):
 On Wednesday 20 April 2005 17:50, Bruce Momjian wrote:
 Anjan Dave wrote:
  In terms of vendor specific models -
 
  Does anyone have any good/bad experiences/recommendations for a
  4-way Opteron from Sun (v40z, 6 internal drives) or HP (DL585 5
  internal drives) models?
 
  This is in comparison with the new Dell 6850 (it has PCIexpress,
  faster FSB 667MHz, which doesn't match up with AMD's total IO
  bandwidth, but much better than previous 6650s).

 Dell cuts too many corners to be a good server.

 Hi

 Which corners do Dell cut compared to the competition ?

They seem to be buying the cheapest components of the week such that
they need to customize BIOSes to make them work as opposed to getting
the Grade A stuff that works well out of the box.

We got a bunch of quad-Xeon boxes in; the MegaRAID controllers took
plenty o' revisits from Dell folk before they got sorta stable.  

Dell replaced more SCSI drives on their theory that the problem was
bad disks than I care to remember.  And if they were sufficiently
suspicious of the disk drives for that, that tells you that they don't
trust the disk they're selling terribly much, which leaves me even
less reassured...
-- 
output = reverse(moc.liamg @ enworbbc)
http://linuxdatabases.info/info/spreadsheets.html
Where do you  *not* want to go today?  Confutatis maledictis, flammis
acribus addictis (http://www.hex.net/~cbbrowne/msprobs.html)

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [PERFORM] Joel's Performance Issues WAS : Opteron vs Xeon

2005-04-20 Thread Josh Berkus
Joel,

Ok, please try this:

ALTER TABLE tblresponseheader ALTER COLUMN clientnum SET STATISTICS 1000;
ALTER TABLE tblresponseheader ALTER COLUMN locationid SET STATISTICS 1000;
ALTER TABLE tbllocation ALTER COLUMN clientnum SET STATISTICS 1000;
ALTER TABLE tbllocation ALTER COLUMN divisionid SET STATISTICS 1000;
ALTER TABLE tbllocation ALTER COLUMN regionid SET STATISTICS 1000;
ANALYZE tblresponseheader;
ANALYZE tbllocation;

Then run the EXPLAIN ANALYZE again.   (on Linux)


-- 
--Josh

Josh Berkus
Aglio Database Solutions
San Francisco

---(end of broadcast)---
TIP 7: don't forget to increase your free space map settings


Re: [PERFORM] Joel's Performance Issues WAS : Opteron vs Xeon

2005-04-20 Thread Tom Lane
John A Meinel [EMAIL PROTECTED] writes:
 Joel Fradkin wrote:
 Postgres was on the second run
 Total query runtime: 17109 ms.
 Data retrieval runtime: 72188 ms.
 331640 rows retrieved.

 How were you measuring data retrieval time?

I suspect he's using pgadmin.  We've seen reports before suggesting that
pgadmin can be amazingly slow, eg here
http://archives.postgresql.org/pgsql-performance/2004-10/msg00427.php
where the *actual* data retrieval time as shown by EXPLAIN ANALYZE
was under three seconds, but pgadmin claimed the query runtime was 22
sec and data retrieval runtime was 72 sec.

I wouldn't be too surprised if that time was being spent formatting
the data into a table for display inside pgadmin.  It is a GUI after
all, not a tool for pushing vast volumes of data around.

It'd be interesting to check the runtimes for the same query with
LIMIT 3000, ie, see if a tenth as much data takes a tenth as much
processing time or not.  The backend code should be pretty darn
linear in this regard, but maybe pgadmin isn't.

regards, tom lane
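
(Concretely, the comparison would be something like the following, run
from pgadmin itself; the WHERE clause stands in for Joel's real filter:)

SELECT * FROM tblresponseheader WHERE clientnum = 'xyz' LIMIT 3000;
SELECT * FROM tblresponseheader WHERE clientnum = 'xyz';

If the LIMIT run doesn't scale down roughly proportionally, the overhead
is on the pgadmin side rather than in the backend.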

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [PERFORM] How to improve db performance with $7K?

2005-04-20 Thread William Yu
The Linux kernel is definitely headed this way. The 2.6 series allows for 
several different I/O scheduling algorithms. A brief overview of the 
different modes:

http://nwc.serverpipeline.com/highend/60400768
Although a much older article from the beta-2.5 days, more in-depth info 
from one of the programmers who developed the AS scheduler and worked on 
the deadline scheduler:

http://kerneltrap.org/node/657
I think I'm going to start testing the deadline scheduler for our data 
processing server for a few weeks before trying it on our production 
servers.


Alex Turner wrote:
Whilst I admire your purist approach, I would say that if it is
beneficial to performance that a kernel understand drive geometry,
then it is worth investigating teaching it how to deal with that!
I was less referring to the kernel than to the controller.
Let's say we invented a new protocol that included the drive telling
the controller how it was laid out at initialization time, so that the
controller could make better decisions about re-ordering seeks.  It
would be more cost effective to have that set of electronics just once
in the controller, than 8 times on each drive in an array, which would
yield a better performance-to-cost ratio.  Therefore I would suggest it
is something that should be investigated.  After all, why implement
TCQ on each drive, if it can be handled more efficiently at the other
end by the controller for less money?!
Alex Turner
netEconomist
On 4/19/05, Dave Held [EMAIL PROTECTED] wrote:
-Original Message-
From: Alex Turner [mailto:[EMAIL PROTECTED]
Sent: Monday, April 18, 2005 5:50 PM
To: Bruce Momjian
Cc: Kevin Brown; pgsql-performance@postgresql.org
Subject: Re: [PERFORM] How to improve db performance with $7K?
Does it really matter at which end of the cable the queueing is done
(Assuming both ends know as much about drive geometry etc..)?
[...]
The parenthetical is an assumption I'd rather not make.  If my
performance depends on my kernel knowing how my drive is laid
out, I would always be wondering if a new drive is going to
break any of the kernel's geometry assumptions.  Drive geometry
doesn't seem like a kernel's business any more than a kernel
should be able to decode the ccd signal of an optical mouse.
The kernel should queue requests at a level of abstraction that
doesn't depend on intimate knowledge of drive geometry, and the
drive should queue requests on the concrete level where geometry
matters.  A drive shouldn't guess whether a process is trying to
read a file sequentially, and a kernel shouldn't guess whether
sector 30 is contiguous with sector 31 or not.
__
David B. Held
Software Engineer/Array Services Group
200 14th Ave. East,  Sartell, MN 56377
320.534.3637 320.253.7800 800.752.8129
---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?
  http://www.postgresql.org/docs/faq