Re: [PERFORM] Quad Xeon or Quad Opteron?

2008-05-24 Thread Andrzej Zawadzki
Craig Ringer wrote:
 Andrzej Zawadzki wrote:
   
 Hello,

  We're planning a new production server for PostgreSQL and I'm wondering
 which processor (or even platform) will be better: Quad Xeon or Quad
 Opteron (for example, SUN now has a new offer, the Sun Fire X4440 x64).
 

 [snip]

   
 Suggestions?
 

 To get a more useful response here, you might want to include some
 information about your workload and database size, and report on your
 planned disk subsystem and RAM.
   
Disk subsystem:
Hitachi AMS200, 12x 10krpm SAS drives in RAID 10 (+1 hot spare), 1GB
cache memory with battery backup
Database is ~60GB and growing ;-)
Workload: ~94% SELECTs
Queries/sec: avg ~300 (1000 at peak)

Server:
The v40z is a 4-socket dual-core machine with 16GB RAM

 Also, based on what I've seen on this list rather than personal
 experience, you might want to give more thought to your storage than to
 CPU power. The usual thrust of advice seems to be: Get a fast, battery-
 backed RAID controller. "Fast" does not mean "fast sequential I/O in
 ideal conditions so marketing can print a big number on the box"; you
 need to consider random I/O too. Get lots of fast disks. Get enough RAM
 to ensure that your indexes fit in RAM if possible.
   
Yes, of course you are right: disks are very important - I know that,
especially after switching to a SAN.
But the server is getting older ;-) - I need a good warranty, and I can
get 3 years from SUN, for example.

ps. After reading about the HP Smart Array P800 with a StorageWorks
MSA70 I'm considering buying such storage with ~20 disks.
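
As a rough check of the "indexes should fit in RAM" advice above against
the 16GB, the total index size can be summed from the catalogs - a
minimal sketch, assuming pg_relation_size(), available since 8.1:

SELECT pg_size_pretty(sum(pg_relation_size(indexrelid))::bigint)
FROM pg_index;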

[...]

-- 
Andrzej Zawadzki


-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] Quad Xeon or Quad Opteron?

2008-05-24 Thread Andrzej Zawadzki
Knight, Doug wrote:
 Hi,
 As a gauge, we recently purchased several servers as our systems get
 close to going operational. We bought Dell 2900s, with the cheapest quad
 core processors (dual) and put most of the expense into lots of drives
 (8 15K 146GB SAS drives in a RAID 10 set), and the PERC 6 embedded
 controller with 512MB battery backed cache. That gives us more spindles,
 the RAID redundancy we want, plus the high, reliable throughput of the
 BBC. The OS (and probably WAL) will run on a RAID 1 pair of 15K 76GB
 drives. We also went with 8GB memory, which seemed to be the sweet spot
 on price in these systems (going above 8GB had a much higher cost).
 Besides, in our prototyping, our systems had 2GB, which we rarely
 exceeded, so 8GB should be plenty (and we can always expand).

 So really, if you can save money on processors by going Opteron (and
 your IT department doesn't have an Intel-based system requirement like
 ours), put what you save into a good disk I/O subsystem. Hope that
 helps.
   
Top posting? Bleee ;-) How is one supposed to read this now?

OK, I know that I/O is most important for a database, but I'm sorry, my
question was about the processor/platform choice :-)
I have to buy a new server and I want an optimal one.
As I've written in another email, my I/O subsystem is quite good for now.

ps. To the admin of this list: what is going on with the Reply-To
setting on this list?



 Doug

 -Original Message-
 From: [EMAIL PROTECTED]
 [mailto:[EMAIL PROTECTED] On Behalf Of Adam Tauno Williams
 Sent: Friday, May 23, 2008 8:22 AM
 To: pgsql-performance
 Subject: Re: [PERFORM] Quad Xeon or Quad Opteron?

   
 Also, based on what I've seen on this list rather than personal
 experience, you might want to give more thought to your storage than to
 CPU power. The usual thrust of advice seems to be: Get a fast, battery-
 backed RAID controller. "Fast" does not mean "fast sequential I/O in
 ideal conditions so marketing can print a big number on the box"; you
 need to consider random I/O too. Get lots of fast disks. Get enough RAM
 to ensure that your indexes fit in RAM if possible.
 Note, however, that I have no direct experience with big Pg databases;
 I'm just trying to provide you with a guide of what information to
 provide and what to think about so you can get better answers here from
 people who actually have a clue.
 

 Yep,  we've had PostgreSQL databases for a long time.  The various
 current-generation processors, IMO, have no substantive difference in
 practice;  at least not relative to the bang-for-the-buck of more RAM
 and good I/O.


   


-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance


Re: [PERFORM] Creating large database of MD5 hash values

2008-05-24 Thread Decibel!

On Apr 11, 2008, at 10:25 AM, Alvaro Herrera wrote:

Sorry, yes, I'm behind on email... :(


If MD5 values will be your primary data and you'll be storing millions
of them, it would be wise to create your own datatype and operators
with the most compact and efficient representation possible.



If you do this *please* post it. I really think it would be worthwhile
for us to have fixed-size data types for common forms of binary data;
MD5, SHA1 and SHA256 come to mind.
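
In the meantime, a poor man's version of the same idea - a sketch, not
the custom C datatype suggested above: store the raw 16 bytes as bytea
instead of the 32-character hex text, halving the key size.

-- Domain enforcing the fixed MD5 length; decode() turns hex text into bytea.
CREATE DOMAIN md5hash AS bytea CHECK (octet_length(VALUE) = 16);
CREATE TABLE hashes (h md5hash PRIMARY KEY);
INSERT INTO hashes VALUES (decode(md5('some data'), 'hex'));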

--
Decibel!, aka Jim C. Nasby, Database Architect  [EMAIL PROTECTED]
Give your computer some brain candy! www.distributed.net Team #1828






Re: [PERFORM] shared_buffers performance

2008-05-24 Thread Decibel!

On Apr 14, 2008, at 3:31 PM, Tom Lane wrote:

Gregory Stark [EMAIL PROTECTED] writes:
The transition domain where performance drops dramatically as the
database starts to not fit in shared buffers but does still fit in
filesystem cache.


It looks to me like the knee comes where the DB no longer fits in
filesystem cache.  What's interesting is that there seems to be no
synergy at all between shared_buffers and the filesystem cache.
Ideally, very hot pages would stay in shared buffers and drop out of
the kernel cache, allowing you to use a database approximating
all-of-RAM before you hit the performance wall.  It's clear that in
this example that's not happening, or at least that only a small part
of shared buffers isn't getting duplicated in filesystem cache.


I suspect that we're getting double-buffering on everything because
every time we dirty a buffer and write it out the OS is considering
that as access, and keeping that data in its cache. It would be
interesting to try and overcome that and see how it impacts things.
With our improvement in checkpoint handling, we might be able to just
write via DIO... if not, maybe there's some way to tell the OS to
buffer the write for us, but target that data for removal from cache
as soon as it's written.
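
For anyone wanting to measure the duplication: the contrib
pg_buffercache module shows what's actually sitting in shared buffers,
per relation - a sketch, assuming the module is installed:

SELECT c.relname, count(*) AS buffers
FROM pg_buffercache b
JOIN pg_class c ON b.relfilenode = c.relfilenode
WHERE b.reldatabase IN (0, (SELECT oid FROM pg_database
                            WHERE datname = current_database()))
GROUP BY c.relname
ORDER BY buffers DESC
LIMIT 10;

Comparing that against what the OS reports cached would show how much
of shared_buffers is duplicated in the kernel cache.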



Of course, that's because pgbench reads a randomly-chosen row of
accounts in each transaction, so that there's exactly zero locality
of access.  A more realistic workload would probably have a Zipfian
distribution of account number touches, and might look a little better
on this type of test.
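
For reference, the select-only pgbench transaction being discussed is a
single lookup, with :aid drawn uniformly at random - hence the zero
locality:

SELECT abalance FROM accounts WHERE aid = :aid;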


--
Decibel!, aka Jim C. Nasby, Database Architect  [EMAIL PROTECTED]
Give your computer some brain candy! www.distributed.net Team #1828






Re: [PERFORM] Posible planner improvement?

2008-05-24 Thread Decibel!

Moving to -hackers...

On May 21, 2008, at 9:09 AM, Richard Huxton wrote:

Luke Lonergan wrote:

The problem is that the implied join predicate is not being
propagated.  This is definitely a planner deficiency.


IIRC only equality conditions are propagated and gt, lt, between
aren't.  I seem to remember that the argument given was that the
cost of checking for the ability to propagate was too high for the
frequency with which it occurred.


Of course, what was true for code and machines of 5 years ago might  
not be so today.


Definitely...

How hard would it be to propagate all conditions (except maybe  
functions, though perhaps the new function cost estimates make that  
more practical) in cases of equality?


For reference, the original query as posted to -performance:

select * from t1, t2 where t1.id < 158507 and t1.id = t2.id;

That took > 84 minutes (the query was a bit longer but this is the
part that made the difference); after a little change the query took
~1 second:

select * from t1, t2 where t1.id < 158507 and t2.id < 158507 and
t1.id = t2.id;
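
The rewrite is easy to verify with EXPLAIN - a sketch, with t1/t2
standing in for the original poster's tables: the second form should
show the t2.id < 158507 condition applied directly at the scan of t2
rather than a full scan feeding the join.

EXPLAIN SELECT * FROM t1, t2 WHERE t1.id < 158507 AND t1.id = t2.id;
EXPLAIN SELECT * FROM t1, t2
 WHERE t1.id < 158507 AND t2.id < 158507 AND t1.id = t2.id;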


--
Decibel!, aka Jim C. Nasby, Database Architect  [EMAIL PROTECTED]
Give your computer some brain candy! www.distributed.net Team #1828






Re: [PERFORM] I/O on select count(*)

2008-05-24 Thread Decibel!

On May 18, 2008, at 1:28 AM, Greg Smith wrote:
I just collected all the good internals information included in
this thread and popped it onto http://wiki.postgresql.org/wiki/Hint_Bits
where I'll continue to hack away at the text until it's readable.
Thanks to everyone who answered my questions here; that's good
progress toward clearing up a very underdocumented area.


I note a couple of potential TODO items not on the official list  
yet that came up during this discussion:


-Smooth latency spikes when switching commit log pages by  
preallocating cleared pages before they are needed


-Improve bulk loading by setting frozen hint bits for tuple  
inserts which occur within the same database transaction as the  
creation of the table into which they're being inserted


Did I miss anything?  I think everything brought up falls either
into one of those two or the existing "Consider having the
background writer update the transaction status hint bits..." TODO.


-Evaluate impact of improved caching of CLOG per Greenplum:

Per Luke Lonergan:
I'll find out if we can extract our code that did the work. It was
simple but scattered in a few routines. In concept it worked like this:


1 - Ignore if hint bits are unset, use them if set.  This affects  
heapam and vacuum I think.
2 - implement a cache for clog lookups based on the optimistic  
assumption that the data was inserted in bulk.  Put the cache one  
call away from heapgetnext()


I forget the details of (2).  As I recall, if we fall off of the
assumption, the penalty for long scans gets large-ish (maybe 2X), but
since when do people full-table scan when their updates/inserts are
so scattered across TIDs?  It's an obvious big win for DW work.
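
The kind of bulk insert the optimistic assumption in (2) covers looks
like this - a sketch; table and file names are illustrative:

BEGIN;
CREATE TABLE bulk_load (id int, payload text);
COPY bulk_load FROM '/tmp/bulk.dat';  -- every tuple gets the same xmin
COMMIT;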


--
Decibel!, aka Jim C. Nasby, Database Architect  [EMAIL PROTECTED]
Give your computer some brain candy! www.distributed.net Team #1828






Re: [PERFORM] I/O on select count(*)

2008-05-24 Thread Decibel!

On May 18, 2008, at 1:28 AM, Greg Smith wrote:
[...]


Blah, sorry for the double-post, but I just remembered a few things...

Did we completely kill the idea of the bg_writer *or some other  
background process* being responsible for setting all hint-bits on  
dirty pages before they're written out?


Also, Simon and Tom had an idea at PGCon: don't set hint-bits in the
back-end if the page isn't already dirty. We'd likely need some
heuristics on this... based on Luke's comments about improved CLOG
caching, maybe we want to set the bits anyway if the tuples without
them set are from old transactions (the idea being that pulling those
CLOG pages would be pretty expensive). Or better yet: if we have to
read a CLOG page off disk, set the bits.


This could still potentially be a big disadvantage for data
warehouses; though perhaps the way to fix that is to recommend a
backgrounded vacuum after data load.
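
That recommendation amounts to something like this - a sketch; table
and file names are illustrative:

COPY facts FROM '/tmp/facts.csv' WITH CSV;
VACUUM ANALYZE facts;  -- sets hint bits in one sequential pass,
                       -- instead of penalizing the first reads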

--
Decibel!, aka Jim C. Nasby, Database Architect  [EMAIL PROTECTED]
Give your computer some brain candy! www.distributed.net Team #1828






Re: [PERFORM] Quad Xeon or Quad Opteron?

2008-05-24 Thread Jeffrey Baker
On Fri, May 23, 2008 at 3:41 AM, Andrzej Zawadzki [EMAIL PROTECTED] wrote:

 Hello,

  We're planning a new production server for PostgreSQL and I'm wondering
 which processor (or even platform) will be better: Quad Xeon or Quad
 Opteron (for example, SUN now has a new offer, the Sun Fire X4440 x64).

 When I was buying my last database server, the SUN v40z was a really
 good choice (Intel's base server was slower). This v40z still works
 pretty well but I need one more.

 AFAIK Intel made some changes in the chipset but... is this better than
 AMD's HyperTransport and Direct Connect Architecture from a database
 point of view? How about L3 cache - is it important for performance?


Intel's chipset is still broken when using dual sockets and quad core
processors.  The problem manifests itself as excessive cache line bouncing.
In my opinion the best bang/buck combo on the CPU side is the fastest
dual-core Xeon CPUs you can find.  You get excellent single-thread
performance and you still have four processors, which was a fantasy for most
people only 5 years ago.  In addition you can put a ton of memory in the new
Xeon machines.  64GB is completely practical.

I still run several servers on Opterons but in my opinion they don't make
sense right now unless you truly need the CPU parallelism.

-jwb