Re: [PERFORM] Request for feedback on hardware for a new database server

2011-03-22 Thread Merlin Moncure
On Thu, Mar 17, 2011 at 7:51 PM, Oliver Charles
postgresql-p...@ocharles.org.uk wrote:
 Hello,

 At MusicBrainz we're looking to get a new database server, and are
 hoping to buy this in the next couple of days. I'm mostly a software
 guy, but I'm posting this on behalf of Rob, who's actually going to be
 buying the hardware. Here's a quote of what we're looking to get:

    I'm working to spec out a bad-ass 1U database server with loads of
    cores (12), RAM (24GB) and drives (4 SAS) in a hardware RAID-1,0
    configuration:

    1 * SuperMicro 2016R-URF, 1U, redundant power supply, 4 SATA/SAS
    drive bays 2
    2 * Intel Xeon X5650 Westmere 2.66GHz 12MB L3 Cache LGA 1366 95W
    Six-Core Server Processor 2
    2 * Crucial 24GB (3 x 4GB) DDR3 SDRAM ECC Registered DDR3 1333,
    CT3KIT51272BV1339 1
    1 * LSI MegaRAID SATA/SAS 9260-4i ($379) (linux support [1])
    or
    1 * HighPoint RocketRAID 4320 PCI-Express x8 ($429)
    or
    1 * Adaptec RAID 3405 controller ($354)
    4 * Fujitsu MBA3147RC 147GB 15000 RPM

    SuperMicro machines have treated us really well over time (better
    than Dell or Sun boxes), so I am really happy to throw more money in
    their direction.  Redundant power supplies seem like a good idea for
    a database server.

    For $400 more we can get hexa-core processors as opposed to quad-core
    processors at 2.66GHz. This seems like a really good deal --
    any thoughts on this?

    Crucial memory has also served us really well, so that is a
    no-brainer.

    The RAID controller cards are where I need the most feedback! Of the
    LSI, Highpoint or Adaptec cards, which one is likely to have native
    Linux support that does not require custom drivers to be installed?
    The LSI card has great specs at a great price point with Linux
    support, but installing the custom driver sounds like a pain. Does
    anyone have any experience with these cards?

    We've opted to not go for SSD drives in the server just yet -- it
    doesn't seem clear how well SSDs do in a server environment.

    That's it -- anyone have any feedback?

 Just a quick bit more information. Our database is certainly weighted
 towards being read-heavy rather than write-heavy (with a read-only web
 service accounting for ~90% of our traffic). Our tables vary in size,
 with the upper bound being around 10 million rows.

It doesn't sound like SSDs are a good fit for you -- you have a small
enough data set that you can easily buffer it in RAM, and not enough
writing to bottleneck you on the I/O side.  The #1 server-building
mistake is focusing too much on CPU and not enough on I/O, but as noted
by others you should be OK with a decent RAID controller with a BBU on
it.  A BBU will make a tremendous difference in server responsiveness
to sudden write bursts (like vacuum), which is particularly critical
with your whole setup being on a single physical volume.

Keeping your OS and the DB on the same LUN is a bit dangerous, btw,
because it can limit your ability to log in and deal with certain
classes of emergency situations.  It's possible to do a hybrid setup
where you keep your OS mounted on a CF card or even a thumb drive (most
1U servers now have internal USB ports for exactly this purpose), but
this takes a certain amount of preparation and an understanding of what
is sane to do with flash.

My other concern with your setup is that you might not have room for
expansion unless you have an unallocated PCIe slot in the back (some
1U chassis have 1, some have 2).  With an extra slot, you can pop in a
SAS HBA in the future, attached to an external enclosure, if your
storage requirements go up significantly.

Option '2' is to go all out on the RAID controller right now, so that
you have both internal and external SAS ports, although these
controllers tend to be much more expensive.  Option '3' is to just go
2U now, leaving yourself room for backplane expansion.

Putting it all together, I am not a fan of 1U database boxes unless
you are breaking the storage out -- there are ways you can get burned
so that you have to redo all your storage volumes (assuming you are
not using LVM, which I have very mixed feelings about) or even buy a
completely new server -- both scenarios can be expensive in terms of
downtime.

merlin



Re: [PERFORM] Request for feedback on hardware for a new database server

2011-03-18 Thread Jesper Krogh

On 2011-03-18 01:51, Oliver Charles wrote:

Hello,

At MusicBrainz we're looking to get a new database server, and are
hoping to buy this in the next couple of days. I'm mostly a software
guy, but I'm posting this on behalf of Rob, who's actually going to be
buying the hardware. Here's a quote of what we're looking to get:


I think most of it has been said already:
* Battery-backed write cache.
* See if you can get enough memory to make all of your active
   dataset fit in memory (typically not that hard in 2011) -- a quick
   sanity check is sketched below.
* Depending on your workload, of course, you're typically not
  bottlenecked by the number of CPU cores, so strive for fewer,
  faster cores.
* As few sockets as you can squeeze your memory and CPU requirements
  onto.
* If you can live with (or design around) the tradeoffs of SSDs, they
  will buy you way more performance than any significant number
  of rotating drives (with a good backup plan that ships the full WAL
  to a second system, for example).
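
As a rough sanity check for the memory point above (a sketch only: the
connection string is a placeholder and psycopg2 is assumed), compare the
on-disk database size with the buffer cache hit ratio:

# Sketch: compare on-disk database size with the buffer cache hit ratio
# to get a feel for whether the active dataset fits in memory.
import psycopg2

conn = psycopg2.connect("dbname=musicbrainz user=postgres")  # hypothetical DSN
cur = conn.cursor()

# Total on-disk size of the current database.
cur.execute("SELECT pg_size_pretty(pg_database_size(current_database()))")
print("database size: %s" % cur.fetchone()[0])

# Cache hit ratio since the last stats reset; a ratio close to 1.0 suggests
# the working set is already served from shared_buffers / the OS cache.
cur.execute("""
    SELECT sum(blks_hit)::float / nullif(sum(blks_hit) + sum(blks_read), 0)
    FROM pg_stat_database
""")
print("cache hit ratio: %s" % cur.fetchone()[0])

cur.close()
conn.close()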


--
Jesper



Re: [PERFORM] Request for feedback on hardware for a new database server

2011-03-18 Thread Arjen van der Meijden

On 18-3-2011 4:02 Scott Marlowe wrote:

On Thu, Mar 17, 2011 at 6:51 PM, Oliver Charles
postgresql-p...@ocharles.org.uk  wrote:

Another point.  My experience with 1U chassis and cooling is that they
don't move enough air across their cards to make sure they stay cool.
You'd be better off ordering a 2U chassis with 8 3.5" drive bays so
you can add drives later if you need to, and it'll provide more
cooling air across the card.

Our current big 48-core servers are running plain LSI SAS adapters
without HW RAID because the LSIs we were using overheated and
cooked themselves to death after about 3 months.  Those are 1U chassis
machines, and our newer machines are all 2U boxes now.


We have several 1U boxes (mostly Dell and Sun) running and had several 
in the past. And we've never had any heating problems with them. That 
includes machines with more power hungry processors than are currently 
available, all power slurping FB-dimm slots occupied and two raid cards 
installed.


But then again, a 2U box will likely have more cooling capacity, no
matter how you look at it.


Another tip that may be useful: look at 2.5" drives. AFAIK there is no
really good reason to use 3.5" drives for new servers. The 2.5" drives
save power and room - and thus may allow more air to flow through the
enclosure - and offer the same performance and reliability (the first I
know for sure; the second I'm pretty sure of, but haven't seen much
proof of lately).


You could even have an 8- or 10-disk 1U enclosure that way, or up to 24
disks in 2U. But those configurations will require some attention to
cooling again.


Best regards,

Arjen



Re: [PERFORM] Request for feedback on hardware for a new database server

2011-03-18 Thread Scott Marlowe
On Fri, Mar 18, 2011 at 1:16 AM, Arjen van der Meijden
acmmail...@tweakers.net wrote:
 On 18-3-2011 4:02 Scott Marlowe wrote:

 On Thu, Mar 17, 2011 at 6:51 PM, Oliver Charles
 postgresql-p...@ocharles.org.uk  wrote:

 Another point.  My experience with 1U chassis and cooling is that they
 don't move enough air across their cards to make sure they stay cool.
 You'd be better off ordering a 2U chassis with 8 3.5" drive bays so
 you can add drives later if you need to, and it'll provide more
 cooling air across the card.

 Our current big 48-core servers are running plain LSI SAS adapters
 without HW RAID because the LSIs we were using overheated and
 cooked themselves to death after about 3 months.  Those are 1U chassis
 machines, and our newer machines are all 2U boxes now.

 We have several 1U boxes (mostly Dell and Sun) running and had several in
 the past. And we've never had any heating problems with them. That includes
 machines with more power hungry processors than are currently available, all
 power slurping FB-dimm slots occupied and two raid cards installed.

Note I am talking specifically about the ability to cool the RAID
card, not the CPUs etc.  Many 1U boxes have poor air flow across the
expansion slots for PCI / etc cards, while doing a great job cooling
the CPUs and memory.  If you don't use high-performance RAID cards
(LSI 9xxx, Areca 16xx/18xx) then it's not an issue.  Open up your 1U
and look at the air flow for the expansion slots; it's often just not
very much.



Re: [PERFORM] Request for feedback on hardware for a new database server

2011-03-18 Thread Arjen van der Meijden

On 18-3-2011 10:11, Scott Marlowe wrote:

On Fri, Mar 18, 2011 at 1:16 AM, Arjen van der Meijden
acmmail...@tweakers.net  wrote:

On 18-3-2011 4:02 Scott Marlowe wrote:
We have several 1U boxes (mostly Dell and Sun) running and had several in
the past. And we've never had any heating problems with them. That includes
machines with more power hungry processors than are currently available, all
power slurping FB-dimm slots occupied and two raid cards installed.


Note I am talking specifically about the ability to cool the RAID
card, not the CPUs etc.  Many 1U boxes have poor air flow across the
expansion slots for PCI / etc cards, while doing a great job cooling
the CPUs and memory.  If you don't use high-performance RAID cards
(LSI 9xxx, Areca 16xx/18xx) then it's not an issue.  Open up your 1U
and look at the air flow for the expansion slots; it's often just not
very much.



I was referring to, among others, two machines that have both a Dell
Perc 5/i for internal disks and a Perc 5/e for an external disk
enclosure. Those also had processors that produce quite a bit of heat
(2x X5160 and 2x X5355), combined with all FB-DIMM (8x 2GB) slots
filled, which also produce a lot of heat. Those Dell Percs are similar
to the LSIs from the same period.


So the heat produced by the other components was already pretty high.
Still, I've seen no problems with heat for any component, including all
four RAID controllers. But I agree, there are some 1U servers that skimp
on fans and thus on air flow in the system. We've not had that problem
with any of our systems. But both Sun and Dell seem to put quite a few
fans in the middle of the system, where others may make them a bit less
heavy-duty and less over-dimensioned.


Best regards,

Arjen




Re: [PERFORM] Request for feedback on hardware for a new database server

2011-03-18 Thread Claudio Freire
On Fri, Mar 18, 2011 at 3:19 AM, Jesper Krogh jes...@krogh.cc wrote:
 * Depending on your workload, of course, you're typically not
  bottlenecked by the number of CPU cores, so strive for fewer,
  faster cores.

Depending on your workload again, faster memory is even more
important than faster math.

So go for the architecture with the fastest memory bus.
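
If you want a very rough way to compare memory bandwidth between candidate
machines, something like the sketch below gives a ballpark figure (numpy is
assumed; a proper benchmark such as STREAM is the better tool):

# Sketch: crude memory-bandwidth comparison between candidate boxes.
# Numbers are only comparable between runs of this same script.
import time
import numpy as np

N = 256 * 1024 * 1024 // 8      # ~256 MB of float64, far larger than the CPU caches
src = np.ones(N)
dst = np.empty_like(src)

loops = 10
start = time.time()
for _ in range(loops):
    dst[:] = src                # streams the whole array through memory
elapsed = time.time() - start

bytes_moved = 2.0 * src.nbytes * loops   # read src + write dst on every pass
print("approx. memory bandwidth: %.1f GB/s" % (bytes_moved / elapsed / 1e9))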



Re: [PERFORM] Request for feedback on hardware for a new database server

2011-03-18 Thread Scott Marlowe
On Fri, Mar 18, 2011 at 6:44 AM, Arjen van der Meijden
acmmail...@tweakers.net wrote:
 On 18-3-2011 10:11, Scott Marlowe wrote:

 On Fri, Mar 18, 2011 at 1:16 AM, Arjen van der Meijden
 acmmail...@tweakers.net  wrote:

 On 18-3-2011 4:02 Scott Marlowe wrote:
 We have several 1U boxes (mostly Dell and Sun) running and had several in
 the past. And we've never had any heating problems with them. That
 includes
 machines with more power hungry processors than are currently available,
 all
 power slurping FB-dimm slots occupied and two raid cards installed.

 Note I am talking specifically about the ability to cool the RAID
 card, not the CPUs etc.  Many 1U boxes have poor air flow across the
 expansion slots for PCI / etc cards, while doing a great job cooling
 the CPUs and memory.  If you don't use high-performance RAID cards
 (LSI 9xxx, Areca 16xx/18xx) then it's not an issue.  Open up your 1U
 and look at the air flow for the expansion slots; it's often just not
 very much.


 I was referring to, among others, two machines that have both a Dell Perc
 5/i for internal disks and a Perc 5/e for an external disk enclosure. Those
 also had processors that produce quite a bit of heat (2x X5160 and 2x X5355),
 combined with all FB-DIMM (8x 2GB) slots filled, which also produce a lot of
 heat. Those Dell Percs are similar to the LSIs from the same period.

 So the heat produced by the other components was already pretty high.
 Still, I've seen no problems with heat for any component, including all four
 RAID controllers. But I agree, there are some 1U servers that skimp on fans
 and thus on air flow in the system. We've not had that problem with any of our
 systems. But both Sun and Dell seem to put quite a few fans in the middle
 of the system, where others may make them a bit less heavy-duty and less
 over-dimensioned.

Most machines have different pathways for cooling airflow over their
RAID cards, and they don't share that air flow with the CPUs.  Also,
the PERC RAID controllers do not produce a lot of heat.  The CPUs on
the high performance LSI or Areca controllers are often dual core high
performance CPUs in their own right, and those cards have heat sinks
with fans on them to cool them.  The cards themselves are what make so
much heat and don't get enough cooling in many 1U servers.  It has
nothing to do with what else is in the server, again because the
airflow for the cards is usually separate.



Re: [PERFORM] Request for feedback on hardware for a new database server

2011-03-18 Thread Scott Marlowe
On Fri, Mar 18, 2011 at 10:32 AM, Scott Marlowe scott.marl...@gmail.com wrote:
 Most machines have different pathways for cooling airflow over their
 RAID cards, and they don't share that air flow with the CPUs.  Also,
 the PERC RAID controllers do not produce a lot of heat.  The CPUs on
 the high performance LSI or Areca controllers are often dual core high
 performance CPUs in their own right, and those cards have heat sinks
 with fans on them to cool them.  The cards themselves are what make so
 much heat and don't get enough cooling in many 1U servers.  It has
 nothing to do with what else is in the server, again because the
 airflow for the cards is usually separate.

As a followup to this subject, the problem wasn't bad until the server
load increased, thus increasing the load on the LSI MegaRAID card, at
which point it started producing more heat than it had before.  When
the machine wasn't working too hard the LSI was fine.  It was once we
started hitting higher and higher loads that the card had issues.



[PERFORM] Request for feedback on hardware for a new database server

2011-03-17 Thread Oliver Charles
Hello,

At MusicBrainz we're looking to get a new database server, and are
hoping to buy this in the next couple of days. I'm mostly a software
guy, but I'm posting this on behalf of Rob, who's actually going to be
buying the hardware. Here's a quote of what we're looking to get:

I'm working to spec out a bad-ass 1U database server with loads of
cores (12), RAM (24GB) and drives (4 SAS) in a hardware RAID-1,0
configuration:

1 * SuperMicro 2016R-URF, 1U, redundant power supply, 4 SATA/SAS
drive bays 2
2 * Intel Xeon X5650 Westmere 2.66GHz 12MB L3 Cache LGA 1366 95W
Six-Core Server Processor 2
2 * Crucial 24GB (3 x 4GB) DDR3 SDRAM ECC Registered DDR3 1333,
CT3KIT51272BV1339 1
1 * LSI MegaRAID SATA/SAS 9260-4i ($379) (linux support [1])
or
1 * HighPoint RocketRAID 4320 PCI-Express x8 ($429)
or
1 * Adaptec RAID 3405 controller ($354)
4 * Fujitsu MBA3147RC 147GB 15000 RPM

SuperMicro machines have treated us really well over time (better
than Dell or Sun boxes), so I am really happy to throw more money in
their direction.  Redundant power supplies seem like a good idea for
a database server.

For $400 more we can get hexa-core processors as opposed to quad-core
processors at 2.66GHz. This seems like a really good deal --
any thoughts on this?

Crucial memory has also served us really well, so that is a
no-brainer.

The RAID controller cards are where I need the most feedback! Of the
LSI, Highpoint or Adaptec cards, which one is likely to have native
Linux support that does not require custom drivers to be installed?
The LSI card has great specs at a great price point with Linux
support, but installing the custom driver sounds like a pain. Does
anyone have any experience with these cards?

We've opted to not go for SSD drives in the server just yet -- it
doesn't seem clear how well SSDs do in a server environment.

That's it -- anyone have any feedback?

Just a quick bit more information. Our database is certainly weighted
towards being read-heavy rather than write-heavy (with a read-only web
service accounting for ~90% of our traffic). Our tables vary in size,
with the upper bound being around 10 million rows.

I'm not sure exactly what more to say - but any feedback is definitely
appreciated. We're hoping to purchase this server on Monday, I
believe. Any questions, ask away!

Thanks,
- Ollie

[1]: 
http://www.lsi.com/storage_home/products_home/internal_raid/megaraid_sas/entry_line/megaraid_sas_9240-4i/index.html



Re: [PERFORM] Request for feedback on hardware for a new database server

2011-03-17 Thread Steve Atkins

On Mar 17, 2011, at 5:51 PM, Oliver Charles wrote:

 Hello,
 
 At MusicBrainz we're looking to get a new database server, and are
 hoping to buy this in the next couple of days. I'm mostly a software
 guy, but I'm posting this on behalf of Rob, who's actually going to be
 buying the hardware. Here's a quote of what we're looking to get:
 
I'm working to spec out a bad-ass 1U database server with loads of
cores (12), RAM (24GB) and drives (4 SAS) in a hardware RAID-1,0
configuration:
 
1 * SuperMicro 2016R-URF, 1U, redundant power supply, 4 SATA/SAS
drive bays 2
2 * Intel Xeon X5650 Westmere 2.66GHz 12MB L3 Cache LGA 1366 95W
Six-Core Server Processor 2
2 * Crucial 24GB (3 x 4GB) DDR3 SDRAM ECC Registered DDR3 1333,
CT3KIT51272BV1339 1
1 * LSI MegaRAID SATA/SAS 9260-4i ($379) (linux support [1])
or
1 * HighPoint RocketRAID 4320 PCI-Express x8 ($429)
or
1 * Adaptec RAID 3405 controller ($354)
4 * Fujitsu MBA3147RC 147GB 15000 RPM

 
That's it -- anyone have any feedback?


I'm no expert, but...

That's very few drives. Even if you turn them into a single array
(rather than separating out a RAID pair for the OS, a RAID pair
for WAL, and a RAID 10 array for data) that's going to give you
very little I/O bandwidth, especially for typical random
access work.

Unless your entire database active set fits in RAM I'd expect your
cores to sit idle waiting on disk I/O much of the time.
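
As a rough illustration of how little random I/O four spindles buy you,
here is a back-of-the-envelope sketch; the seek time is an assumed typical
value for a 15k SAS drive, not a measured figure for the Fujitsu MBA3147RC:

# Back-of-the-envelope random-I/O budget for 4 x 15k RPM drives in RAID 10.
rpm = 15000
avg_seek_ms = 3.5                           # assumed typical 15k SAS seek time
rotational_latency_ms = 0.5 * 60000 / rpm   # half a revolution ~= 2.0 ms

iops_per_drive = 1000 / (avg_seek_ms + rotational_latency_ms)

drives = 4
read_iops = iops_per_drive * drives         # RAID 10 reads can hit every spindle
write_iops = iops_per_drive * drives / 2    # each write lands on both mirror halves

print("per drive:     ~%d IOPS" % iops_per_drive)   # roughly 180
print("random reads:  ~%d IOPS" % read_iops)        # roughly 730
print("random writes: ~%d IOPS" % write_iops)       # roughly 360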

Don't forget that you need a BBU for whichever RAID controller
you choose, or it won't be able to safely do writeback caching, and
you'll lose a lot of the benefit.

Cheers,
  Steve




Re: [PERFORM] Request for feedback on hardware for a new database server

2011-03-17 Thread Scott Marlowe
On Thu, Mar 17, 2011 at 6:51 PM, Oliver Charles
postgresql-p...@ocharles.org.uk wrote:
 Hello,

 At MusicBrainz we're looking to get a new database server, and are
 hoping to buy this in the next couple of days. I'm mostly a software
 guy, but I'm posting this on behalf of Rob, who's actually going to be
 buying the hardware. Here's a quote of what we're looking to get:

    I'm working to spec out a bad-ass 1U database server with loads of
    cores (12), RAM (24GB) and drives (4 SAS) in a hardware RAID-1,0
    configuration:

    1 * SuperMicro 2016R-URF, 1U, redundant power supply, 4 SATA/SAS
    drive bays 2
    2 * Intel Xeon X5650 Westmere 2.66GHz 12MB L3 Cache LGA 1366 95W
    Six-Core Server Processor 2
    2 * Crucial 24GB (3 x 4GB) DDR3 SDRAM ECC Registered DDR3 1333,
    CT3KIT51272BV1339 1
    1 * LSI MegaRAID SATA/SAS 9260-4i ($379) (linux support [1])
    or
    1 * HighPoint RocketRAID 4320 PCI-Express x8 ($429)
    or
    1 * Adaptec RAID 3405 controller ($354)
    4 * Fujitsu MBA3147RC 147GB 15000 RPM

    SuperMicro machines have treated us really well over time (better
    than Dell or Sun boxes), so I am really happy to throw more money in
    their direction.  Redundant power supplies seem like a good idea for
    a database server.

    For $400 more we can get hexa-core processors as opposed to quad-core
    processors at 2.66GHz. This seems like a really good deal --
    any thoughts on this?

    Crucial memory has also served us really well, so that is a
    no-brainer.

    The RAID controller cards are where I need the most feedback! Of the
    LSI, Highpoint or Adaptec cards, which one is likely to have native
    Linux support that does not require custom drivers to be installed?
    The LSI card has great specs at a great price point with Linux
    support, but installing the custom driver sounds like a pain. Does
    anyone have any experience with these cards?

    We've opted to not go for SSD drives in the server just yet -- it
    doesn't seem clear how well SSDs do in a server environment.

    That's it -- anyone have any feedback?

 Just a quick bit more information. Our database is certainly weighted
 towards being read-heavy rather than write-heavy (with a read-only web
 service accounting for ~90% of our traffic). Our tables vary in size,
 with the upper bound being around 10 million rows.

 I'm not sure exactly what more to say - but any feedback is definitely
 appreciated. We're hoping to purchase this server on Monday, I
 believe. Any questions, ask away!

I order my boxes from a white-box builder called Aberdeen.  They'll
test whatever hardware you want with whatever OS you want to make sure
it works before sending it out.  As far as I know the LSI card should
just work with Linux; if not, the previous rev should work fine.  I
prefer Areca RAID 1680/1880 cards; they run cooler and faster than the
LSIs.

Another point.  My experience with 1U chassis and cooling is that they
don't move enough air across their cards to make sure they stay cool.
You'd be better off ordering a 2U chassis with 8 3.5" drive bays so
you can add drives later if you need to, and it'll provide more
cooling air across the card.

Our current big 48-core servers are running plain LSI SAS adapters
without HW RAID because the LSIs we were using overheated and
cooked themselves to death after about 3 months.  Those are 1U chassis
machines, and our newer machines are all 2U boxes now.  BTW, if you
ever need more than 2 sockets, right now the Magny-Cours AMDs are the
fastest in that arena.  For 2 sockets the Nehalem-based machines are
about equal to them.

The HighPoint RAID controllers are toys (or at least they were the last
time I checked).

If you have to go with 4 drives, just make it one big RAID-10 array and
then partition that out into 3 or 4 partitions.  It's important to put
pg_xlog on a different partition even if it's on the same array, as it
allows the OS to fsync it separately.
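
For what it's worth, the mechanics of that are simple -- a sketch of the
pg_xlog relocation idea (paths are illustrative assumptions; stop the
server and take a backup before doing anything like this for real):

# Sketch: move pg_xlog to its own partition and leave a symlink behind.
import os
import shutil

PGDATA = "/var/lib/postgresql/9.0/main"   # assumed data directory
WAL_MOUNT = "/mnt/pg_xlog"                # assumed separate partition, already mounted

old_xlog = os.path.join(PGDATA, "pg_xlog")
new_xlog = os.path.join(WAL_MOUNT, "pg_xlog")

shutil.move(old_xlog, new_xlog)   # relocate the WAL segments
os.symlink(new_xlog, old_xlog)    # PostgreSQL follows the symlink transparently
print("pg_xlog now lives on %s" % new_xlog)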
