That was part of my original question - whether it makes sense to go for
a mid-range SunFire machine (64-bit HW, 64-bit OS), which scales to
large amounts of memory and shouldn't have any issues addressing it all.
I've had that kind of setup once, temporarily, on a V480 (quad
UltraSPARC, 16GB RAM) machine, and it did well in production use.
Without the time/resources to do extensive testing, I am not sure
whether Postgres/Solaris 9 is really recommended by the community for
high performance, as opposed to a Xeon/Linux setup. Storage being a
From: Chris Ruprecht [mailto:[EMAIL PROTECTED]
Sent: Tuesday, March 02, 2004 4:17 PM
To: Anjan Dave; [EMAIL PROTECTED]; William Yu
Cc: [EMAIL PROTECTED]
Subject: Re: [PERFORM] Scaling further up
If you have a DB of 'only' 13 GB and you do not expect it to grow much,
it might be advisable to have enough memory (RAM) to hold the entire DB
in shared memory (everything is cached). If you have a server with, say,
24 GB of memory and can allocate 20 GB for cache, you don't care about
the speed of the disks any more - all you worry about is the speed of
your memory and CPU.
I believe this is not possible using 32-bit technology; you would have
to go to some 64-bit platform, but if it's speed you want ...
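In PostgreSQL 7.4 terms, "holding the DB in cache" mostly means letting
the OS file cache do the work and telling the planner about it. A sketch
of the relevant postgresql.conf lines - the values are illustrative
assumptions, and both settings are expressed in 8 KB buffer pages in
this release:

```
# postgresql.conf (PostgreSQL 7.4; both values are counts of 8 KB pages)
shared_buffers = 50000           # ~400 MB for PostgreSQL's own buffer cache
effective_cache_size = 2500000   # ~19 GB; hints to the planner that the
                                 # OS cache likely holds the whole 13 GB DB
```

Note that effective_cache_size allocates nothing itself; it only changes
the planner's cost estimates in favor of index scans.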
You can also try solid-state hard disk drives. These are actually just
RAM - there are no moving parts - but they look and behave like very,
very fast disk drives. I have seen them at capacities of 73 GB, but they
didn't list a price (I'd probably have a heart attack when I look at the
price tag).
On Tuesday 02 March 2004 14:41, Anjan Dave wrote:
> "By lots I mean dozen(s) in a raid 10 array with a good controller."
> I believe, for RAID-10, I will need an even number of drives. Currently,
> the size of the database is about 13GB, and is not expected to grow
> exponentially with thousands of concurrent users, so total space is
> not of paramount importance compared to performance.
> Does this sound like a reasonable setup?
> 10x36GB FC drives on RAID-10
> 4x36GB FC drives for the logs on RAID-10 (not sure if this is the
> correct ratio)? 1 hotspare
> Total=15 Drives per enclosure.
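The arithmetic behind that layout can be sanity-checked quickly (a
sketch; RAID-10 stripes across mirrored pairs, so usable space is half
of raw and the drive count must be even):

```python
def raid10_usable_gb(drives, size_gb):
    # RAID-10 stripes across mirrored pairs, so an even drive count is
    # required and only half the raw capacity is usable.
    assert drives % 2 == 0, "RAID-10 needs an even number of drives"
    return drives // 2 * size_gb

print(raid10_usable_gb(10, 36))  # data array: 180 GB usable
print(raid10_usable_gb(4, 36))   # log array: 72 GB usable
```

Both arrays comfortably hold a 13 GB database plus WAL, so the layout is
driven by spindle count, not capacity.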
> Tentatively, I am looking at an entry-level EMC CX300 product with 2GB
> RAID cache, etc.
> Question - Are 73GB drives supposed to give better performance because
> of higher number of platters?
> -----Original Message-----
> From: Fred Moyer [mailto:[EMAIL PROTECTED]
> Sent: Tuesday, March 02, 2004 5:57 AM
> To: William Yu; Anjan Dave
> Cc: [EMAIL PROTECTED]
> Subject: Re: [PERFORM] Scaling further up
> On Tue, 2004-03-02 at 17:42, William Yu wrote:
> > Anjan Dave wrote:
> > > We have a Quad-Intel XEON 2.0GHz (1MB cache), 12GB memory, running
> > > RH9, PG 7.4.0. There's an internal U320, 10K RPM RAID-10 setup on
> > > 4 drives.
> > > We are expecting a pretty high load, a few thousand 'concurrent'
> > > users executing select, insert, or update statements.
> > The quick and dirty method would be to upgrade to the recently
> > announced 3GHz Xeon MPs with 4MB of L3. My semi-educated guess is
> > that you'd get another +60% there due to the huge L3 hiding the
> > Xeon's shared-bus bottleneck.
> If you are going to have thousands of 'concurrent' users you should
> seriously consider the 2.6 kernel if you are running Linux or as an
> alternative going with FreeBSD. You will need to load test your
> system and become an expert on tuning Postgres to get the absolute
> maximum performance from each and every query you have.
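For that load-testing step, pgbench (in PostgreSQL's contrib directory)
is a quick way to simulate many concurrent clients. A sketch, assuming a
scratch database named bench has already been created against a running
server; the scale factor and client counts are illustrative, not
recommendations:

```shell
# Build the pgbench schema at scale factor 100 (roughly 1.5 GB of data)
pgbench -i -s 100 bench

# 50 concurrent clients, 1000 transactions each; reports TPS at the end
pgbench -c 50 -t 1000 bench
```

Running it at several client counts shows where throughput flattens out,
which is a better sizing signal than any single number.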
> And you will need lots of hard drives. By lots I mean dozen(s) in a
> raid 10 array with a good controller. Thousands of concurrent users
> means hundreds or thousands of transactions per second. I've
> personally seen it scale that far but in my opinion you will need a
> lot more hard drives and ram than cpu.
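A back-of-the-envelope check of why spindle count, not CPU, is the limit
for write-heavy loads (a sketch; the 125 random IOPS per 10K RPM drive
figure is a rough rule-of-thumb assumption, not a measured value):

```python
def raid10_random_write_tps(drives, iops_per_drive=125):
    # Every write in RAID-10 lands on a mirrored pair, so random-write
    # throughput scales with half the spindle count. 125 IOPS per
    # 10K RPM drive is an assumed ballpark, not a benchmark result.
    return drives // 2 * iops_per_drive

print(raid10_random_write_tps(12))  # a dozen drives: ~750 writes/sec
```

By this estimate, sustaining thousands of write transactions per second
takes dozens of spindles (or a large write-back cache), which matches
the advice above.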
> ---------------------------(end of broadcast)---------------------------
> TIP 7: don't forget to increase your free space map settings
---------------------------(end of broadcast)---------------------------
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])