Hi all,

If you have a DB of 'only' 13 GB and you do not expect it to grow much, it
might be advisable to have enough memory (RAM) to hold the entire DB in
shared memory, so that everything is cached. If you have a server with, say,
24 GB of memory and can allocate 20 GB for cache, you no longer care about
the speed of your disks - all you worry about is the speed of your memory
and your network connection.
I believe this is not possible using 32-bit technology; you would have to go
to some 64-bit platform. But if it's speed you want ...
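As a rough illustration - assuming PostgreSQL here, with hypothetical values
for a 24 GB machine, and noting that in 7.4 these settings are expressed in
8 kB pages - the relevant postgresql.conf lines might look something like:

```ini
# postgresql.conf -- hypothetical sketch for a server with 24 GB RAM
# In PostgreSQL 7.4, shared_buffers is counted in 8 kB pages:
# 2 GB of shared buffers = 2 * 1024 * 1024 / 8 = 262144 pages
shared_buffers = 262144

# effective_cache_size tells the planner how much OS file cache it can
# count on; also in 8 kB pages (20 GB = 20 * 1024 * 1024 / 8 pages):
effective_cache_size = 2621440
```

The usual advice is to keep shared_buffers modest and let the OS file cache
hold the bulk of the data, which is what effective_cache_size describes to
the planner.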
You can also try solid-state disk drives. These are actually just memory -
there are no moving parts - but they look and behave like very, very fast
disk drives. I have seen them at capacities of 73 GB, but they didn't
mention the price (I'd probably have a heart attack if I looked at the
price tag).

Best regards,

On Tuesday 02 March 2004 14:41, Anjan Dave wrote:
> "By lots I mean dozen(s) in a raid 10 array with a good controller."
> I believe, for RAID-10, I will need even number of drives. Currently,
> the size of the database is about 13GB, and is not expected to grow
> exponentially with thousands of concurrent users, so total space is not
> of paramount importance compared to performance.
> Does this sound reasonable setup?
> 10x36GB FC drives on RAID-10
> 4x36GB FC drives for the logs on RAID-10 (not sure if this is the
> correct ratio)?
> 1 hotspare
> Total=15 Drives per enclosure.
> Tentatively, I am looking at an entry-level EMC CX300 product with 2GB
> RAID cache, etc.
> Question - Are 73GB drives supposed to give better performance because
> of higher number of platters?
> Thanks,
> Anjan
> -----Original Message-----
> From: Fred Moyer [mailto:[EMAIL PROTECTED]
> Sent: Tuesday, March 02, 2004 5:57 AM
> To: William Yu; Anjan Dave
> Subject: Re: [PERFORM] Scaling further up
> On Tue, 2004-03-02 at 17:42, William Yu wrote:
> > Anjan Dave wrote:
> > > We have a Quad-Intel XEON 2.0GHz (1MB cache), 12GB memory, running
> > > RH9, PG 7.4.0. There's an internal U320, 10K RPM RAID-10 setup on
> > > 4 drives.
> > > We are expecting a pretty high load, a few thousands of 'concurrent'
> > > users executing either select, insert, update statements.
> >
> > The quick and dirty method would be to upgrade to the recently
> > announced 3GHz Xeon MPs with 4MB of L3. My semi-educated guess is
> > that you'd get another +60% there due to the huge L3 hiding the
> > Xeon's shared bus penalty.
> If you are going to have thousands of 'concurrent' users you should
> seriously consider the 2.6 kernel if you are running Linux or as an
> alternative going with FreeBSD.  You will need to load test your system
> and become an expert on tuning Postgres to get the absolute maximum
> performance from each and every query you have.
> And you will need lots of hard drives.  By lots I mean dozen(s) in a
> raid 10 array with a good controller.  Thousands of concurrent users
> means hundreds or thousands of transactions per second.  I've personally
> seen it scale that far but in my opinion you will need a lot more hard
> drives and ram than cpu.
> ---------------------------(end of broadcast)---------------------------
> TIP 7: don't forget to increase your free space map settings

---------------------------(end of broadcast)---------------------------
TIP 3: if posting/reading through Usenet, please send an appropriate
      subscribe-nomail command to [EMAIL PROTECTED] so that your
      message can get through to the mailing list cleanly
