(hope I'm posting this correctly)

You wrote:

>First question is do we gain anything by moving the RH Enterprise
>version of Linux in terms of performance, mainly in the IO realm as we
>are not CPU bound at all? Second and more radical, has anyone run
>postgreSQL on the new Apple G5 with an XRaid system? This seems like a
>great value combination. Fast CPU, wide bus, Fibre Channel IO, 2.5TB
>all for ~17k.

Wow, funny coincidence: I've got a pair of dual xeons w. 8G + 14-disk
fcal arrays, and an xserve with an XRaid that I've been screwing around
with.  If you have specific tests you'd like to see, let me know.

--- so, for the truly IO bound, here's my recent messin' around summary:

In the not-so-structured tests I've done, I've been disappointed with
Redhat AS 2.1 IO throughput.  I've had difficulty driving a lot of IO
through my dual fcal channels: I can only get one going at 60M/sec, and
when I drive IO to the second, I still see only about 60M/sec combined.
And when it does get that high, it uses about 30% CPU on a dual-xeon
hyperthreaded box, all in sys (by vmstat).  Something is very wrong
there, and the only thing I can conclude is that I'm serializing in the
driver somehow (qla2200 driver), so parallel channels do the same as
one, and interrupt madness drives the CPU up just to do this contentious IO.

This contrasts with the Redhat 9 I just installed on a similar box,
which got 170M/sec on 2 fcal channels, and the expected 5-6% CPU.

The above testing was dd straight from /dev/rawX devices, so no buffer
cache confusion there.  
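For concreteness, the test was along these lines (device names here are placeholders, not my actual raw bindings; substitute your own):

```shell
# Placeholder raw-device bindings -- substitute your own (see raw(8)).
DEV1=/dev/raw/raw1
DEV2=/dev/raw/raw2

# Sequential reads straight off the raw devices, bypassing the buffer
# cache.  Run both at once to see whether the two channels scale to
# ~2x or serialize against each other.
for dev in "$DEV1" "$DEV2"; do
    [ -e "$dev" ] && dd if="$dev" of=/dev/null bs=1M count=1024 &
done
wait
```

Watching `vmstat 1` alongside is where the 30% sys time showed up.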

I also had problems getting Redhat AS to bind to my newer qla2300
adapters at all, whereas they bound fine under RH9.

Redhat makes the claim of finer-grained locks/semaphores in the qla and
AIC drivers in RH AS, but my tests seem to show that the 2 fcal ports
were serializing against each other in the kernel under RH AS, and not
so under RH9.  Maybe I'm using the wrong driver under AS. eh.

So, long story short, it seems like you're better off with RH9.  But
again, before you lay out serious coin for an xserve or others, if you
have specific tests you want to see, I'll take a little time to contrast
with the xserve.  One of the xeons also has an aic7x SCSI controller
with 4 drives, so it might match your rig better.

I also did some token testing on the xserve I have, which I believe may
only have one processor (how do you tell on OS X?), and the xraid has 5
spindles in it.  I did a cursory build of postgres on it and also an IO
test (to the filesystem) and saw about 90M/sec.  Dunno if it has dual
paths (if you guys know how to tell, let me know).
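(On the processor-count question: on OS X, `sysctl hw.ncpu` reports the number of CPUs.  A sketch that also covers the linux boxes:)

```shell
# Processor count: hw.ncpu via sysctl on OS X / BSD;
# fall back to counting /proc/cpuinfo entries on linux.
sysctl -n hw.ncpu 2>/dev/null || grep -c '^processor' /proc/cpuinfo
```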

Biggest problem I've had in the past w. linux in general is that it
seems to make poor VM choices under heavy filesystem IO.  I don't
really get exactly where it's going wrong, but I've had numerous
experiences on older systems where bursty IO would seem to cause paging
on the box (pageout of pieces of the oracle SGA shared memory), which
is a performance disaster.  It seems to happen even when the shared
memory was sized reasonably below the size of physical ram, presumably
because linux is too aggressive in allocating filesystem cache (?).
Anyway, it seems to make decisions based on a desire for zippy
workstation performance and gets burned on throughput on database
servers.  I'm guessing this may be an issue for you when doing heavy
IO.  Thing is, it'll look like you're IO bound, kind of, because
you're thrashing.
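One cheap way to check whether that's what's biting you: watch the swap columns in vmstat while pushing a burst of file IO (sizes below are just placeholders):

```shell
# If "so" (pages swapped out) climbs during a pure file-IO burst, the
# kernel is evicting warm memory (e.g. shared-memory pages) to grow
# the page cache -- the SGA-pageout pattern described above.
if command -v vmstat >/dev/null; then
    vmstat 1 5 &
fi
dd if=/dev/zero of=/tmp/io_burst bs=1M count=256 conv=fsync 2>/dev/null
rm -f /tmp/io_burst
wait
```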
