Michael,

That's why I wrote "I am aware of". I have not had a very close look at the PC server market - it is moving very fast - so I ask for detailed specifications on demand. I will have a look at the ones you point to.

I was not precise: in the past they were secondary, now they are primary but small. If we count "full" buses - you can put FC (200-240 MB/s + duplex) or a RAID adapter (usually 3-4x Ultra160) only on a 64-bit/66MHz bus. You can still have Gb Ethernet on a 64-bit/33MHz "half" bus (alone!). And for more than 20-50 nodes (depending on size) you will need several LAN adapters if Gb, and tens of them if Fast Ethernet.

As for the ServerWorks chipset - it may support up to four buses, but the question is how many are
routed on the planar, and at what speed. For example, we are now installing TSM for a customer on an IBM x342 (it is based on this chipset). The server has three primary buses, but only one is 64-bit/66MHz, with two slots on it (the others are 64-bit/33MHz and 32-bit/33MHz). So IBM itself states you can have a ServeRAID or an FC adapter in it, but NOT both.

Three buses with 1.1 GB/s (it ought to be 1066 MB/s) means two primary and one secondary, i.e. one 2Gb/s FC HBA + 2x 1Gb Ethernet, or 2x 2Gb/s FC. More than that means congestion and performance degradation. But what if bus 1 is 64-bit/66MHz (533 MB/s) and buses 2 and 3 are 64-bit/33MHz (2x 266 MB/s)? You have already lost the 2x FC option. And that is before taking into account SCSI for boot/paging + one or two 100Mb Ethernet adapters...

OTOH, 4x 64-bit/66MHz = 4x 533 MB/s = 2133 MB/s. If for marketing reasons each figure is rounded up and then multiplied, it gives 4x 533 MB/s = 4x 0.6 GB/s = (that fictitious) 2.4 GB/s. I have studied mathematics, and we call this propagation of error. In the example above you can see what IBM shows (I would not say it is cheating). Others do the same.

So again, ask hard questions:
- how many buses?
- how many bridges?
- width and speed of *each* slot?
- which slot is on which bus? (IBM x342 case: bus A 32-bit/33MHz, slot 1; bus B 64-bit/66MHz, slots 2 & 3; bus C 64-bit/33MHz, slots 4 & 5)

Better yet, ask them to draw a picture of what goes where and how it is connected (you will become their nightmare :-)

Those 2-way boxes do not have enough real estate on the planar for full routing of all buses. Thus one bus is routed with many & short wires - 64-bit/66MHz. The second is routed with many but long wires - 64-bit/33MHz. The last uses whatever is left over - only 32-bit/33MHz. For the fourth port of the chipset there is no room to route the wires to a slot - it is cut off. So the first bus is good for FC/Gb Ethernet/RAID, the second for SCSI/small RAID/some Fast Ethernet, the third for Fast Ethernet + boot SCSI + ISA (management, serial, kbd, mouse, etc.)
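A minimal sketch of the arithmetic above (my own illustration, not from the original mail; `pci_bandwidth_mb` is a made-up helper, and the totals shift by a MB/s or two depending on how the nominal 33.33/66.66 MHz clocks are rounded):

```python
# Peak PCI bandwidth = bus width in bytes x clock rate.
# The nominal PCI clocks are 33.33 and 66.66 MHz, which is where the
# familiar 133/266/533 MB/s per-bus figures come from.

def pci_bandwidth_mb(width_bits, clock_mhz):
    """Peak transfer rate of one PCI bus, in MB/s."""
    return width_bits / 8 * clock_mhz

# The three-bus case discussed above: one 64-bit/66MHz primary
# plus two 64-bit/33MHz buses -- the honest figure behind "1.1 GB/s".
total = pci_bandwidth_mb(64, 66.66) + 2 * pci_bandwidth_mb(64, 33.33)

# Four 64-bit/66MHz buses: honest sum vs. "round up, then multiply".
honest = 4 * pci_bandwidth_mb(64, 66.66)  # ~2133 MB/s
inflated = 4 * 600                        # 4 x "0.6 GB/s" = 2400 MB/s

print(round(total), round(honest), inflated)  # prints: 1067 2133 2400
```

The gap between 2133 and 2400 MB/s is exactly the "propagation of error" mentioned above: a ~12% rounding error in one factor survives the multiplication intact.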
BTW: I checked the x440 again and proved myself partly wrong - it is still four buses per drawer, but only two drawers, i.e. max I/O ~4 GB/s.

My intention is not to scare or irritate you, but to show you the hidden part of the truth. That's why I will copy this to the list.

Finally, you are missing an important point when measuring bus throughput. Data does not go "Ether --> disk" or "Ether --> tape". The transfer is "LAN --> memory buffer --> FC/SCSI". And if you set up a disk pool for staging, data goes "LAN --> memory --> FC/SCSI (disk)" + "FC/SCSI (disk) --> memory --> FC/SCSI (tape)", i.e. four transfers. So count all the simultaneous clients (not all of them can send directly to tape unless you have a very large silo).

Sorry, this again became too long an explanation. I still think that, with the figures you have shown, the current H70 should not need much of an upgrade (if any). Do you have any slots free there?

Zlatko Krastev
IT Consultant

Please respond to [EMAIL PROTECTED]

To: Zlatko Krastev <[EMAIL PROTECTED]>
cc:
Subject: Re: Win 2K VS AIX

On 12 May 2002 at 16:22, Zlatko Krastev wrote:

> All 2-proc SMP Intel boxes (I am aware of) are having one primary PCI bus
> and secondary through PCI-PCI bridge. Thus all the I/O performance such a
> box can have is 533 MB/s total (DB disks, stgpool disks, tapes, LAN).

Hello,

I don't want to recommend running TSM on W2K. But I am looking for such servers for other purposes, and your statement puzzles me. For example, Intel's dual PIII board SDS2 is described as follows: "Triple Peer PCI buses: Separate PCI buses to help eliminate data bottlenecks, increase bandwidth for intensive I/O needs, and provide up to 1.1 GB/sec of data transfer." Many 2-proc SMP Intel boxes are based on the ServerWorks HESL chipset, which even supports four independent buses with a bandwidth of 2.4 GB/s. A board with four 64/66 PCI slots and 1.1 GB/s bandwidth could handle e.g.
2x U160 SCSI RAID (onboard) + 2x 2Gb FC + 2x 1Gb Ethernet, which seems quite a lot to me - at least for a medium-sized environment like ours. I can't find these numbers in the descriptions of those 10K$ dual PIII SMP boxes from IBM, Dell or FSC, but all of them talk about "Triple PCI Architecture" or "Two independent PCI buses". Are they trying to fool me?

Kind regards,
Michael Bruewer

----
Dr. Michael Brüwer
RZ der Univ. Hohenheim, 70593 Stuttgart
[EMAIL PROTECTED]  www.uni-hohenheim.de/~bruewer
Fon: +49-711-459-3838  Fax: -3449
PGP Public Key: RSA: http://www.uni-hohenheim.de/~bruewer/pgpkey_V2
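Tying the two messages together, here is a back-of-envelope sketch (entirely my own illustration; the per-adapter peak rates are assumptions: Ultra160 SCSI ~160 MB/s, 2Gb FC ~200 MB/s, 1Gb Ethernet ~125 MB/s) of Michael's proposed adapter mix against a 1.1 GB/s board, and of how the "four transfers" data path in the reply above divides that budget:

```python
# Michael's proposed mix: (count, assumed peak MB/s per adapter).
adapters = {
    "U160 SCSI RAID": (2, 160),
    "2Gb FC":         (2, 200),
    "1Gb Ethernet":   (2, 125),
}
peak_demand = sum(n * rate for n, rate in adapters.values())
print(peak_demand)  # 970 MB/s -- fits under 1066 MB/s, but only if each
                    # adapter sits on a bus fast enough to feed it

# Each MB of client data crosses the buses more than once:
#   direct to tape:  LAN -> memory -> tape            = 2 crossings
#   via disk pool:   LAN -> memory -> disk, and later
#                    disk -> memory -> tape           = 4 crossings
def aggregate_client_rate(total_bus_mb_s, crossings):
    """Worst-case simultaneous client throughput (MB/s) the buses allow."""
    return total_bus_mb_s / crossings

print(aggregate_client_rate(1066, 4))  # 266.5 MB/s, staged via disk pool
print(aggregate_client_rate(1066, 2))  # 533.0 MB/s, direct to tape
```

The worst case assumes backups and disk-to-tape migration run simultaneously; if migration is scheduled outside the backup window, the effective client rate sits somewhere between the two figures.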
