More advantages of S390 dasd over SCSI (preceded by soapbox speech):
In a 'political environment' where you are just starting to show non-mainframers what
Linux/390 can do, it is just easier to start with 'we can do what you do' and
introduce the 'why S390 is sometimes better for certain things' later.
When we mentioned to the storage unit that the S390 dasd was faster than our SAN,
their boss insisted 'It can't be. The SAN is all glass.' Some battles are not worth
fighting until the time is right. On the other hand it was interesting
that the folks who support the Oracle databases noticed how good I/O was on the S390
dasd. Yet it is a standard here that all Oracle databases go on the SAN. Why? Because
it is what we have for mirroring data on Sun and Intel servers
to the other data center.
Linux/390 can do the same. But yes, for S390 it's not the only option: we can do PPRC,
and we can also have the detailed reports that performance products (like ESAALPS)
provide. In the proof of concept done here we found that in the server
world, support people concentrate only on the servers they support and don't always
look at the big picture. They didn't realize how much I/O was really being done
between the Windows and Sun servers. They can also be quick to blame the
'network' for problems (even when you have reports showing CPU contention). The
detailed reports were more than they could deal with.
So sometimes it's easier to start with the 'we can do what you do' and 'for less
money' approach. By using the SAN you can eliminate one difference and just
concentrate on the software license fees and other savings.
Once you get past basic acceptance, you can point out the advantages. Though I
will say that the one thing we could do that did impress the server folks was server
cloning. We have STK SnapShot. By using snapshot (which uses pointers
rather than full I/O) we were able to create new servers in about 3 1/2 minutes. So
in addition to multiple buses and a separate I/O subsystem, don't forget the advantage
of cloning under S390. IBM, STK and EMC all have their own
versions of 'snapping' a copy. Also, snapshot gives you 7x24: disaster recovery
backups can be taken from snapped volumes. And remember, if you use snapshot you do
need to dedicate volumes, but for 7x24 operation and the ability to do DR backups
with the system up, it can be worth the dasd it takes.
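
Since cloning is the feature that won people over, here is a rough sketch of why a
pointer-based snapshot is so much faster than a full-I/O copy. This is plain Python
used purely as an illustration; the Volume class and all the names in it are made
up, not any vendor's actual interface.

# Toy model of a pointer-based snapshot (the idea behind STK SnapShot,
# IBM FlashCopy, EMC TimeFinder, etc.): the "copy" duplicates block
# pointers, not the data, so it completes in minutes rather than hours.

class Volume:
    def __init__(self, blocks):
        # blocks maps block number -> data; data is shared until written
        self.blocks = blocks

    def snapshot(self):
        # cost is proportional to the number of pointers, not the bytes
        return Volume(dict(self.blocks))

    def write(self, blkno, data):
        # copy-on-write: a block gets its own storage only when changed
        self.blocks[blkno] = data

source = Volume({n: b"..." for n in range(1000)})
clone = source.snapshot()        # near-instant: copies 1000 pointers
clone.write(0, b"new root fs")   # the clone diverges one block at a time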



About the bus limit:  This is one point that non-mainframe folks really don't 
understand.  I have found that not one person I talk to about mainframes understands 
that the mainframe can transfer data on multiple buses simultaneously.
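
To make the contrast concrete, here is a toy calculation in Python with made-up but
plausible numbers (a real shared-bus box would also lose some of its rate to
contention, which this ignores):

# A box with one shared I/O bus is capped at the bus rate no matter how
# many devices hang off it; a channel architecture adds bandwidth with
# every path, because the paths transfer simultaneously.

def single_bus(bus_mb_s, device_mb_s, devices):
    # every transfer contends for the same bus
    return min(bus_mb_s, device_mb_s * devices)

def channels(paths, path_mb_s):
    # each channel moves data independently of the others
    return paths * path_mb_s

print(single_bus(bus_mb_s=532, device_mb_s=40, devices=50))  # 532 MB/sec
print(channels(paths=256, path_mb_s=17))                     # 4352 MB/sec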

>
> On the other hand I wonder if mainframers understand that most other systems cannot
> transfer data on multiple buses simultaneously.  I think that the high end Sun
> servers can, and maybe the high end HP systems.
>
> This is the big difference in mainframe architecture.  We should make more of an 
> effort to get this message out.
>
> -----Original Message-----
> From: Malcolm Beattie [mailto:[EMAIL PROTECTED]
> Sent: Friday, February 27, 2004 3:09 AM
> To: [EMAIL PROTECTED]
> Subject: Re: z/VM access to EMC (was: Accessing DASD on a Shark from
> Linux under z/VM)
>
> Jim Sibley writes:
> > > I'm curious. One of the benefits touted, and true, about Linux
> > > on zSeries vs. some other platform, is the zSeries' strength in
> > > I/O. Is this still true with FCP attached SCSI DASD? Why would
> > > the zSeries drive SCSI DASD better than Intel or Sun?
> > > John McKown
> > > Senior Systems Programmer
> >
> > Basically you can attach more dasd space and have more
> > simultaneous (NOT just concurrent) data transfers
> > going on at the same time.
> >
> > The I/O advantage of the mainframe is that it usually
> > has more paths (256 channels) to more devices (65,536),
> > thus giving a lot more parallel I/O, not that any
> > particular device is more efficient. If you have a lot of
> > threads active, more I/O can be done in parallel than on
> > most Intel and other boxes.
> >
> > With 256 channels at, say, 12 MB/sec (Shark), the
> > total aggregate rate of the mainframe would be about 3
> > GB/sec. Obviously, that's limited by the 2 GB/sec backend
> > bus on the T-Rex.
>
> The general idea is right but the bus limit is wrong: 2GB/sec I/O
> for an entire box would be very poor. Rather than have zSeries damned
> with faint praise, allow me to hype up its I/O capabilities a bit more.
> 2GByte/sec is the speed of a single STI bus and the smallest T-Rex (one
> book) has 12 STI buses while the largest (four books) has 48 STI buses
> for a total of 96GByte/sec bandwidth. Channel cards, whether ESCON or
> FICON, are spread over domains/slots to take advantage of the STI buses
> available. You can't fill all of that bandwidth with DASD I/O
> (there's a limit of 120 x FICON 2Gbit/sec ports--60 features on
> z990--making a nominal 24GByte/sec) but it's way more than 2GB/sec.
>
> ESCON hits the limit of number of channels way before any
> hardware bandwidth limit but even so you only have 16 ESCON ports
> per card. Each STI bus fans out to four slots and, for ordinary
> I/O, gets multiplexed down to 333MByte/sec, 500MByte/sec or
> 1000MByte/sec as appropriate. For ESCON, it uses 333MByte/sec (which
> nicely encompasses the 16 x 20MByte/sec nominal signalling for an
> ESCON card) and for FICON, 500MByte/sec (which nicely encompasses
> the 2 x 200MByte/sec nominal for the dual-port FICON-Express cards).
> The buses and features are, IMHO, very well designed to ensure that
> there are no bottlenecks or caps right through to the backend memory
> bus adapters (MBAs) of the memory subsystem. For those interested in
> the details, Chapter 3 of the "z990 Technical Guide" redbook
> (SG24-6947) from www.redbooks.ibm.com elaborates on this and
> describes it very well.
>
> > Also, the mainframe typically has 2 processors
> > dedicated to driving the devices (SAPs), so less "real
> > CPU" is used for I/O.
>
> In fact, not just the SAPs (which deal with initiating the I/Os).
> Each channel card also is fairly powerful and has the responsibility
> of doing much of the I/O work itself. For example, each z900 FICON
> card has two 333MHz PowerPC processors (cross-checked for reliability)
> to do the work. Again, for lots of detail, see the "z900 I/O subsystem"
> paper by Stigliani et al in the z900 edition of the IBM Journal of R&D
> (Vol 46 No 4/5 Jul/Sept 2002).
>
> --Malcolm
>
> --
> Malcolm Beattie <[EMAIL PROTECTED]>
> Linux Technical Consultant
> IBM EMEA Enterprise Server Group...
> ...from home, speaking only for myself
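
For anyone who wants to redo the arithmetic in the thread, here it is in one place.
Python is used purely as a calculator; every figure is one of the nominal numbers
quoted above, not a measurement:

# Jim's estimate: 256 channels at ~12 MB/sec each (Shark over ESCON)
print(256 * 12 / 1000)   # ~3 GB/sec aggregate

# Malcolm's correction: STI buses at 2 GB/sec each
print(12 * 2)            # 24 GB/sec on a one-book z990
print(48 * 2)            # 96 GB/sec on a four-book z990

# DASD-side cap: 120 FICON ports at 2 Gbit/sec (~200 MB/sec) each
print(120 * 200 / 1000)  # ~24 GB/sec nominal

# Per-STI multiplexing comfortably covers the cards behind it:
print(16 * 20)   # ESCON: 320 MB/sec, under the 333 MB/sec cap
print(2 * 200)   # FICON: 400 MB/sec, under the 500 MB/sec cap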
