If CPU capacity is truly limiting you, you should look into connecting to native fibre channel disk if it is practical. Since you are using FICON, you definitely have the capability in the CPU, so the question is whether it would be practical to convert some of your ESS interfaces and disk images to native fibre channel. And of course you would have to rebuild your Linux environment.
The main reason FCP might work better than DASD/FICON is that the ECKD conversion layer in the DASD driver takes significant CPU cycles converting the native Linux block I/O into ECKD commands/responses. There are no guarantees, but native FCP might work better for you.

Scott Ledbetter
StorageTek

-----Original Message-----
From: Linux on 390 Port [mailto:[EMAIL PROTECTED] On Behalf Of Barton Robinson
Sent: Thursday, December 11, 2003 9:47 AM
To: [EMAIL PROTECTED]
Subject: Re: DASD Performance problem

Ok, you convinced me that your DASD performance problem was really a CPU problem. If your processor hit 80%, then the absolute best case would be 25% more work for the run that hit 80%. You are very much limited by processor speed.

Your processor could only reach 100% if the overlap between the I/O operations and the processing time was sufficient. There is likely quite a bit of time where the guest was in I/O wait and couldn't use more CPU. It is easy to prove this: run two Linux guests. Then you will have overlap - assuming they go to different disks - your CPU will go to 100% and you will get the maximum I/O throughput.

If you really want to go down the I/O route, there are other possibilities. Is this a cache-friendly exercise? If the data is all writes, was the write cache full? If the data is read, was it all in the cache? At FICON speeds - and having just seen an excellent presentation on measuring FICON - how busy was the FICON? The monitor data contains this information.

>From: Jason Herne <[EMAIL PROTECTED]>
>
>Interesting thought, but the CPU is never 100% utilized when the
>testing is taking place. At the higher-end testing (21 to 24 dbench
>clients) it gets up there around 80%, but never 100%.
>
>- Jason Herne
>
>On Wed, 2003-12-10 at 21:07, Barton Robinson wrote:
>> I would say your two questions are very much related.
>> I'm betting that the processor on your z/800 is maxed out. And since
>> the Intel processor is approximately 3 times faster than your z/800,
>> guess what, it performs that much more work. More cycles allow for
>> more work.
>>
>> >I think we may have a performance problem.
>> >
>> >Our z/800 model 0LF is connected to an ESS800 with two FICON channel
>> >paths. We have been running dbench in Linux for quite a few weeks
>> >now and we are seeing numbers much lower than we expected. Can
>> >someone comment on this? Are these numbers about what we should be
>> >seeing, or is something wrong with our setup?
>> >
>> >We're running RHEL with kernel 2.4.21-1.1931.2.399.ent #1 SMP.
>> >Linux is using a single 3390 DASD with ext3.
>> >
>> >Here are the numbers we're getting. All tests were run on a single
>> >guest with no other Linux guests and just a few z/VM service guests
>> >running. No disk-intensive or CPU-intensive workload was running
>> >during the testing.
>> >
>> >dbench clients    avg throughput (MB/s)
>> > 1                123.111
>> > 3                116.729
>> > 6                 99.0626
>> > 9                 95.3577
>> >12                 95.2825
>> >15                 91.6009
>> >18                 92.7745
>> >21                 91.8808
>> >24                 73.8885
>> >
>> >Here is what we get with our $1000 Dell Pentium 2.4GHz server with
>> >a SCSI disk:
>> >
>> >dbench clients    avg throughput (MB/s)
>> > 1                395.545
>> > 3                281.957
>> > 6                275.292
>> > 9                285.756
>> >12                262.4333
>> >15                248.314
>> >18                237.879
>> >21                221.74355
>> >24                200.873
>> >
>> >As you can see, the $1k Dell is hammering our $250k mainframe. We
>> >are currently trying to figure out why this is and hopefully fix the
>> >problem, if there is indeed a problem. Any comments or help that
>> >anyone could give would be appreciated.
>> >
>> >- Jason Herne ([EMAIL PROTECTED])
>> >  Clarkson University Open Source Institute
>> >  z/Server Administrator

"If you can't measure it, I'm Just NOT interested!"(tm)
/************************************************************/
Barton Robinson - CBW                 Internet: [EMAIL PROTECTED]
Velocity Software, Inc                Mailing Address:
196-D Castro Street                   P.O. Box 390640
Mountain View, CA 94041               Mountain View, CA 94039-0640
VM Performance Hotline: 650-964-8867
Fax: 650-964-9012
Web Page: WWW.VELOCITY-SOFTWARE.COM
/************************************************************/
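
For reference, the 25% figure in Barton's reply is just the relative headroom between 80% busy and 100% busy: (100 - 80) / 80 = 0.25. Below is a small Python sketch of that arithmetic, together with the Dell/z800 throughput ratios computed from the dbench tables in Jason's post. The helper name max_extra_work is made up for illustration and is not part of dbench, z/VM, or any monitor product mentioned in the thread.

# CPU-headroom arithmetic from Barton's reply, plus the throughput
# ratio between the two machines in Jason's dbench tables.
# Numbers are copied from the thread; helper names are illustrative only.

def max_extra_work(cpu_busy_pct):
    """Best-case additional work if the CPU were driven to 100% busy."""
    return (100.0 - cpu_busy_pct) / cpu_busy_pct

# The run that peaked at 80% busy: at most 25% more work is possible.
print(f"headroom at 80% busy: {max_extra_work(80.0):.0%}")

# dbench average throughput (MB/s) per client count, from the thread.
z800 = {1: 123.111, 3: 116.729, 6: 99.0626, 9: 95.3577, 12: 95.2825,
        15: 91.6009, 18: 92.7745, 21: 91.8808, 24: 73.8885}
dell = {1: 395.545, 3: 281.957, 6: 275.292, 9: 285.756, 12: 262.4333,
        15: 248.314, 18: 237.879, 21: 221.74355, 24: 200.873}

for clients in z800:
    ratio = dell[clients] / z800[clients]
    print(f"{clients:2d} clients: Dell/z800 throughput ratio {ratio:.2f}x")

The single-client ratio works out to roughly 3.2x, which is in the same range as the "approximately 3 times faster" processor estimate quoted above.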
