To preface this: it all started when our z/OS TCP/IP person tested z/OS-to-z/OS
HiperSockets performance and found it nearly the same as using a GbE OSA
connection. I immediately went 'huh?' and set about getting HiperSockets
configured in the z/VM Linux guests.

I then ran some tests of my own to get some benchmark numbers.

Since I had trouble believing the numbers this person was getting (2.70
MB/sec for a HiperSockets transfer from z/OS LPAR to z/OS LPAR), I did my own
testing and got the following:

I'm surprised by the Linux-to-z/OS and z/OS-to-z/OS performance. If I am
going to use this to whip DB2 data about at warp speed, I have GOT to fix
something. Any ideas/interpretations of this are welcome.


Results of sending the file from z/VM Linux guest pepin to Linux LPAR nokomis
using the GbE OSA connection (z/900 to z/800):

100% |*************************************|   593 MB   13.45 MB/s    00:00 ETA
226 File receive OK.
621993984 bytes sent in 00:44 (13.45 MB/s)

Results of sending the file from z/VM Linux guest pepin to z/VM Linux guest
calhoun using the HiperSockets interface address of calhoun:

100% |*************************************|   593 MB   39.02 MB/s    00:00 ETA
226 File receive OK.
621993984 bytes sent in 00:15 (39.02 MB/s)

Results of sending the file from z/VM Linux guest pepin to z/VM Linux guest
calhoun using the OSA IP address of calhoun:

100% |*************************************|   593 MB   16.18 MB/s    00:00 ETA
226 File receive OK.
621993984 bytes sent in 00:36 (16.18 MB/s)

Results of sending the file from z/VM Linux guest pepin to z/OS LPAR owl0
using the OSA IP address for owl0:

125 Storing data set /it/public/Su810_001.iso
100% |*************************************|   595 MB    2.68 MB/s    --:-- ETA
250 Transfer completed successfully.
624885855 bytes sent in 03:41 (2.68 MB/s)

Results of sending the file from z/VM Linux guest pepin to z/OS LPAR owl0
using the HiperSockets interface address for owl0:

125 Storing data set /it/public/Su810_001.iso
100% |*************************************|   595 MB    2.86 MB/s    --:-- ETA
250 Transfer completed successfully.
624885855 bytes sent in 03:28 (2.86 MB/s)
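To sanity-check the figures above, here is a small Python sketch of my own
(not part of the original tests) that recomputes throughput from the byte
counts and whole-second elapsed times the FTP client printed. The
second-level rounding in the elapsed times accounts for the small
differences from the displayed MB/s values.

```python
# Recompute MB/s from bytes sent and elapsed seconds, as printed by the
# FTP client above (MB here means 2**20 bytes, matching the client).

def mb_per_sec(bytes_sent, seconds):
    return bytes_sent / seconds / 2**20

# Byte counts and times taken straight from the transfer logs above.
guest_to_guest_osa   = mb_per_sec(621993984, 44)        # reported 13.45 MB/s
guest_to_guest_hiper = mb_per_sec(621993984, 15)        # reported 39.02 MB/s
guest_to_zos_hiper   = mb_per_sec(624885855, 3*60 + 28) # reported  2.86 MB/s

print(f"guest-to-guest OSA:          {guest_to_guest_osa:.2f} MB/s")
print(f"guest-to-guest HiperSockets: {guest_to_guest_hiper:.2f} MB/s")
print(f"guest-to-z/OS  HiperSockets: {guest_to_zos_hiper:.2f} MB/s")
```

The arithmetic makes the oddity plain: between two Linux guests,
HiperSockets runs at roughly three times the OSA rate, yet the same
HiperSockets path into z/OS is more than ten times slower than either.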

Just to compare, I allocated a flat (sequential) file of sufficient size and
tried the send again, after allowing for MVS data set naming conventions.
This was to factor out any overhead of the HFS file system versus a native
data set. It was marginally faster.

125 Storing data set SY4080.SU810.CD001.ISO
100% |*************************************|   595 MB    2.93 MB/s    --:-- ETA
250 Transfer completed (data was truncated)
624885855 bytes sent in 03:23 (2.93 MB/s)
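To put a number on "marginally faster", a quick back-of-the-envelope
comparison (my own arithmetic, using the byte count and whole-second times
reported above) of the HFS target versus the sequential data set target:

```python
# Compare the two z/OS-target runs from the logs above:
#   HFS file:            624885855 bytes in 3:41 (221 s)
#   sequential data set: 624885855 bytes in 3:23 (203 s)

bytes_sent  = 624885855
hfs_seconds = 3*60 + 41
seq_seconds = 3*60 + 23

hfs_rate = bytes_sent / hfs_seconds / 2**20  # reported 2.68 MB/s
seq_rate = bytes_sent / seq_seconds / 2**20  # reported 2.93 MB/s
speedup  = hfs_seconds / seq_seconds

print(f"HFS target:      {hfs_rate:.2f} MB/s")
print(f"data set target: {seq_rate:.2f} MB/s")
print(f"native data set is ~{(speedup - 1) * 100:.0f}% faster")
```

Roughly a 9% improvement, so the HFS overhead is real but nowhere near
enough to explain the 13x gap between guest-to-guest and guest-to-z/OS
HiperSockets throughput.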

----------------------------------------------------------------------
For LINUX-390 subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
