On 2/20/07, Rob van der Heij <[EMAIL PROTECTED]> wrote:
On 2/20/07, Tom Duerbusch <[EMAIL PROTECTED]> wrote:
> In what cases is this a problem? I want to know in what circumstances
> I'm going down a path that is different from what I think.
I fear I don't know the details anymore of that particular event. All
I remember is that fixes were required to make Linux use mini disks
allocated on a later or different model of the ESS. But if you look at
a snippet of source code, like this one for example:
http://lxr.linux.no/source/drivers/s390/block/dasd_eckd.c?v=2.6.18;a=s390#L67
you can see the information is there and there obviously was a reason
to differentiate.
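The pattern in the linked file is essentially a static table of known control-unit types, with the device characteristics matched against it. A minimal sketch in C of that idea (the struct, entries, and function name here are my own invention for illustration, not the actual dasd_eckd.c definitions):

```c
/* Hedged sketch of a driver that recognizes devices from a built-in
 * table of control-unit types/models. A model that is not listed falls
 * through -- which is why a newer DASD box can need driver fixes. */
#include <stddef.h>

struct dasd_model {
    unsigned short cu_type;   /* control-unit type, e.g. 0x3990 */
    unsigned char  cu_model;  /* specific model, or 0xFF as a wildcard */
    const char    *name;
};

/* Illustrative entries only; a real table lists many more devices. */
static const struct dasd_model known_devices[] = {
    { 0x3990, 0xFF, "ECKD 3990" },
    { 0x2105, 0xFF, "ESS" },
    { 0x2107, 0xFF, "DS8000" },
};

const char *dasd_lookup(unsigned short cu_type, unsigned char cu_model)
{
    unsigned i;
    for (i = 0; i < sizeof(known_devices) / sizeof(known_devices[0]); i++)
        if (known_devices[i].cu_type == cu_type &&
            (known_devices[i].cu_model == 0xFF ||
             known_devices[i].cu_model == cu_model))
            return known_devices[i].name;
    return NULL;  /* unknown control unit: the driver cannot use it */
}
```

The point is that the guest is making decisions based on real-hardware details that shine through the virtualization layer.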
And I certainly know of one case where disk I/O performance was
worse on new DASD because of such differentiation. Would you believe
that, still in this century, our friends in Boeblingen implemented RPS
(rotational position sensing) in the disk driver?
Linux is not unique in this. I still remember the SE for our
non-IBM DASD installing an (unplanned) performance fix in the
microcode. All it did (as we learned after some brown stuff hit the
air movement devices) was set a bit in a status field returned to
the OS, which z/OS used to identify features of 3990 control units.
While the concept did not apply to this RAID device, it fooled z/OS
enough to exploit some of the features. Unfortunately, VM also passed
that status field to VSE guests, which then failed to recognize the
control unit and gave up.
> For example, right now, our zLinux disaster recovery option is to find
> a replacement processor (perhaps the next processor off of IBM's line),
Would you also be willing to move the load of a z990 to someone's z9
for disaster recovery? Back when I was involved in such things, I
remember we applied preventive maintenance to VM for future models
because of such arrangements. But VM does not isolate the guest from
those details (as folks using ISV products with CPUID-based license
codes know). We have just been spoiled by CMS behaving so well that we
take all this for granted.
I often disagree with Alan (also) about this. Probably because I
sometimes do get out ;-)
I believe VM should reveal far less details about the actual resources
so we don't challenge developers to take advantage of the details that
shine through our virtualization. I believe there's often no value in
a class G user knowing what real device or real volser his minidisk
is on: it's a disk of a certain size, and that's all you need to know.
Let alone a Q MDISK that reveals the actual owner of an indirect link.
Yes, we had a smart developer who found which disk $FORTRAN 200 really
was and improved his performance by linking to it directly. So after I
upgraded, he was still using the old version and reported problems that
I was sure I had fixed...
If we did not have VSWITCH, most certainly the OSA devices would have
caused a nightmare too. The problems we have seen there are mostly
because they chose to simulate existing hardware to avoid writing one
extra clean and elegant driver in Linux.
It all comes down to abstraction and architecture. VM needs to provide
an abstraction of the true resources in such a way that the virtual
machine can make clear what function is needed, and CP can then
deliver that in the "best" way (weighing technical and economical
aspects). The challenge is to have an abstraction that is powerful and
flexible enough, and yet efficient to implement and compatible till we
retire. Normally that means a high-level interface. Having both VM
and the guest imitate the low-level interface of a device that went
extinct 25 years ago is a burden to both...
To define such abstractions is hard work, but I strongly believe it is
more effective than trying to imitate the latest hardware. If it were
my choice, Linux on z/VM would just be using (an improved version of)
the diagnose I/O interface rather than doing channel programs.
Yes, I will shut up now. Rob
----------------------------------------------------------------------
For LINUX-390 subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
We installed four SLES 9 systems on a z890 two years ago, and the last
one was migrated to zVM about a month ago. The reasons are simple:
1) financials: we can now overcommit memory (we run at 2:1). Since
we centralized under zVM a lot of functions that used to be done in
each Linux, we save sysadmin time not only in creating and configuring
Linux systems but in day-to-day administration; i.e., we can support
more systems per staff member. Our DR drills go faster, so we save on
the test time we lease once a year. FICON and OSA are a lot easier to
share with zVM, saving us hardware. We share binaries across LPARs,
saving us disk, memory and some CPU.
2) performance: we can now manage high-priority guests alongside less
important guests much more easily. When running under LPAR we had a
tendency to put multiple databases and multiple applications in each
Linux. Under zVM, we can divide the work into many guests, each with a
SET SHARE that reflects our business requirements.
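For what it's worth, the prioritization in 2) is done with the SHARE
directory statement or the CP SET SHARE command. A sketch (the user IDs
and values below are invented examples, not our actual settings):

```
* In the CP user directory: the database guest gets four times
* the relative share of a test guest
USER DBSRV1 ...
   SHARE RELATIVE 400
USER TESTLNX ...
   SHARE RELATIVE 100

* Or adjusted dynamically by an authorized user:
CP SET SHARE DBSRV1 RELATIVE 400
```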
3) accounting: we can now see what each application, and each component
of an application, really uses. Before, we depended on accounting data
that under Linux is very incomplete (compared to AIX or Sun). With zVM,
and by dividing the work into small guests, we can account much better
for what we use to run those applications and relate it back to budget
and costing.
4) capacity: we can now collect monwrite data for all guests and
manage it much more easily than trying to collect sar data from
every Linux LPAR. In fact, we discontinued that practice, saving us
third-party tools and processing.
5) we experienced a lot of the benefits pointed out on this list. One
of them is that VSWITCH is a lot easier to implement than VIPA.
There are disadvantages as well. No matter how you look at it, we now
have zVM to install, patch, support and maintain. We did not have zVM
expertise, but were lucky that our zOS systems programmers took
ownership. We now pay more to IBM for licenses. However, when we did
a three-year ROI, we came out 30% ahead compared to going with AIX or
Sun (our other platforms). When we compared Linux in LPARs with zVM,
we had no choice: we are growing our Linux farm so fast that managing
an LPAR for each Linux, with its memory separation and so on, was not
possible. The last holdout was performance, but when we showed that,
for a small CPU penalty, Linux swap on zVM virtual disks gave us such
control over the performance impact, our customers quickly accepted
the added complexity.
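To illustrate that swap setup: a virtual disk in storage is defined in
the guest's directory entry and then initialized as swap from inside
Linux. The device numbers and sizes below are invented examples:

```
* In the guest's directory entry: a virtual disk in storage at
* virtual device 0201, 524288 512-byte blocks (256 MB)
MDISK 0201 FBA V-DISK 524288

# Inside the Linux guest (the device name depends on your setup):
mkswap /dev/dasdb1
swapon /dev/dasdb1
```

Because the VDISK lives in main storage, swapping to it is cheap, and
CP only backs its pages with real memory when the guest actually swaps.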
Hope this helps.