I don't believe that there is a performance problem with thousands of
volumes that are there.  The performance problem is with the thousands
of volumes that are not there.

In my case, when we brought in an IBM DS6800, I defined in the IOCP
that there were 150 volumes on each of the 8 controllers.  However, I
only define volumes on the DS6800 when I need them.  Some are mod-3s,
some are mod-9s, and there are even some 3380s.

I'm running VM, so my guests only see the volumes I give the guest
machine.  But in the native VSE world, when dynamic device sensing came
into play and VSE was in an LPAR, VSE IPLs took forever.  It seemed that
each non-existent device waited for the missing interrupt handler to
be tripped before the next volume would be sensed.  I seem to recall
that this was also a problem in VM.

My guess is that Linux would have the same problem.  So you wouldn't
want Linux to go out sensing all devices when you are in an LPAR world.
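(Side note, and the exact syntax here is an assumption on my part, with
made-up device ranges: later 2.6 kernels have a cio_ignore kernel
parameter that tells the common I/O layer not to sense the listed
subchannels at all, which addresses exactly this.  A sketch of a
zipl.conf parameters line - blacklist everything, then exempt the
ranges Linux actually owns:)

```
# zipl.conf fragment (device ranges are hypothetical):
# ignore all devices at IPL except 0.0.0100-0.0.010f and the console
parameters = "root=/dev/dasda1 cio_ignore=all,!0.0.0100-0.0.010f,!0.0.0009"
```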

Tom Duerbusch
THD Consulting

>>> Ivan Warren <[email protected]> 2/18/2009 2:03 PM >>>
Mark Post wrote:
> Basically, some historical, performance, and data integrity reasons.
>
>   
Ok,

I'm starting to get a better picture now (as to the how & why). As I
understand it, the bases are:

- An LPAR may have thousands of volumes allocated to it, not all of
them being for Linux use.
- Volumes not intended for Linux use may be accidentally stepped on.
- IPL time, ensuing from having many thousand devices, is taking an
inordinate amount of time. (Lordy.. issuing Sense-ID, RDC and a read of
cyl 0 track 1 on 10000 devices shouldn't take but a couple of seconds
anyway.. What's wrong here? All modern OSes do this in a routine
manner.. why would Linux be any different? Ok.. maybe I should check
with the folks in Böblingen.)

Right..

However (you knew that was coming right ?).. And besides the 
'historical' portion which is.. well.. historical..

In an LPAR environment where you (may) have thousands of volumes, with
maybe a few percent for Linux use (which is probably a bad idea to
start with - but I digress).. why doesn't mkinitrd *ONLY* take care of
the IPL volume (or volumes, if you're on LVM) - as the initrd was
designed - and then, depending on what config is on the root fs, enable
this or that volume once control has been passed to the init hosted on
the root fs (and pivot_root() has been done)? The list of configured
volumes (those that are designated for Linux use) is bound to be
available on the root fs anyway - so why not do it *then* (and not
while the init on the initrd is in control)? IIRC, /proc has enough
control over dasd_eckd (which is really the one at issue here, I think)
to ask it to vary this or that volume online *even* after the
initialization phase.
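To make that concrete - under the 2.6 sysfs interface (device bus-IDs
below are made up, and `chccwdev -e` would do the same job), bringing a
DASD online after boot is just a write to its online attribute.  This
sketch only *prints* the commands instead of executing them:

```shell
#!/bin/sh
# Sketch: print the sysfs writes that would bring each DASD online
# after the root fs is up.  Bus-IDs are hypothetical; in real use you
# would run the echo (or chccwdev -e) as root.
online_cmds() {
    for dev in "$@"; do
        # Real command: echo 1 > /sys/bus/ccw/devices/$dev/online
        printf 'echo 1 > /sys/bus/ccw/devices/%s/online\n' "$dev"
    done
}
online_cmds 0.0.0201 0.0.0202 0.0.0203
```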

Then of course, you have the VM environment.. which is (by design)
going to be tailored especially to your environment.. Adding a volume
should only be a matter of adding a minidisk to the user's directory
(or maybe a link, if the fs is designed to be RO.. through the
directory or maybe a CMS profile..) and modifying the fstab.. Having to
alter the initrd seems to me like an unnecessary and superfluous step.

What I am saying is that, eventually, you're going to wind up with
people running zipl/mkinitrd no matter what.. but there is *some*
(minor) danger to this! (Actually, of course, it's going to be
mkinitrd/zipl - in that order.) If one step succeeds and the next
fails, the system can't be booted *AT ALL* (basically - try running
mkinitrd without then running zipl!).. Not to mention that - if you
consider this a guardrail - it fails to accomplish its goal once you
have everyone routinely doing it (the safety-door 'symptom'.. the door
being blocked open with a sanding block because people are tired of
swiping their badge to open a door they go through 100 times a day -
also look at the infamous Vista UAC :P )..
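That coupling hazard can be sketched in plain shell (the functions
below are stand-ins, not the real tools - in real life this would just
be `mkinitrd <args> && zipl`): chaining the two steps with '&&' at
least guarantees the boot-record write never runs after a failed initrd
build:

```shell
#!/bin/sh
# Stand-in functions to illustrate '&&' coupling of the two steps.
build_initrd()  { echo "building initrd"; return 1; }  # simulate a failure
write_bootldr() { echo "running zipl"; }

if build_initrd && write_bootldr; then
    echo "boot setup complete"
else
    echo "stopped before zipl"
fi
```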

Again, this is *not*.. (I repeat.. *NOT*) a rant.. just throwing in
ideas of how I think things could (or maybe.. should?) be done - on a
mainframe.. with a mainframe population used to having complete control
of their environment (be it the whole thing, an LPAR or a virtual
machine) - in a Linux environment.

And again.. Mark.. thanks again for being here - and, in this
particular case, taking the time to answer my inquiries! And of course,
I'm just waiting to be proven wrong and change my mind about the whole
darn thing!

--Ivan


----------------------------------------------------------------------
For LINUX-390 subscribe / signoff / archive access instructions,
send email to [email protected] with the message: INFO LINUX-390
or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
