To some extent, I agree with your VM systems programmer.  If you are in
a VM paging state, adding an actively used vdisk may not be the best
thing for you to do.

When I had 17 images of Linux (small images) on my MP3000 with 1 GB of
memory, along with 7 VSE systems, any big hit in CP paging caused CICS
to take a major performance hit.  At that time I tried some vdisk, and
I made the mistake of using a single, large vdisk.  It didn't seem to
cause any problems.  Then I applied a service pack to Linux.  I knew it
would take a lot of CPU resources, so I dropped the VM share to REL 10.
Fine, it will take an extra hour.  No problem.  What I didn't expect
was that my 128 MB vdisk was now going to be actively used and reside
in main memory.  All the other systems started to get paged out (and
processes had to wait to be paged back in).

After that, almost all my swap was on real disk.  If a Linux machine
started swapping, it would only tie up one ESCON channel and really
only affect that machine.

Now, on my z/890, real memory isn't so much of a problem, yet.  And I
do have three sets of vdisk, at increasing sizes with different
priorities.  The small, high-priority vdisk seems to stay in storage.
The second one, which is occasionally used, I never see in storage.
The third, larger, low-priority one doesn't get used (but I haven't
tried to apply a service pack either).  Also, with Ficon/2 channels to
a DS6800, the I/O subsystem laughs at anything I throw at it.
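For what it's worth, a tiered setup like the one above can be sketched
roughly as follows.  This is only an illustration, not my actual
configuration: the device numbers, block counts, and swap priorities
are made up (FB-512 counts are 512-byte blocks, and in /etc/fstab a
higher pri= value means Linux uses that swap device first):

```
* CP user directory entry: three V-DISKs of increasing size
* (example device numbers and sizes only)
MDISK 0201 FB-512 V-DISK 262144 MR     * 128 MB, small high-priority tier
MDISK 0202 FB-512 V-DISK 1048576 MR    * 512 MB, middle tier
MDISK 0203 FB-512 V-DISK 4194304 MR    * 2 GB, low-priority overflow

# /etc/fstab on the Linux guest: higher pri= is used first
/dev/dasdb1   swap   swap   pri=30   0 0
/dev/dasdc1   swap   swap   pri=20   0 0
/dev/dasdd1   swap   swap   pri=10   0 0
```

Since a vdisk is volatile, each one has to be re-initialized with
mkswap on every IPL before swapon runs, which is usually handled by a
small boot script.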

But vdisk has other "memory" benefits:
1.  It doesn't populate MiniDiskCache.
2.  When it is actively used, it doesn't populate the dasd controller
cache.  However, it might be in controller cache when the vdisk is
paged back in.

I also haven't used the saved segment idea, yet.  It seemed like too
much trouble for the MB-per-image savings.  I don't clone the machines
either.  Just can't get my act together.  Most of my machines have
"differences".  Mostly I'm still on EXT3, but ReiserFS if it is a Samba
machine.  Some machines are a few disks, while others are 6-12 disks in
an LVM collection.  I just haven't come across a "pool" of machines
that are basically the same.  But if/when the saved segment thing gets
as easy as saving CMS is, I may revisit that issue.  And I can install
a zLinux machine from scratch in 2 hours, customized for the amount of
dasd and filesystems, with maintenance applied.

I tried sharing /usr, but it seemed like every time I turned around, I
was breaking it.  I.e., I made changes and now had to cycle all the
machines that were sharing it.  By the time I would get the procedures
down, the next release of SUSE would come out and I would start all
over again <G>.

I'm sure there is a point where XX images that are nearly the same
will start justifying some of these techniques.  I would think it is
more than 3, but how many more, I don't have a clue.  Anyway, I don't
have more than 3 images that look the same, currently.  But if I get
more convinced that the effort is worth it, I can force more machines
to look similar to achieve more of a payback.

Tom Duerbusch
THD Consulting

>>> [EMAIL PROTECTED] 2/23/2007 9:00 AM >>>
Sadly, no. I am not joking. I was told to remove it, so I did. Under
protest. Had it working perfectly. I think it's time to re-visit this,
however. We have more experience here now, and more than one person who
knows something about VM (considerably more than the person who
originally told me to remove the Vdisk).

What we're being directed to do here at the moment is shared kernels.
I'm running a MAXIMUM of 11 images right now, three of them on the Test
VM LPAR.  So what is the benefit?  I have two 'test' systems on the
prod VM.  One is my cloning/update server.  Two prod WebSphere, one
prod DB2 Connect/CVS/Bugzilla/MySQL instance, and 2 QA WebSphere
instances.  In that environment I am going to need at least two shared
kernel images, probably three.  So that will buy me the memory
footprint of maybe five images at maximum.

I see no upside to sharing the kernel in this environment.  I can see a
multitude of problems that it will cause.  I am even willing to bet
that putting that support into the kernel will cause it to be
considerably larger than it is now, so as to eat any minuscule gain
potentially available.  What it will do is greatly complicate my
ability to put on maintenance, back off maintenance, and otherwise keep
things running smoothly.  I will, with 100% certainty, be told to do it
anyway.

Linux on 390 Port <[email protected]> wrote on 02/22/2007
05:58:27 PM:

> You are joking, right?
>
>
>
> James Melin wrote:
>
> > A little caveat here.  We originally used vdisk, but I was asked
> > to stop using it by our Operating System person.  The reason being,
> > at the time we were memory constrained.  Not so much any more, but
> > we're still using more expanded storage, ratio-wise, than I am
> > comfortable with to turn vdisk on.  I now return you to your
> > regularly scheduled message.
> >
> >
> >
> >
> >              Dave Hansen <[EMAIL PROTECTED]>
> >              Sent by: Linux on 390 Port <[email protected]>
> >              Date: 02/22/2007 01:23 PM
> >              To: [email protected]
> >              Subject: Perceptions of zLinux
> >              Please respond to Linux on 390 Port <[email protected]>
> >
> > Hello Group,
> >
> >       We are looking at possibly reducing our memory usage and
> > making our SLES9 z/Linux "better".  There are different opinions
> > about what to do and what the costs are.
> >
> > A). VM Memory Disk (VDISK).  Currently we do not use VDISK for our
> > production zLinux servers on our z/VM 5.2 system.  I see SLES 10
> > recommends two VDISKs.  Is there a downside to using VDISK?  About
> > the only thing I saw is that VDISK doesn't do expansion or
> > contraction, so you may need to "monitor" its usage.  The popular
> > thing I can see is to VDISK the swap filesystem, which we don't
> > do.  It sounds to me that even if you only have one zLinux you
> > want to use VDISK (at least for swap).
> >
> > B). Shared Kernel.  This is an NSS and not just a DCSS for the
> > filesystems like "/var" (I think "/" or root is not supported in a
> > DCSS anymore).  So this is what I IPL.  If it's not there, I can't
> > IPL.  We have been kicking this idea around for a while.  But with
> > only a dozen penguins of different configurations (WebSphere or
> > not, monitors etc.), I'm not sure we would gain much.  Again I
> > looked at what it costs.  I found this in the IBM literature:
> > "Every virtual machine that IPLs your shared system must have the
> > same disk configuration as the system that was saved.  That is,
> > the disks must be at the same addresses and be the same sizes as
> > the virtual machine that issued the SAVESYS command."  What this
> > means to me is that every zLinux that IPLs the NSS has to have the
> > same filesystems (in size and number).  Furthermore, I found that
> > previously LKCD (SUSE Crash Dump) didn't work well when you IPLed
> > an NSS.  All I can find now is that there appears to be a new
> > LKCD2.  I also found LKDTT, but it requires a kernel patch and a
> > kernel re-compile.  I thought re-compile = unsupported (tainted)
> > zLinux?  So then who would want your dump?  Do we need to
> > re-compile the kernel for "crash dump" support?  Does it matter if
> > we IPLed from an NSS?  Are there any kernel parameters related to
> > using an NSS?  I saw a post from this year that talks about "Boot
> > from NSS support" and a parameter "SAVESYS=".  If we go through
> > with this for only a few penguins, is this worth it (besides
> > having a good procedure to grow the farm)?
> >
> >
> >    Thank you,  Dave H.
> >
> >
----------------------------------------------------------------------
> > For LINUX-390 subscribe / signoff / archive access instructions,
> > send email to [EMAIL PROTECTED] with the message: INFO
> LINUX-390 or visit
> > http://www.marist.edu/htbin/wlvindex?LINUX-390
