Re: Documentation for Linux on z Systems and KVM - new

2015-10-01 Thread Sergey Korzhevsky
Hi Mark,

   Thank you for the detailed answer, but actually your vision does not
contradict mine; you just confirmed that the huge jump was made in the
last decade because of Linux on the mainframe (in z/VM particularly).
I hope that with people like you, the speed of progress won't slow
down.


WBR, Sergey




Mark Post 
Sent by: Linux on 390 Port 
28-09-15 23:39
Please respond to Linux on 390 Port

To: LINUX-390@VM.MARIST.EDU
cc:
Subject: Re: Documentation for Linux on z Systems and KVM - new


>>> On 9/28/2015 at 02:25 AM, Sergey Korzhevsky 
wrote:
> Alan Altmark wrote:
>>> What is it about z that makes virtualization work better?
>>50 years of work on it?
>
> That is an interesting answer. One thing that came to my mind is live
> guest relocation. As far as I could find, VMware introduced that feature
> (vMotion) in 2003, z/VM in 2011. The same goes for networking
> (Guest LAN/VSWITCH).
> So it looks like z/VM slept all those years and was woken up by the x86
> world recently.

Having been an active participant in and observer of the community for a
while now, I think I can contribute some perspective.  (From what I can
tell, you have been also, so I find your comment a little surprising.)

When Linux for the mainframe was first introduced, a lot of facilities we
take for granted today didn't exist:  Guest LANs, VSWITCHes, cooperative
memory management, and so on.  That started to change pretty quickly.
Things that actually helped running more than just a few instances of
Linux were introduced and made life much easier.  Live Guest Relocation
wasn't needed then, because not many shops were running huge numbers of
guests.  That pain came along later.  Even then, it wasn't for the same
reason that the x86 world wanted it.

Mainframe shops running Linux on z/VM didn't worry much about hardware
failures, and migrating workload to relieve overloaded servers usually
wasn't an issue because of decades of performance and capacity management.
What "we" wanted it for was that z/VM was so reliable it could run for
years, but sometimes maintenance needed to be applied to the system.
Trying to get multiple customers of the service to agree on a
maintenance window was becoming nearly impossible: although they
wanted High Availability, they weren't willing to actually invest in it,
so the workload couldn't be failed over to another server in a cluster.

There was another factor, although not a technical one.  Many customers
have become checklist driven.  If your product doesn't allow them to put
check marks in all the boxes on the list, it's obviously not a good
product and not worthy of consideration.  So, z/VM development was getting
reports from Sales that this function was needed, just to be "in the
game."  And, being the group that they are, z/VM development wanted to
approach the development needed in a more "system of systems" oriented way
than just bolting on a feature.  Thus, Single System Image was born, and
it took quite a while and a lot of people to bring to the market.  Taking
into account the various diversions that were forced on them during the
same period of time, it's amazing they got it out as quickly as they did.

I think most people that have been in the z/VM world for a long time would
agree that having Linux available on the mainframe has breathed new life
into z/VM.  Since then, they've been working hard to introduce things that
make sense for the mainframe environment.  What new items they work on,
and what priority they have, _can_ be influenced by current and potential
customers.  I encourage anyone who has thoughts on what those new items
should be to speak up, whether here or in the IBMVM mailing list, or at
SHARE.  There are people in these mailing lists and at SHARE that have a
direct line into the z/VM and Linux development groups at IBM.  Take
advantage of that.


Mark Post


--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/


Re: zLinux CPU monitoring

2015-10-01 Thread Mikael Wargh
Hmm, this sounds like a good way to do it too. We're already gathering some IND
based reports, but at much coarser intervals and without further reprocessing.
Maybe this would be the easiest way to handle the CPU part of the reporting.
Thanks Tore! :)
Looking at the big picture, I still think that the z/VM <--> Linux
programming/monitoring interfaces should be simpler and better documented.
Maybe the upcoming KVM will change this. After all, zLinux is moving more and
more into the open source world, and getting good application development on
this platform also needs more openness and community support.

-Mikael 

-Original Message-
From: Linux on 390 Port [mailto:LINUX-390@VM.MARIST.EDU] On Behalf Of Agblad 
Tore
Sent: 1. lokakuuta 2015 11:58
To: LINUX-390@VM.MARIST.EDU
Subject: Re: zLinux CPU monitoring

Hi, yes you will need figures from z/VM, and do some work yourself, as already
mentioned.
We save IND output for all zLinux servers from z/VM into a file every 5
minutes.
You get total CPU seconds used since boot.
So we read that generated file every night and calculate the difference for
each 5-minute interval, giving us per-mille CPU used (better than percent; we
can stick to integers).
We put that (and memory usage as well, by the way; active, that is: Proj.
WSET) in a MySQL database.
This is actually a C program, to get decent performance for the calculations :-)
Every Monday we run a 'batch' generating HTML pages including graphs for the
last week per server.
So we can take a look with almost zero response time.
And we can produce statistics and graphs of whatever you want.
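
A minimal Python sketch of that delta calculation (not Tore's actual C
program, and the one-sample-per-line log format here is an assumption):

from collections import defaultdict

def permille_usage(log_path):
    """Turn cumulative CPU seconds into per-mille CPU per interval.

    Assumes each log line is "<epoch-seconds> <server> <cpu-seconds>",
    where <cpu-seconds> is the cumulative total since boot (as from IND).
    """
    last = {}                  # server -> (timestamp, cumulative CPU seconds)
    usage = defaultdict(list)  # server -> [(timestamp, per-mille CPU), ...]
    with open(log_path) as f:
        for line in f:
            ts_field, server, cpu_field = line.split()
            ts, cpu = int(ts_field), float(cpu_field)
            if server in last:
                prev_ts, prev_cpu = last[server]
                elapsed = ts - prev_ts
                # A reboot resets the cumulative counter; skip that interval.
                if elapsed > 0 and cpu >= prev_cpu:
                    permille = int(1000 * (cpu - prev_cpu) / elapsed)
                    usage[server].append((ts, permille))
            last[server] = (ts, cpu)
    return usage

Note the per-mille figure here is relative to one CPU; a guest with several
virtual CPUs can legitimately exceed 1000.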


Tore Agblad
zOpen, IT Services

Volvo Group Headquarters
Corporate Process & IT
SE-405 08, Gothenburg  Sweden
E-mail: tore.agb...@volvo.com
http://www.volvo.com/volvoit/global/en-gb/ 


-Original Message-
From: Linux on 390 Port [mailto:LINUX-390@VM.MARIST.EDU] On Behalf Of Mikael 
Wargh
Sent: den 1 oktober 2015 7:37
To: LINUX-390@VM.MARIST.EDU
Subject: Re: zLinux CPU monitoring

This was good information, many thanks Tomas!
As we're going more open with LinuxONE, KVM and probably more to come, I really
think that at least the basic monitoring should have several options to choose
from. In the traditional x86 Linux world, the performance monitoring sector is
flooded with free choices. Of course the user base there is larger by several
orders of magnitude, but at least the z hardware platform is more homogeneous
than the variety on the distributed side. That makes me think that an easily
accessed and used monitoring stream interface shouldn't be too hard to
integrate on z/VM and zLinux. Now it seems there is one mentioned below, but
from a quick read of the presentation, the interface doesn't seem very elegant
and requires quite a lot of tuning. Anyway, I'm very interested to test it. :)

Maybe all this could be done in z/VM, but personally I'm not very fond of 
writing REXX execs there... 

-Mikael

-Original Message-
From: Linux on 390 Port [mailto:LINUX-390@VM.MARIST.EDU] On Behalf Of Pavelka, 
Tomas
Sent: 30. syyskuuta 2015 14:49
To: LINUX-390@VM.MARIST.EDU
Subject: Re: zLinux CPU monitoring

I was once researching something similar (i.e. how to get reliable CPU readings
from within Linux) but unfortunately never finished. I still have the links,
though; maybe you will find something interesting there:

Presentation about how Linux kernel can get accurate CPU readings from the 
hypervisor:
http://linuxvm.org/present/SHARE110/S9266ms.pdf

How to set up the hypervisor file system mentioned in the presentation above:
http://www-01.ibm.com/support/knowledgecenter/linuxonibm/com.ibm.linux.z.lgdd/lgdd_r_hypfs_setup.html

s390 Debug fs (I think this is where hyptop reads from):
https://www.kernel.org/doc/Documentation/s390/s390dbf.txt

Also look at /proc/[pid]/stat in the proc man page:
http://linux.die.net/man/5/proc

None of these will give you readings without additional work (like computing 
percentages from cumulative readings).
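
As a sketch of that additional work, here is one way to turn the cumulative
utime/stime counters in /proc/<pid>/stat into a percentage by sampling twice
and differencing (this only sees what the guest itself sees; it knows nothing
about what the hypervisor charges the guest, which is the whole point of
hypfs):

import os
import time

CLK_TCK = os.sysconf("SC_CLK_TCK")  # kernel clock ticks per second

def cpu_ticks(pid):
    with open(f"/proc/{pid}/stat") as f:
        stat = f.read()
    # The comm field is parenthesized and may contain spaces, so split
    # after the closing parenthesis.  utime and stime (fields 14 and 15
    # in proc(5)) then land at indexes 11 and 12.
    fields = stat.rsplit(")", 1)[1].split()
    return int(fields[11]) + int(fields[12])

def cpu_percent(pid, interval=5.0):
    before = cpu_ticks(pid)
    time.sleep(interval)
    after = cpu_ticks(pid)
    return 100.0 * (after - before) / CLK_TCK / interval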

HTH,
Tomas


Re: Documentation for Linux on z Systems and KVM - new

2015-10-01 Thread Scott Rohling
'The speed of progress'?  Is it not moving fast enough for you, Sergey?
Perhaps we need 'dinosaur crossing' signs stamped on the z... ?

I am personally not worried about slowing down the world ... I have failed
even when actively trying  ;-)  I stay in shape running to catch up.

Scott Rohling

On Thu, Oct 1, 2015 at 12:40 AM, Sergey Korzhevsky 
wrote:

> Hi Mark,
>
>    Thank you for the detailed answer, but actually your vision does not
> contradict mine; you just confirmed that the huge jump was made in the
> last decade because of Linux on the mainframe (in z/VM particularly).
> I hope that with people like you, the speed of progress won't slow
> down.
>
>
> WBR, Sergey
>

Re: Documentation for Linux on z Systems and KVM - new

2015-10-01 Thread Sergey Korzhevsky
>  I stay in shape running to catch up.
Good point :)


WBR, Sergey




Scott Rohling 
Sent by: Linux on 390 Port 
01-10-15 18:05
Please respond to Linux on 390 Port

To: LINUX-390@VM.MARIST.EDU
cc:
Subject: Re: Documentation for Linux on z Systems and KVM - new


'The speed of progress'?  Is it not moving fast enough for you, Sergey?
Perhaps we need 'dinosaur crossing' signs stamped on the z... ?

I am personally not worried about slowing down the world ... I have failed
even when actively trying  ;-)  I stay in shape running to catch up.

Scott Rohling


Re: SLES12 + EDEV + bug

2015-10-01 Thread Rick Troth
I hope my response helps.
There's a deepening tech debt w/r/t partitions.


On 09/30/2015 10:59 PM, Grzegorz Powiedziuk wrote:
> I think I might have found a small bug in the latest update for SLES12,
> so this is just an FYI for everyone who made the same mistake I did.
>
> If you use edevices, you know that the FBA driver in Linux automagically
> (like Mark explained it to me a few years ago ;) ) creates a device “1”
> (dasda1 for example) on the edevice.
> You end up with, for example, dasda + dasda1, and you use dasda1 for the OS
> (including LVM). No fdisks, no fdasds.

For FBA (including EDEV and SAN), use 'fdisk'.
(And forgive me for repeating some details that you already know.)
In my experience, the partition logic sees a default partition even when
one was not explicitly created.
I found that I could get rid of this ghost partition by explicitly
running 'fdisk' and writing an empty partition table. Then "dasda1" goes
away.
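
The same trick can be scripted; a minimal sketch, with the caveat that it is
destructive by design and the device name is only an example ('o' creates a
new empty DOS partition table in interactive fdisk, 'w' writes it and exits):

import subprocess

def blank_partition_table(device):
    # DESTRUCTIVE: replaces whatever partition table is on the device.
    subprocess.run(["fdisk", device], input="o\nw\n", text=True, check=True)

# blank_partition_table("/dev/dasda")   # example device name only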


> I haven’t used edevices in a while, so I forgot about that, and my
> recent SLES12 install I did on “dasda” (fba) instead of “dasda1”.
> I don’t know how I did that, but I did it and the SLES install wizard
> didn’t complain. The system installed fine and it was working OK.

Wow! Great!
I don't recall ever having persuaded the installer to use the whole
disk.
For FBA, it should work straight away because the partition table is
just blocks of data.
(For ECKD, it would not work unless you formatted with 'dasdfmt -d ldl'
to force consistently sized blocks across the whole disk.)


> pvscan on a build like this returned:
>   Volume Groups with the clustered attribute will be inaccessible.
>   PV /dev/dasda   VG root   lvm2 [19.53 GiB / 0 free]
>
> while in a proper installation it looks like this:
>
>   PV /dev/dasda1   VG lnx15   lvm2 [19.53 GiB / 0 free]
>   Total: 1 [19.53 GiB] / in use: 1 [19.53 GiB] / in no VG: 0 [0   ]

Somehow (either manually or via the installer) you did a 'pvcreate' on
the partition in the latter.

Both scans are legit.
LVM2 is (was??) pretty good about recognizing the PV signature and doing
the right thing, partitioned or not.


> Everything was working fine, until I did an update.
> Something has changed in the way LVM recognizes physical devices,
> and it totally breaks the whole system. It breaks all the parts
> where LVM is being called, which includes the little initrd
> which is loaded by zipl during the first stage of boot.

I expect Mark will chime in with the official SUSE response, but it sounds
like you should open a trouble ticket.

There is added pressure for the "all disks are partitioned" assumption
with the advent of UEFI.
There may also be a channel for PC-oriented assumptions via zLinux use
of GRUB. (Which is a good idea in general. Just that the GRUB developers
may need to be reminded that not all the world is a PC. And SUSE is very
good about getting those patches fed back up the chain.)
But I'm only speculating.


> I did some debugging and I found that with the latest update of SLES
> (I don’t know why, because LVM seems to be the same version)
> lvm doesn’t like having metadata on “dasda” (fba) anymore.
> It likes it only if it’s on dasda1.

That would be really really bad news.
But it may be just a symptom of a much smaller bug.


> When pvscan scans the disk it ends up with:
>   /dev/dasda: Skipping: Partition table signature found
> so it does not find the label, and it fails to bring the PV and volume
> group online.

So ... the ghost partition raises its head.

I DON'T KNOW if you can 'fdisk' and write an empty partition table on a
disk which is already stamped as a PV. Most filesystems are offset far
enough from block zero that they have no problem with that. (I have
stamped empty partition table on many EXT2/3 volumes. No damage.) Would
have to dig a little to learn where 'pvcreate' stamps the PV signature.
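
For what it's worth, a quick sketch of where the two stamps normally live:
the MBR partition-table signature is the two bytes 0x55 0xAA at offsets
510-511 of sector 0, and LVM2 writes its "LABELONE" label at the start of
one of the first four 512-byte sectors (sector 1 by default):

def inspect_disk(device):
    # Read the first four 512-byte sectors and look for both signatures.
    with open(device, "rb") as f:
        head = f.read(4 * 512)
    has_mbr = head[510:512] == b"\x55\xaa"
    lvm_sector = next(
        (n for n in range(4) if head[n * 512:n * 512 + 8] == b"LABELONE"),
        None,
    )
    print(f"{device}: MBR signature: {has_mbr}; "
          f"LVM2 label in sector: {lvm_sector}")

A disk can carry both stamps at once, which looks like exactly the state the
updated pvscan refuses.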

I take back my compliment to LVM2 for "doing the right thing"! [sigh]


> This edevice, if linked to an original SLES12 or any RHEL, works fine.
> LVM finds a label on /dev/dasdb (fba) and I can mount it without a problem.

If you can get a clean level-zero backup of that disk,
then I suggest you try the 'fdisk' trick on it from an alternate system.


> I didn’t find anything different in lvm.conf which could cause this.
>
> To fix this (well, it’s rather a dirty hack), I downloaded the
> source code of LVM, found the instruction where it exits on the message
> "Skipping: Partition table signature found", commented out
> that section, compiled, installed this lvm, rebuilt the initrds
> (both of them), and it worked. I got my system running again.

Nice work!


> So the lesson is to make sure that SLES is being installed
> on dasda1 (fba), not dasda. (This applies only if you have edevices;
> with standard ECKDs it’s probably OK to do it that way.)

NO
Don't settle for partitioned disks simply because "we've always done it
that way".
If you're using LVM on SAN (especially multi-path), you really *don't*
want the added 

Re: SLES12 + EDEV + bug

2015-10-01 Thread Christian Borntraeger
On 01.10.2015 at 04:59, Grzegorz Powiedziuk wrote:
> [...]
> When pvscan scans the disk it ends up with:
>   /dev/dasda: Skipping: Partition table signature found
> so it does not find the label, and it fails to bring the PV and volume
> group online.
> [...]
> Perhaps SLES shouldn’t allow installing a system this way at all?

Just guessing: maybe this is a safety net to prevent the user from creating a
physical volume on an ECKD DASD without a partition. For that case, not going
onto a partition is indeed wrong. But doing so on an EDEV is certainly fine,
so maybe it's just the check that is too unspecific.

Christian 

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/


Re: zLinux CPU monitoring

2015-10-01 Thread Alan Altmark
On Thursday, 10/01/2015 at 01:37 EDT, Mikael Wargh
 wrote:

> Maybe all this could be done in z/VM, but personally I'm not very fond
of
> writing REXX execs there...

You don't have to write them there, Mikael.  You just have to *run* them
there.  :-)

Alan Altmark

Senior Managing z/VM and Linux Consultant
Lab Services System z Delivery Practice
IBM Systems & Technology Group
ibm.com/systems/services/labservices
office: 607.429.3323
mobile: 607.321.7556
alan_altm...@us.ibm.com
IBM Endicott

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/


Re: SLES12 + EDEV + bug

2015-10-01 Thread Grzegorz Powiedziuk
Thank you, Rick, for your input.
Here are some more experiments.

>
> For FBA (including EDEV and SAN), use 'fdisk'.
> (And forgive me for repeating some details that you already know.)
> In my experience, the partition logic sees a default partition even when
> one was not explicitly created.
> I found that I could get rid of this ghost partition by explicitly
> running 'fdisk' and writing an empty partition table. Then "dasda1" goes
> away.
>
>
>
I did an experiment: I created new small EDEV devices and linked them to
two systems, the original SLES12 (from DVD) and the updated version of SLES12.
In both cases, running fdisk and saving the partition table did not get rid
of the “dasdb1” device.
It is, as you said, a ghost partition defined on the fly by the dasd/fba
driver. Nothing is actually written to the disk.
But creating a couple of dummy partitions and removing them afterwards does
leave some stuff on the disk (I've dumped it with dd and looked closely),
and it does get rid of that ghost partition! But the device is still not
usable by LVM in the updated SLES12. It still claims that there is a partition
table signature found. The original SLES12 doesn't care! LVM will be happy
to create labels on it.

At some point I created a label on dasdb1 and then on dasdb. Then I
dumped the first blocks of dasdb and looked for ASCII. I found both of the
labels, just 2 sectors from each other.

So it seems like something has definitely changed, and it is no longer
possible to use an edev without this ghost partition, while it was possible
in the DVD (ver 12.0.20141010) version of SLES12.
This may lead to big problems if someone who uses an edev like this does
an update.

Gregory




--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/


Re: zLinux CPU monitoring

2015-10-01 Thread Agblad Tore
Hi, yes you will need figures from z/VM, and do some work yourself, as already
mentioned.
We save IND output for all zLinux servers from z/VM into a file every 5
minutes.
You get total CPU seconds used since boot.
So we read that generated file every night and calculate the difference for
each 5-minute interval, giving us per-mille CPU used (better than percent; we
can stick to integers).
We put that (and memory usage as well, by the way; active, that is: Proj.
WSET) in a MySQL database.
This is actually a C program, to get decent performance for the calculations :-)
Every Monday we run a 'batch' generating HTML pages including graphs for the
last week per server.
So we can take a look with almost zero response time.
And we can produce statistics and graphs of whatever you want.

 
Tore Agblad 
zOpen, IT Services

Volvo Group Headquarters
Corporate Process & IT
SE-405 08, Gothenburg  Sweden 
E-mail: tore.agb...@volvo.com 
http://www.volvo.com/volvoit/global/en-gb/ 

