IPL Redhat, Maintenance mode, HMC

2023-08-21 Thread Grzegorz Powiedziuk
My linux in LPAR has an issue. Probably a missing device in fstab.
Should be an easy fix, but I can't get into maintenance mode anymore. I
am pretty sure that in the past, using the "Operating System Messages"
task in the HMC, I would just provide a password and it would log me in.
But this time I am stuck in a loop, probably something to do with
"terminal does not support clear".
Any idea why this is happening (why I can't get into maintenance mode)?
The integrated ASCII terminal is not connecting at all at this stage.

You are in emergency mode. After logging in, type "journalctl -xb" to view
system logs, "systemctl reboot" to reboot, "systemctl default" or "exit"
to boot into default mode.
Give root password for maintenance
(or press Control-D to continue):

open terminal failed: terminal does not support clear
Reloading system manager configuration
Starting default target
You are in emergency mode. After logging in, type "journalctl -xb" to view
system logs, "systemctl reboot" to reboot, "systemctl default" or "exit"
to boot into default mode.
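For what it's worth, the sulogin loop aside (forcing a simpler terminal type, e.g. adding TERM=dumb to the kernel parameters at the zipl prompt, sometimes gets past the "terminal does not support clear" complaint), the fstab fix itself is small once any shell comes up. A sketch against a demo file so nothing real is touched; on the real system the file would be /etc/fstab, likely after "mount -o remount,rw /":

```shell
# Demo stand-in for /etc/fstab; device names are invented examples.
FSTAB=/tmp/fstab.demo
cat > "$FSTAB" <<'EOF'
/dev/dasda1 /     ext4 defaults 0 1
/dev/dasdb1 /data ext4 defaults 0 2
EOF
# Comment the suspect entry out rather than deleting it, keeping a backup.
sed -i.bak 's|^/dev/dasdb1|#/dev/dasdb1|' "$FSTAB"
grep '^#' "$FSTAB"    # -> #/dev/dasdb1 /data ext4 defaults 0 2
```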
Thanks!
Gregory

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www2.marist.edu/htbin/wlvindex?LINUX-390


Re: Openshift SNO on Z (KVM)

2022-10-17 Thread Grzegorz Powiedziuk
On Mon, Oct 17, 2022 at 3:56 AM Christian Borntraeger <
borntrae...@linux.ibm.com> wrote:

> Am 17.10.22 um 05:24 schrieb Grzegorz Powiedziuk:
> > Hello,
> > has anyone tried installing openshift SNO (single node) on Z?
> > I was able to install it on x86 using the official procedure from redhat.
> > But going through the same process with KVM on Z doesn't work well so
> far.
> >
> > I know it is not supported etc. This is just for testing/learning and
> fun.
>
> Just reached out to our OCP team and in fact they have not touched SNO on
> the
> IBM zSystems platform so far. Smallest OCP entity they have tested and
> therefore
> support is 3-Node-Cluster.
> Is there any usecase you follow except for fun and learning? A compelling
> case
> will help to get some focus on SNO.
>
>
Thank you, Christian. There is a potential use case: training/classes.
Having an SNO for every student in a class, instead of a full 5-node
cluster (or even 3), is much more feasible.
And in general, I would think there are clients out there who don't
currently have enough memory and IFLs on their mainframes to spin up a
5-node OCP cluster, so giving them SNO would at least give them a chance
to try it out. 8 vCPUs/32G/120G is much more appealing for a small POC :)
Besides that, installing SNO would probably be much easier and faster than
installing a full cluster, so less terrifying for new users who just want
to see if it works with their apps.

x86-ers have SNO. It would be nice to have it too :)



Openshift SNO on Z (KVM)

2022-10-16 Thread Grzegorz Powiedziuk
Hello,
has anyone tried installing openshift SNO (single node) on Z?
I was able to install it on x86 using the official procedure from redhat.
But going through the same process with KVM on Z doesn't work well so far.

I know it is not supported etc. This is just for testing/learning and fun.

I am asking for any tips and clues so I don't go too deep down the rabbit
hole.
thanks!
Greg



Re: Recovery assistance.

2021-07-26 Thread Grzegorz Powiedziuk
It's been a while, but as far as I remember, in Red Hat (ClefOS should be
similar), for single user mode (so no network) you can use the
"#cp vi vmsg" combo.
When the zipl menu shows up, you have a few seconds to type in your choice
along with additional kernel parameters.
So you can do, for example:
#cp vi vmsg 0 1
(0 for the first choice from the list and 1 for single user mode). If you
feel that single user mode is not right for your task, I am sure there are
other options you could pass via "vi vmsg". (BTW, the "vi" at this point
has nothing to do with the Linux vi program; it stands for "virtual input"
AFAIK.) But it sounds like this should work just fine for you.

After you get into the single user mode, sed is a good option to do
what you want

Changing the IP address with sed in the ifcfg script is pretty easy. For
example:
sed -i 's/10.15.15.7/10.25.25.8/' ./ifcfg-filename
Without "-i" it would just print the result without saving.
"s" is for "substitute", followed by the string you want to change and the
replacement string.
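The same workflow with a dry run first, shown against a throwaway copy (file name and addresses here are examples, not the poster's real values). One nuance worth adding: in the pattern, "." matches any character, so escaping the dots makes the match exact:

```shell
cfg=/tmp/ifcfg-demo
printf 'IPADDR=10.15.15.7\nPREFIX=24\n' > "$cfg"
# Dry run: without -i, sed prints the result and leaves the file alone.
sed 's/10\.15\.15\.7/10.25.25.8/' "$cfg"
# In-place edit, keeping a .bak backup of the original.
sed -i.bak 's/10\.15\.15\.7/10.25.25.8/' "$cfg"
grep IPADDR "$cfg"    # -> IPADDR=10.25.25.8
```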

Other thoughts
Can you just link that minidisk to another running zLinux, bring it
online, mount it, and get whatever you want, or even update that ifcfg
script at that point?  That's what I usually do.
It might get slightly more difficult if you have the same root VG name
everywhere, but there are workarounds for that too.
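The link-and-mount route, sketched as a console session. Everything here is an invented example (guest name BROKEN, minidisk 201, local vdev 291); vmcp needs LINK authority, a write link generally requires the other guest to be logged off, and these commands only exist on Linux on Z:

```
vmcp link BROKEN 201 291 rr     # read-only link to the broken guest's disk
chccwdev -e 0.0.0291            # set the DASD online
lsdasd                          # find the node it got, e.g. /dev/dasdc
mount -o ro /dev/dasdc1 /mnt    # mount and copy out what you need
```

With a write link (mode w, the other guest logged off) the same sequence lets you edit the ifcfg script in place.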

Also, sometimes when I work with a clone and I want to change the IP
address so it doesn't mess anything already live, I would  boot it without
a network interface being coupled to a virtual switch. Then I would update
the network scripts (with sed since ssh of course would not be available).
thanks
Gregory

On Thu, Jul 22, 2021 at 8:55 AM Frank M. Ramaekers
 wrote:

> I need to recover some files from a FLASHCOPY backup of my ClefOS
> instance.  I'm currently in the process of restoring these to different
> disk locations.
>
> My questions are:
>
> 1)  How can I bring this new zLinux (VM) up in maintenance mode to
> change the IP address (in /etc/sysconfig/network-scripts)
>
> 2)  What method can I use (sed?) to change these ip address since vi
> doesn't work very well on the VM console for zLinux
>
> After I do these, I should be able to bring it up normally and retrieve
> the programs/scripts/data that I need?
>
> Other thoughts?
>
> Frank Ramaekers Jr. | Systems Senior Administrator | CIS Mainframe Services
> Unisys | O-(512) 387-3949 | M-(254) 214-1820 |
> francis.ramaek...@unisys.com
>



Re: Question About LGR and Hipersocket using RHEL 7.7

2021-05-06 Thread Grzegorz Powiedziuk
On Thu, Apr 8, 2021, 12:12 PM Davis, Larry (National VM Capability) <
larry.dav...@dxc.com> wrote:

> We are seeing an Issue when Using Hipersockets connected to a DB2 system
> on z/OS
>
> When we perform a VMRELOCATE (LGR) to another member in the complex we
> lose the HS device and it goes offline
> The Relocate works fine the HS devices have the same EQID and we see the
> devices attached on the receiving system
> OSA 3A00 ATTACHED TO LXCGG01D 08A0 BY LXCGG01D
> OSA 3A01 ATTACHED TO LXCGG01D 08A1 BY LXCGG01D
> OSA 3A02 ATTACHED TO LXCGG01D 08A2 BY LXCGG01D
>
> Then we need to perform
> cio_ignore -r 0.0.08a0
> cio_ignore -r 0.0.08a1
> cio_ignore -r 0.0.08a2
>
> Then we have to restart the network devices
>
> For some reason during the LGR and/or reboot the devices are getting
> varied off and/or disabled.
>
> Am I missing something in the devices characteristics
>
> Also We are using a VSWITCH for 1G and 10G OSA's and they are reconnecting
> as expected, but HS is disappearing
>
> Larry Davis
> Senior z/VM systems Architect, Enterprise Services
> T +1.813.394.4240
> DXC Technology dxc.technology
>
>
>
How are you activating these HSI devices? Did you use znetconf, and did
you add an ifcfg-enccw0. script? Or do you have some other way of
configuring those on boot?
I also use RHEL 7.x with HiperSockets and I haven't noticed a problem like
this. I used znetconf initially to configure it live, and then added a
proper ifcfg script.
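For comparison, a persistent HiperSockets definition on RHEL 7 typically ends up as an ifcfg file along these lines. Every value below (device numbers, layer2 setting, addresses) is an invented example; the real ones come from znetconf and the IOCDS:

```
# /etc/sysconfig/network-scripts/ifcfg-enccw0.0.08a0  (hypothetical)
DEVICE=enccw0.0.08a0
TYPE=Ethernet
NETTYPE=qeth
SUBCHANNELS=0.0.08a0,0.0.08a1,0.0.08a2
OPTIONS="layer2=1"
BOOTPROTO=static
IPADDR=10.0.0.2
PREFIX=24
ONBOOT=yes
```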
Gregory



Re: performance problems db2 after moving from AIX

2020-11-04 Thread Grzegorz Powiedziuk
On Tue, Nov 3, 2020 at 8:46 AM Grzegorz Powiedziuk 
wrote:

> Hi, I could use some ideas. We moved a huge db2 from old p7 aix to rhel7
> on Z and we are having big performance issues.
>

What about enabling SMT in z/VM? Would the 10-CPU db2 take advantage of
this? On the p8 they had SMT-8 turned on.
Gregory



Re: performance problems db2 after moving from AIX

2020-11-03 Thread Grzegorz Powiedziuk
On Tue, Nov 3, 2020 at 3:25 PM Jim Elliott  wrote:

> Gregory,
>
> Yes, thrashing. :-)
>
>
I like my name better and it even fits better :)
I will keep an eye on page faults tomorrow, but we are not overcommitting
memory at all. Unless something inside DB2 is cooking, but in Linux there
is no swapping and in z/VM no paging whatsoever.
DB2 grabs about 95% of the memory, and it all goes into "shmem", which it
uses for its buffers and such. It's not like DB2 has its own internal
paging? Even if it did, I am sure the DBAs would be screaming by now.
Although the ~20% CPU time spent in kernel mode is something I've been
questioning (the rest goes straight into user time). But I have no clue
whether that is a lot for this type of workload or not. I've been blaming
the massive number of filesystems/logical volumes (152) and the huge
number of threads, processes, and FDs.
How do you best determine that there is no other thrashing?
thanks!

Gregory


> Jim Elliott
> Senior IT Consultant - GlassHouse Systems Inc.
>
>



Re: performance problems db2 after moving from AIX

2020-11-03 Thread Grzegorz Powiedziuk
On Tue, Nov 3, 2020 at 1:58 PM Grzegorz Powiedziuk 
wrote:

> Thanks Christian.
> There is no paging (swapping) here besides just the kernel's regular
> housekeeping (vm.swappiness=5).
> RHEL 7 doesn't give me diag_stat in the debug filesystem, hmm
>
> On Tue, Nov 3, 2020 at 12:37 PM Christian Borntraeger <
> borntrae...@linux.ibm.com> wrote:
>
>>
>> So you at least had some time where you paged out memory.
>> If you have sysstat installed it would be good to get some history data of
>> cpu and swap.
>>
>> You can also run "vmstat 1 -w" to get an online view of the system load.
>> Can you also check (as root)
>> /sys/kernel/debug/diag_stat
>> 2 times and see if you see excessive diagnose 9c rates.
>>
>>
In the Performance Toolkit it shows around 12,000 diag x'9c'/s
and 50 x'44'.
But at this time of day everything is calm. I will check again tomorrow.
Lots of diag x'9c' would indicate too many virtual CPUs, right?
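For the record, turning those counters into a rate is just two samples and a division; the counter values below are invented for illustration (on a real system they come from reading /sys/kernel/debug/diag_stat twice, N seconds apart):

```shell
c1=1200000    # diag x'9c' count, first sample (invented)
c2=1320000    # same counter 10 seconds later (invented)
interval=10
rate=$(( (c2 - c1) / interval ))
echo "$rate"    # -> 12000 diagnoses per second
```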



Re: performance problems db2 after moving from AIX

2020-11-03 Thread Grzegorz Powiedziuk
On Tue, Nov 3, 2020 at 1:57 PM Jim Elliott  wrote:

> Gregory,
>
> The 9117-MMD could range from 1 chip/4 cores all the way up to 16 chips/64
> cores at either 3.80 or 4.22 GHz. If it has 15 cores, then it was likely
> the 4.22 GHz 5 chip/15 core version. Using 10 out of 15 cores (even at 100%
> busy) should fit on 5 z14 ZR1 or z14 M0x IFLs. Sounds like there is
> something causing thrashing. Do you have a z/VM performance producs
> (Velocity or IBM?) as that might help isolate where the bottleneck is.
>
>
Thanks Jim, this is encouraging. We do have a performance monitor toolkit
and I've been running sar in here as well.
When you say trashing, do you mean memory thrashing?



Re: performance problems db2 after moving from AIX

2020-11-03 Thread Grzegorz Powiedziuk
On Tue, Nov 3, 2020 at 1:35 PM r.stricklin  wrote:

>
> I recently had a vaguely similar problem with a much smaller database on
> linux (x86, mysql for zabbix) that presented bizarre performance issues
> despite clearly having lots of resources left available.
>
> What our problem ended up being was the linux block i/o scheduler deciding
> to defer i/o based on seek avoidance, and under heavy database use this was
> causing havoc with mysql's ability to complete transactions. It seems
> absurd to preferentially avoid seeks when you have SSD. The problems
> vanished instantly when we changed the i/o scheduler on the SSD block
> devices to 'noop'.
>
> I think it's worth checking in your case.
>
>
OH! This is a great idea. I'd completely forgotten about this. We are
using the default deadline scheduler, and noop could save some CPU cycles!
Thank you! I will definitely consider changing the scheduler.
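For anyone following along: the active scheduler is the bracketed entry in sysfs. The two commented lines are the usual check-and-switch (device name invented; persisting it needs elevator= on the kernel command line or a udev rule); the runnable part just parses the bracket convention:

```shell
# cat /sys/block/dasda/queue/scheduler     -> e.g. "noop [deadline] cfq"
# echo noop > /sys/block/dasda/queue/scheduler
sched_line="noop [deadline] cfq"           # sample sysfs output (invented)
active=$(printf '%s\n' "$sched_line" | sed -n 's/.*\[\(.*\)\].*/\1/p')
echo "$active"    # -> deadline
```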



Re: performance problems db2 after moving from AIX

2020-11-03 Thread Grzegorz Powiedziuk
Thanks Christian.
There is no paging (swapping) here besides just the kernel's regular
housekeeping (vm.swappiness=5).
RHEL 7 doesn't give me diag_stat in the debug filesystem, hmm

On Tue, Nov 3, 2020 at 12:37 PM Christian Borntraeger <
borntrae...@linux.ibm.com> wrote:

> On 03.11.20 14:46, Grzegorz Powiedziuk wrote:
> > Hi, I could use some ideas. We moved a huge db2 from old p7 aix to rhel7
> on
> > Z and we are having big performance issues.
> > Same memory, CPU number is down from 12 to 10.  Although they had
> > multithreading ON so they saw more "cpus" We have faster disks (moved to
> > flash), faster FCP cards and faster network adapters.
> > We are running on z114 and at this point that is practically the only VM
> > running with IFLs on this box.
> >
> > It seems that when "jobs" run on their own, they finish faster than what
> > they were getting on AIX.
> > But problems start if there is more than we can chew. So either few jobs
> > running at the same time or some reorgs running in the database.
> >
> > Load average goes to 150-200, cpus are at 100%  (kernel time can go to
> > 20-30% ) but no iowaits.
> > Plenty of memory available.
> > At this point everything becomes extremely slow, people are starting
> having
> > problems with connecting to db2 (annd sshing), basically it becomes a
> > nightmare
> >
> > This db2 is massive (30+TB) and it is a multinode configuration (17 nodes
> > running on the same host). We moved it like this 1:1 from that old AIX.
> >
> > DB2 is running on the ext4 filesystem (Actually a huge number of
> > filesystems- each NODE is a separate logical volume). Separate for logs,
> > data.
> >
> > If this continues like this, we will add 2 cpus but I have a feeling that
> > it will not make much difference.
> >
> > I know that we end up with a massive number of processes and a massive
> > number of file descriptors (lsof sice it shows also threads now, is
> > practically useless - it would run for way too long - 10-30 minutes
> > probably) .
> >
> > A snapshot from just now:
> >
> > top - 08:37:50 up 11 days, 12:04, 28 users,  load average: 188.29,
> 151.07,
> > 133.54
> > Tasks: 1843 total,  11 running, 1832 sleeping,   0 stopped,   0 zombie
> > %Cpu0  : 76.3 us, 16.6 sy,  0.0 ni,  0.0 id,  0.0 wa,  1.0 hi,  3.2 si,
> >  2.9 st
> > %Cpu1  : 66.1 us, 31.3 sy,  0.0 ni,  0.0 id,  0.0 wa,  0.6 hi,  1.3 si,
> >  0.6 st
> > %Cpu2  : 66.9 us, 31.2 sy,  0.0 ni,  0.0 id,  0.0 wa,  0.3 hi,  1.3 si,
> >  0.3 st
> > %Cpu3  : 74.7 us, 23.4 sy,  0.0 ni,  0.0 id,  0.0 wa,  0.3 hi,  1.3 si,
> >  0.3 st
> > %Cpu4  : 86.7 us, 10.7 sy,  0.0 ni,  0.0 id,  0.0 wa,  0.6 hi,  1.3 si,
> >  0.6 st
> > %Cpu5  : 83.8 us, 13.6 sy,  0.0 ni,  0.0 id,  0.0 wa,  0.6 hi,  1.6 si,
> >  0.3 st
> > %Cpu6  : 81.6 us, 15.2 sy,  0.0 ni,  0.0 id,  0.0 wa,  0.6 hi,  1.9 si,
> >  0.6 st
> > %Cpu7  : 70.6 us, 26.2 sy,  0.0 ni,  0.0 id,  0.0 wa,  0.6 hi,  1.9 si,
> >  0.6 st
> > %Cpu8  : 70.5 us, 26.6 sy,  0.0 ni,  0.0 id,  0.0 wa,  0.6 hi,  1.6 si,
> >  0.6 st
> > %Cpu9  : 84.1 us, 13.6 sy,  0.0 ni,  0.0 id,  0.0 wa,  0.3 hi,  1.3 si,
> >  0.6 st
> > KiB Mem : 15424256+total,  1069280 free, 18452168 used,
> 13472112+buff/cache
> > KiB Swap: 52305904 total, 51231216 free,  1074688 used. 17399028 avail
> Mem
>
> So you at least had some time where you paged out memory.
> If you have sysstat installed it would be good to get some history data of
> cpu and swap.
>
> You can also run "vmstat 1 -w" to get an online view of the system load.
> Can you also check (as root)
> /sys/kernel/debug/diag_stat
> 2 times and see if you see excessive diagnose 9c rates.
>
> >
> > Where  can I look for potential relief? Everyone was hoping for a better
> > performance not worse.I am hoping that there is something we can tweak to
> > make this better.
> > I will appreciate any ideas!
>
> I agree this should have gotten faster, not slower.
>
> If you have an IBM service contract (or any other vendor that provides
> support)
> you could open a service ticket to get this analysed.
>
> Christian
>



Re: performance problems db2 after moving from AIX

2020-11-03 Thread Grzegorz Powiedziuk
Hi Jim,
correction - we have z14 not z114
... not sure why I keep calling our z14 "z114" ;)  We have a z14.

We have 16 IFLs in total shared across 5 z/VM LPARs, but really there is
literally nothing running in there yet besides this one huge VM, which has
10 IFLs configured. We have plenty of spare memory left, and this one VM
has 150G configured. It is about the same as what they had for this
database when it was on AIX. This number was calculated by the DBAs and it
seems OK. I am not sure how to tell whether DB2 is happy with what it has,
but the Linux OS is definitely not starving for memory.

That p7 was a 9117-MMD. And I just found that it had EC set to 10, but it
could pull up to 15 processors. I am not sure how that works over there.



On Tue, Nov 3, 2020 at 10:58 AM Jim Elliott  wrote:

> Gregory:
>
> Do you have a z114 with 10 IFLs? That is the maximum number of IFLs
> available on a z114 (2818-M10) and would be unusual. Is this a single z/VM
> LPAR? How much memory is on the z114 (and in this LPAR)? Also, what was the
> specific MT/Model for the P7 box?
>
> If you were to compare a 12-core Power 730 (8231-E2C) to a 10 IFL z114 the
> Power system has 1.4 to 2.0 times the capacity of the z114.
>
> Jim Elliott
> Senior IT Consultant - GlassHouse Systems Inc.
>
>
> On Tue, Nov 3, 2020 at 8:47 AM Grzegorz Powiedziuk 
> wrote:
>
> > Hi, I could use some ideas. We moved a huge db2 from old p7 aix to rhel7
> on
> > Z and we are having big performance issues.
> > Same memory, CPU number is down from 12 to 10.  Although they had
> > multithreading ON so they saw more "cpus" We have faster disks (moved to
> > flash), faster FCP cards and faster network adapters.
> > We are running on z114 and at this point that is practically the only VM
> > running with IFLs on this box.
> >
> > It seems that when "jobs" run on their own, they finish faster than what
> > they were getting on AIX.
> > But problems start if there is more than we can chew. So either few jobs
> > running at the same time or some reorgs running in the database.
> >
> > Load average goes to 150-200, cpus are at 100%  (kernel time can go to
> > 20-30% ) but no iowaits.
> > Plenty of memory available.
> > At this point everything becomes extremely slow, people are starting
> having
> > problems with connecting to db2 (annd sshing), basically it becomes a
> > nightmare
> >
> > This db2 is massive (30+TB) and it is a multinode configuration (17 nodes
> > running on the same host). We moved it like this 1:1 from that old AIX.
> >
> > DB2 is running on the ext4 filesystem (Actually a huge number of
> > filesystems- each NODE is a separate logical volume). Separate for logs,
> > data.
> >
> > If this continues like this, we will add 2 cpus but I have a feeling that
> > it will not make much difference.
> >
> > I know that we end up with a massive number of processes and a massive
> > number of file descriptors (lsof sice it shows also threads now, is
> > practically useless - it would run for way too long - 10-30 minutes
> > probably) .
> >
> > A snapshot from just now: [top snapshot snipped; identical to the one
> > in the original post]

performance problems db2 after moving from AIX

2020-11-03 Thread Grzegorz Powiedziuk
Hi, I could use some ideas. We moved a huge db2 from old p7 aix to rhel7 on
Z and we are having big performance issues.
Same memory; CPU count is down from 12 to 10, although they had
multithreading ON, so they saw more "cpus". We have faster disks (moved to
flash), faster FCP cards, and faster network adapters.
We are running on z114 and at this point that is practically the only VM
running with IFLs on this box.

It seems that when "jobs" run on their own, they finish faster than what
they were getting on AIX.
But problems start when there is more than we can chew: either a few jobs
running at the same time, or some reorgs running in the database.

Load average goes to 150-200, cpus are at 100%  (kernel time can go to
20-30% ) but no iowaits.
Plenty of memory available.
At this point everything becomes extremely slow, people start having
problems connecting to db2 (and sshing); basically it becomes a
nightmare.

This db2 is massive (30+TB) and it is a multinode configuration (17 nodes
running on the same host). We moved it like this 1:1 from that old AIX.

DB2 is running on ext4 filesystems (actually a huge number of
filesystems; each NODE is a separate logical volume), separate ones for
logs and data.

If this continues like this, we will add 2 cpus but I have a feeling that
it will not make much difference.

I know that we end up with a massive number of processes and a massive
number of file descriptors (lsof, since it now also shows threads, is
practically useless - it would run for way too long, 10-30 minutes
probably).

A snapshot from just now:

top - 08:37:50 up 11 days, 12:04, 28 users,  load average: 188.29, 151.07,
133.54
Tasks: 1843 total,  11 running, 1832 sleeping,   0 stopped,   0 zombie
%Cpu0  : 76.3 us, 16.6 sy,  0.0 ni,  0.0 id,  0.0 wa,  1.0 hi,  3.2 si,
 2.9 st
%Cpu1  : 66.1 us, 31.3 sy,  0.0 ni,  0.0 id,  0.0 wa,  0.6 hi,  1.3 si,
 0.6 st
%Cpu2  : 66.9 us, 31.2 sy,  0.0 ni,  0.0 id,  0.0 wa,  0.3 hi,  1.3 si,
 0.3 st
%Cpu3  : 74.7 us, 23.4 sy,  0.0 ni,  0.0 id,  0.0 wa,  0.3 hi,  1.3 si,
 0.3 st
%Cpu4  : 86.7 us, 10.7 sy,  0.0 ni,  0.0 id,  0.0 wa,  0.6 hi,  1.3 si,
 0.6 st
%Cpu5  : 83.8 us, 13.6 sy,  0.0 ni,  0.0 id,  0.0 wa,  0.6 hi,  1.6 si,
 0.3 st
%Cpu6  : 81.6 us, 15.2 sy,  0.0 ni,  0.0 id,  0.0 wa,  0.6 hi,  1.9 si,
 0.6 st
%Cpu7  : 70.6 us, 26.2 sy,  0.0 ni,  0.0 id,  0.0 wa,  0.6 hi,  1.9 si,
 0.6 st
%Cpu8  : 70.5 us, 26.6 sy,  0.0 ni,  0.0 id,  0.0 wa,  0.6 hi,  1.6 si,
 0.6 st
%Cpu9  : 84.1 us, 13.6 sy,  0.0 ni,  0.0 id,  0.0 wa,  0.3 hi,  1.3 si,
 0.6 st
KiB Mem : 15424256+total,  1069280 free, 18452168 used, 13472112+buff/cache
KiB Swap: 52305904 total, 51231216 free,  1074688 used. 17399028 avail Mem

Where can I look for potential relief? Everyone was hoping for better
performance, not worse. I am hoping that there is something we can tweak
to make this better.
I will appreciate any ideas!
thanks
Gregory



Re: adding cpus without logoff

2020-09-08 Thread Grzegorz Powiedziuk
On Tue, Sep 8, 2020 at 10:13 AM Mark Post  wrote:

>
> Gregory,
>
> In short, no. When a z/VM guest is logged on, it is created using what
> is in the currently active CP directory. Any and all limitations, such
> as the maximum amount of virtual storage, number of virtual CPUs, are
> encoded in the control blocks for that guest. Any attempt to exceed
> those limits will be denied. The only way to get the new, "larger,"
> limits will be to log the guest off and back on again.
>
> There are likely tools out there that run from a privileged account that
> will go into the control blocks for the target guest and change things.
> As you might expect, that sort of thing can be dangerous. Still, if
> getting this done is important enough, you could ask on the IBMVM
> mailing list to see if such a thing is out there, and where to get it.
>
>
Thanks, Mark, for the confirmation. For now I will stick to the "safe"
way. Well, reboot it is.
Fortunately I might have an opportunity to do it today.
I am planning to use "MACHINE ESA 16" (which is the total number of CPUs
we have in the box) and then just put CPU statements 00-07.
From what I just tested, this should give me an initial 8 CPUs, and DEFINE
CPU will work up to 16.
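Once the directory allows it, the live add is two commands (sketched in the comments; CPU numbers are examples). The runnable part below only expands the kernel's online-range notation, assuming a single contiguous range:

```shell
#   vmcp define cpu 08    # define the new virtual CPU to CP
#   chcpu -e 8            # bring it online in Linux
range="0-8"               # e.g. contents of /sys/devices/system/cpu/online
first=${range%-*}
last=${range#*-}
count=$(( last - first + 1 ))
echo "$count"             # -> 9 CPUs online
```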
thank you
Gregory



Re: adding cpus without logoff

2020-09-07 Thread Grzegorz Powiedziuk
On Mon, Sep 7, 2020 at 3:48 AM Herald ten Dam 
wrote:

>
> you can dynamically add cpus. You changed only the definition. Have a look
> at this note form IBM:
>
> https://www.ibm.com/support/pages/dynamically-adding-or-removing-cpus-linux-zvm-guest
>
Thank you. This is actually what I was trying to do. I've updated the
definition and then, from the guest VM (via vmcp, from Linux), I ran the
DEFINE CPU 08 command, which gave me an error about reaching the max
number of CPUs defined in the directory. Even though the number right now
is "12", it doesn't let me go with it. I am 100% sure that a logoff/logon
will fix that, but I can't shut down that VM at the moment.
 Thanks
Gregory



adding cpus without logoff

2020-09-06 Thread Grzegorz Powiedziuk
Hello, I am almost sure I've done this before, but now my memory is
failing me.
The virtual machine had "MACHINE ESA 8" and a list of CPU 00...07
statements.
We changed it to MACHINE ESA 10 and added 2 more CPUs to the list.

Unfortunately I can't simply re-logon the VM. I was sure that after
updating the user directory it would allow me to define a CPU from the
Linux guest, but it doesn't.

I am getting HCPCPU1458E: an attempt was made to define more CPUs than is
allowed in the directory.

Any way to work around this? My first thought was running SET MACHINE ESA
10, but the command doesn't take the CPU count the way the directory
statement does...
thank you
Gregory



Re: db2 and systemd?

2020-08-18 Thread Grzegorz Powiedziuk
On Tue, Aug 18, 2020 at 8:53 PM marcy cortes 
wrote:

> It's not fun...
>
> We have a .service file of type forking that calls a script of ours to
> start it and stop it.
> systemd is cool with that.
> However, if you stop and then start db2 outside of systemd, it's no longer
> associated with that service and systemd whacks its processes at shutdown
> time, resulting generally in a database that needs repair :(   (MQ qmgrs
> have the same issue).
> So one must remember to always use "systemctl stop/start" instead.
> That requires root authority.
> DBA's don't get that.
> We have something in place for them to use a script authorized in our
> security system.  Think sudo on steroids.
> Now what if they forget?   Well, we also had to make a monitoring process
> that detects if db2 is not started under systemd and nags them to stop it
> and start it the correct way.
>
> Hope that helps.
>

Thank you very much. Yes, it does not look fun. So in your service file
you have ExecStart and ExecStop pointing to a wrapper which, after running
the db2 commands, just exits? Do you specify PIDFile, and does the wrapper
script update the pidfile? Without it, systemd probably is not able to
figure out whether the service is running, which would probably make
stopping the service with systemctl difficult.
I am surprised that there is no obvious and elegant way to do it. All the
documentation explains automatic startup, but so far I have found nothing
about automatic but graceful stopping... which probably is even more
important than starting.
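A rough sketch of what such a unit can look like. Instance name, paths, and timeout are all invented, and this is only one way to wire it; TimeoutStopSec and KillMode are the knobs that decide whether systemd waits for a clean db2stop or starts whacking processes:

```
# /etc/systemd/system/db2inst1.service  (hypothetical)
[Unit]
Description=DB2 instance db2inst1
After=network.target

[Service]
Type=forking
User=db2inst1
ExecStart=/home/db2inst1/sqllib/adm/db2start
ExecStop=/home/db2inst1/sqllib/adm/db2stop force
TimeoutStopSec=600
KillMode=none

[Install]
WantedBy=multi-user.target
```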

But your email helped me think of something else ... I might create a
"special" service (reboot-guard.service - even Red Hat proposes something
like that in one of their articles) which disables/enables the
reboot/shutdown/poweroff commands depending on whether it is enabled, and
which would basically keep anyone from rebooting while db2 is running.
Then I would create simple stop_and_reboot and stop_and_shutdown scripts
which would stop the database, disable reboot-guard.service (which is
enabled again by another service on boot), and then reboot the server
...
I thought I was done with cooking up all these in-house solutions which
no one will understand 5 years from now ...
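
As a rough illustration of the reboot-guard idea (the unit name and wording are made up, and whether a plain "systemctl reboot" honors the inhibitor lock depends on the systemd version - this is a sketch of the pattern, not a tested guard):

```
# /etc/systemd/system/reboot-guard.service -- illustrative only.
# While active, systemd-inhibit holds a block-type shutdown lock, so
# logind-mediated reboot/poweroff requests are refused until the guard
# is stopped (e.g. by a stop_and_reboot wrapper).
[Unit]
Description=Block reboot/shutdown while the database is up

[Service]
ExecStart=/usr/bin/systemd-inhibit --what=shutdown \
    --why="db2 is running - use stop_and_reboot" sleep infinity

[Install]
WantedBy=multi-user.target
```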

Gregory



db2 and systemd?

2020-08-18 Thread Grzegorz Powiedziuk
Hello, I realize that this is not a typical s390 question but I hope that's
ok if I ask anyway. There is a high chance we have people here who run db2
on zLinux.
How do you solve auto start and more importantly auto stop of DB2 on
start/reboot/shutdowns ?
Seems like the autostart can be handled by the db2fmcd service and db2iauto,
but what about an automatic stop when a reboot/shutdown is issued?
I am afraid that db2 will simply be killed - am I right? I am not even sure
that having my own stop script would do the trick, as systemd might just kill
the db2 processes before my script stops them gracefully.
thank you
Gregory



Re: vmrelocate and quiescence time

2020-08-18 Thread Grzegorz Powiedziuk
On Tue, Aug 18, 2020 at 9:01 AM Bill Bitner  wrote:

>
> z/VM doesn't have a short cut to determine that no pages have changed. So
> for a guest over 100GB, it has to examine over 26 million things multiple
> times. Validating the I/O is drained is another aspect, but my guess is the
> traversing of the DAT structures is the biggest factor.
>
>
Ok this makes sense, thank you for your input.



Re: vmrelocate and quiescence time

2020-08-17 Thread Grzegorz Powiedziuk
On Mon, Aug 17, 2020 at 12:26 PM Bill Bitner  wrote:

> I take a few days off and a fun topic comes up that I mostly miss. :-)
>
> We did a Live Virtual Class on this back when SSI first came out. A few
> things have changed, but you may find value in it:
> http://www.vm.ibm.com/education/lvc/zvmlvc.html
> The charts for the presentation are
> http://www.vm.ibm.com/education/lvc/LVC0227.pdf
>
>
This is great! Thank you. Although it raises a few more questions, it was
definitely very helpful.
- The chart on page 24, idle case (0GB changed), shows a quiesce time of 8s -
I wonder why. The VM in that test was probably >100GB, so more or less close
to my case. But I thought that if there are close to zero changes, the last
pass should be just a "formality"? I was relocating my VM with DB2 stopped!
There were just a few idle processes left, and yet it took 20-30s of freeze
time. Of course I am doing SYNC and not IMMEDIATE.
Judging by these charts, it seems I should be below 10s even with just 2 CTCs
(although I am hoping to get 2 more).
- The document says that the I/O device count matters as well. We have 4 FCP
channels with quite a large number of LUNs (>20); would that make a big
difference?
I will look into anything related in PERFSVM.
thanks again.
Gregory



Re: vmrelocate and quiescence time

2020-08-17 Thread Grzegorz Powiedziuk
On Mon, Aug 17, 2020 at 8:52 AM Alan Altmark 
wrote:

>
> You are equating things that should not be equated.  If you have set up 4
> CTCs between each pair of members as recommended in the books, you have
> done all you can to make it go faster, given the CPUs that you have.  (I
> would probably look at the diagnostics on the HMC and see if you're
> getting any frame errors on the CTCs.)
>

Ok, this might be it. We have only 2 CTCs per pair!  I've reached out to our
hardware team to see if we can increase the number of CTCs. We might
"steal" some from non-prod.
Somehow I thought that these are just defined in the IODF nowadays and
don't use real adapters and cables if we are not leaving the box.

In the meantime I am trying to do some simple math and approximation ... more
like guessing for fun. I wonder how much data there is to push during the
last pass of the LGR process on average - 10% of the VM's total memory at
most? So if one CTC has 10 Gbps (I have no clue, but that number seems
reasonable), then 10% of 128GB should take just ~10 seconds using a single
CTC. I am sure I am off, but I am trying to find a simple explanation for my
20-30s.
I even tested it with DB2 completely stopped. No noticeable improvement.
I wonder if that has something to do with the fact that 90% of my Linux
memory is currently used by the file cache (of course this will change when
it goes to prod).
But as I mentioned, the database was stopped, so even the file cache should
be pretty much "stable" - unless there is something special about memory used
by the cache and VMRELOCATE always moves it in the last pass?
I will try dropping the cache the next time I try this.
Thank you very much for your help!
Gregory
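
The back-of-the-envelope number above checks out; a quick calculation (the 10 Gbps link speed and a final pass of ~10% of guest memory are both guesses from the message, not measurements):

```python
# Rough estimate of last-pass transfer time over a single CTC.
# 10 Gbps and a 10% final pass are assumptions, not measured values.
guest_mem_bytes = 128 * 2**30          # 128 GiB guest
last_pass_bytes = 0.10 * guest_mem_bytes
link_bits_per_s = 10e9                 # hypothetical 10 Gbps CTC

seconds = last_pass_bytes * 8 / link_bits_per_s
print(f"~{seconds:.1f} s")             # roughly 11 seconds
```

So even under generous assumptions a single link would explain only about half of the observed 20-30s quiesce time.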



Re: vmrelocate and quiescence time

2020-08-16 Thread Grzegorz Powiedziuk
On Sun, Aug 16, 2020 at 1:16 AM Alan Altmark 
wrote:

> On Saturday, 08/15/2020 at 02:25 GMT, Grzegorz Powiedziuk
>  wrote:
> > Thank you for verification! If timestamps are correct then this step
> > literally takes a very brief moment. So I suspect that that final memory
> > pass takes so much time, or am I wrong?
> > So how much usually does it take to vmrelocate a 100-200G vm?  Just an
> > estimate ... do I have to worry with my 30 seconds?
>
> As you've noted "it depends".  A 30 second relocation doesn't bother me,
> but a 30 second quiesce time does.
>
> For Very Large Guests, it's typically better to use application clusters
> so that you don't need to move the guests.  Just take one down and let the
> other members take the load.  E.g. I wouldn't use LGR on an Oracle db,
> given it's failover capabilities.  Let the standby Oracle instance take
> over.
>
>
What is a very large guest nowadays?  Is 128G considered to be very large?
Would a 30s quiesce time be expected for a 128G VM, or rather for a 512G
guest?
I am trying to figure out whether I have a problem I need to solve, or we
just have to accept it because that's normal.
thanks!



Re: vmrelocate and quiescence time

2020-08-16 Thread Grzegorz Powiedziuk
On Sun, Aug 16, 2020 at 1:10 AM Alan Altmark 
wrote:

>
> We chose 10 seconds as the default because longer than that tends to cause
> applications to get upset, as you have discovered.  You said you were
> using virtual CTCs, so that means you're in the same LPAR, not just same
> CPC.
>

Apologies, I confused real vs. virtual. I thought that "real" refers to
actual cables hanging between mainframes, etc.
So this is a regular setup: first-level z/VM systems in LPARs, with Linux
guests as VMs. The CTCs are not virtualized in z/VM but defined, I believe,
in HCD (these tasks are done by the hardware team, so I don't know the
details).
thanks
Gregory



Re: vmrelocate and quiescence time

2020-08-15 Thread Grzegorz Powiedziuk
On Fri, Aug 14, 2020 at 4:45 PM Scott Rohling 
wrote:

> One key question is whether lpars are on same cec or different ones...
> virtual ctcs or "real"?
>
> Scott Rohling
>

They are on the same CEC, virtual CTCs.



Re: vmrelocate and quiescence time

2020-08-15 Thread Grzegorz Powiedziuk
On Sat, Aug 15, 2020 at 5:00 AM Alan Altmark 
wrote:

>
> Are you using the IMMEDIATE option on VMRELOCATE?  I ask because the
> default MAXQUIESCE on the VMRELOCATE without IMMEDIATE is 10 seconds. With
> IMMEDIATE.
>
Forgot to mention - I have to specify a longer MAXQUIESCE because the
default 10s was too short. And no, I am not doing IMMEDIATE.
Although I have to force storage because we have a "MAX" storage parameter
of the VM set much higher than we need right now (in case we need to add
more dynamically).
And the "MAX" is higher than total paging space on either of the systems.
We have plenty of memory available on both ends.
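
For reference, the shape of the command under discussion - the guest name and target member are placeholders, and FORCE STORAGE is the option mentioned above for when the guest's MAX storage setting exceeds the destination's paging space:

```
VMRELOCATE MOVE USER LINDB2 TO MEMBER2 MAXQUIESCE 30 FORCE STORAGE
```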



> Yes, that's how it works.  It forces OSA and FCP connections to be rebuilt
> with the correct parameters.
>

Thank you for the verification! If the timestamps are correct, then this step
literally takes a very brief moment. So I suspect that the final memory pass
takes most of the time - or am I wrong?
So how long does it usually take to VMRELOCATE a 100-200G VM?  Just an
estimate ... do I have to worry about my 30 seconds?
Thanks Alan

Gregory



vmrelocate and quiescence time

2020-08-14 Thread Grzegorz Powiedziuk
Hello,
From your experience, during a relocation from one LPAR to another, how long
on average is the quiescence period (no network, no nothing)?
I understand that it depends on many factors, but I am just asking for some
examples.
From my understanding, quiescence is one of the last steps, right before the
final memory pass, which should be as small as possible. I understand that if
there is a lot happening in the VM, then the last pass can also be quite big.
But still ...

Anyway, I have a RHEL VM with 128GB of memory running DB2. It is not yet in
production, so the CPUs are mostly idle. No paging, no swapping, not much
traffic - and yet the quiescence takes 20-30 seconds, which seems like a lot
(the VMRELOCATE takes a couple of minutes in total). It is causing db2 and
ssh sessions to time out.
I am wondering if that is normal or we have something misconfigured.

During the last phase the Linux kernel logs a "QDIO problem occurred" error
for each FCP device and some QETH errors, all followed by recoveries. All of
these are thrown at the same time, right after the quiescence ends (I think),
which kind of makes sense to me.

thanks!
Gregory



Re: SAP Sybase ASE driver for db2 on zLinux

2020-08-10 Thread Grzegorz Powiedziuk
On Wed, Aug 5, 2020 at 2:17 AM Timothy Sipples  wrote:

>
> So have you tried a JDBC driver for Sybase ASE, such as jConnect (filename
> probably jconn4.jar), jTDS, or possibly Progress Software's? For example,
> jConnect is included with SAP's SDK for ASE, and the driver itself is a
> single file that should be named jconn4.jar.
>
Thank you Tim, I appreciate your time. I passed along your message to our
db2 and application teams, hoping it will help them figure it out.
Thanks!
Gregory



SAP Sybase ASE driver for db2 on zLinux

2020-08-04 Thread Grzegorz Powiedziuk
Hello,
Does anyone have any experience with Sybase ASE and zLinux?
We run a Sybase database on another architecture, but we have a db2 instance
on zLinux which needs to be able to communicate with Sybase via this driver.
This driver is not available in db2 federation.
I apologize if I am mixing terms, but hopefully for someone who knows this
stuff it will be clear what I am asking about.
There is a Sybase client (which I was told includes the driver db2 needs)
available for download on their website, but there is no s390 version there.
There is AIX, pLinux, regular Linux ... but no Z. I am hoping that there is a
workaround or some open-source driver which might replace it.
thanks
Best regards
Gregory P



Re: order of eth0/eth1 and udev rules vs kernel discovery

2019-01-15 Thread Grzegorz Powiedziuk
> But if I want device 2000 to become eth0 and it happened to be eth0 then
> it will fail because 3000 is using already eth0 name ...
>
Sorry, I had a typo which turned the above into complete nonsense.
I meant, of course, that if it happened to be eth1 after kernel discovery and
I want it renamed to eth0, it will fail (because eth0 is used by 3000).
thanks
Gregory



order of eth0/eth1 and udev rules vs kernel discovery

2019-01-15 Thread Grzegorz Powiedziuk
Hello everyone!
I am trying to find out if there is a way to reliably force which specific
qeth virtual NIC becomes eth0 and which becomes eth1. I am having this
problem with Red Hat 7.5, but I believe it is a problem with all systemd
distributions. I don't remember having these problems in the past ...
Anyway, from my understanding:
1. First, the kernel discovers the NICs and assigns the names eth0, eth1,
eth2 ... in the order they show up, which is pretty much random.
2. The NICs are then renamed depending on which udev rules you have and on
what you have in your ifcfg-ethX scripts.

Renaming is easy if I am renaming these to a different namespace - for
example
- device 2000 (which on boot might be either eth0 or eth1) to  en2000
- device 3000 (which on boot might be either eth1 or eth0) to  en3000

But if I want device 2000 to become eth0 and it happened to be eth0, then it
will fail because 3000 is already using the eth0 name ...

I know - that's why we should not be using the ethX namespace anymore - but
how does VMware get around this? The VMware guys claim that it is always
consistent and the first NIC becomes eth0, and so on. How? Why can't we have
the same thing here? Or will it happen over there sooner or later too?
Why can't udev just rename them twice - first to a temporary name, and then a
second time to the names the user wants :(
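
For the record, the stable-rename idea can be approximated by keying the rules on the device bus ID instead of the current kernel name (bus IDs 0.0.2000/0.0.3000 from the example above; the file name and exact match keys are assumptions and may need per-distro adjustment):

```
# /etc/udev/rules.d/70-persistent-net.rules (sketch)
# Match on the qeth bus ID, not the probe-order ethX name, so the
# result does not depend on discovery order.
SUBSYSTEM=="net", ACTION=="add", KERNELS=="0.0.2000", NAME="eth0"
SUBSYSTEM=="net", ACTION=="add", KERNELS=="0.0.3000", NAME="eth1"
```

This still hits the collision described above when the target name is already taken, which is exactly why the usual advice is to leave the ethX namespace for predictable names (e.g. enccw0.0.2000 on RHEL 7).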

Best regards
Gregory



Re: Buffer I/O error on device dm-28

2018-01-29 Thread Grzegorz Powiedziuk
2018-01-29 13:47 GMT-05:00 Mark Post :

> >>> On 1/28/2018 at 09:51 PM, Csaba Polgar 
> wrote:
> > Hello,
> >
> > I'm newbie on this forum, but I would like to ask your help. (Sorry, if
> I
> > brake any protocol/usage guideline.)
> >   Could someone please help to solve the below issue?
> >
> -snip-
> >
> > How can it be online (active) before the Linux boot? Or what is missing
> > from the configuration?
>
> There's not nearly enough information to know what's going on, yet.
> First, what distribution is this?  What maintenance level?
>
> Next, is this an LVM setup, or something else?  If LVM, then some output
> from things like "vgs" "pvs" "vgdisplay -v" and so on would be helpful.
>

That is probably an LVM setup with many DASDs as PVs (judging by the "dm-28"
device name), and some of them haven't been brought online in a persistent way.

If this is SUSE, then as far as I remember you can just activate them using
YaST and on exit it should save the configuration. There is also another way:
run "dasd_configure 0.0.0400 1" and it should do the trick. I don't think you
need to run mkinitrd unless the device is part of the root filesystem - but
it probably is not, otherwise you wouldn't have completed the boot at all.

If this is Red Hat, you can add the devices to /etc/dasd.conf, for example:

cat /etc/dasd.conf
0.0.0200
0.0.0201
0.0.0210
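
(To bring such devices online in the running system, the usual tool is chccwdev from s390-tools; the sketch below only prints the commands it would run, so it is safe to try anywhere - the file path is a demo, not /etc/dasd.conf itself.)

```shell
#!/bin/sh
# Print the chccwdev invocations that would enable each device listed
# in a dasd.conf-style file (illustrative; real use would execute them).
list_dasd_cmds() {
    while read -r dev _; do
        # skip blank lines and comments
        case "$dev" in ''|\#*) continue ;; esac
        echo "chccwdev -e $dev"
    done < "$1"
}

printf '0.0.0200\n0.0.0201\n0.0.0210\n' > /tmp/dasd.demo
list_dasd_cmds /tmp/dasd.demo
```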

If this is a really old Red Hat, you might need to add these to
/etc/zipl.conf instead, for example like this (in Red Hat 6 and 7 you can do
it either way):

[root@lin00 ~]# cat /etc/zipl.conf
[defaultboot]
defaultauto
prompt=1
timeout=5
default=linux
target=/boot
[linux]
image=/boot/vmlinuz-3.10.0-123.el7.s390x
parameters="cio_ignore=all,!condev root=/dev/mapper/rhel_lin00-root
crashkernel=auto  rd.lvm.lv=rhel_lin00/root vconsole.keymap=us
vconsole.font=latarcyrheb-sun16 rd.dasd=0.0.0200 rd.dasd=0.0.0201
rd.dasd=0.0.0210 rd.dasd=0.0.0211 rd.dasd=0.0.0192 rd.lvm.lv=rhel_lin00/boot
LANG=en_US.UTF-8"
ramdisk=/boot/initramfs-3.10.0-123.el7.s390x.img


See the list of rd.dasd= addresses above? After adding these you need to run
zipl to save them in the IPL record (you can do "zipl --dry-run -V" to verify
that it does what it should).

Regards
Gregory



Re: Gold On LUN

2017-09-07 Thread Grzegorz Powiedziuk
After it boots from the new LUN (even if it mounts the golden image
afterwards), can you still see both LUNs in the running system?

what is the output of "multipath -ll" command?
what is the output of "pvscan"  and "lvscan" commands?
what is the output of "mount" command?


2017-09-07 15:11 GMT-04:00 Greg Preddy <gpre...@cox.net>:

> Yes we use LVM except on /boot.  Not clear what needs to be changed,
> /etc/multipath.conf on the new LUN?
>
>
> On 9/7/2017 10:31 AM, Grzegorz Powiedziuk wrote:
>
>> Hi
>> What do you mean it still mounts the gold LUN? You boot from the NEW LUN
>> but the root filesystem ends up being mounted from the GOLD LUN?
>> First of all, I would make sure that the GOLD LUN is not accessible in the
>> virtual machine anymore after cloning. Just to keep it simple.
>>
>> I can't remember how it is done in SLES but in RHEL there is a bunch of
>> stuff that refers to a specific LUN with a specific scsi_id
>>
>> For example multipath (/etc/multipath.conf)  configuration. In there you
>> usually you bond scsi_id (wwid) of Lun with friendly name (mpathX for
>> example).
>> That multipath configuration is also saved in initrd. So if you boot from
>> clone, it will end up mounting wrong volume.
>>
>> Are you using LVM?
>>
>>
>>
>>
>>
>>
>> 2017-09-07 9:08 GMT-04:00 Greg Preddy <gpre...@cox.net>:
>>
>> All,
>>>
>>> We're doing SLES 12 on 100% LUN, with gold copy on a single 60GB LUN.
>>> This is a new cloning approach for us so we're not sure how to make this
>>> work.  Our Linux SA got the storage admin to replicate the LUN, but when
>>> we change the server to boot the copy, it still mounts the gold LUN.
>>> 99% sure we got the LOADDEV parms right.  Does anyone have steps to
>>> clone a LUN-only SLES 12 system?
>>>
>>>
>>>
>>
>>
>



Re: Gold On LUN

2017-09-07 Thread Grzegorz Powiedziuk
Hi
What do you mean it still mounts the gold LUN? You boot from the NEW LUN,
but the root filesystem ends up being mounted from the GOLD LUN?
First of all, I would make sure that the GOLD LUN is not accessible in the
virtual machine anymore after cloning. Just to keep it simple.

I can't remember how it is done in SLES, but in RHEL there is a bunch of
stuff that refers to a specific LUN by a specific scsi_id.

For example, the multipath configuration (/etc/multipath.conf). In there you
usually bind the scsi_id (WWID) of a LUN to a friendly name (mpathX, for
example).
That multipath configuration is also saved in the initrd. So if you boot from
the clone, it will end up mounting the wrong volume.

Are you using LVM?
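
The kind of binding meant here looks roughly like this (the WWID and alias are made-up examples, not values from the thread):

```
# /etc/multipath.conf (fragment, illustrative)
multipaths {
    multipath {
        wwid  36005076801234567890123456789abcd
        alias mpatha
    }
}
```

After pointing such entries at the clone's WWIDs, the initrd has to be rebuilt (dracut/mkinitrd) so the copy inside the boot image matches the running configuration.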






2017-09-07 9:08 GMT-04:00 Greg Preddy :

> All,
>
> We're doing SLES 12 on 100% LUN, with gold copy on a single 60GB LUN.
> This is a new cloning approach for us so we're not sure how to make this
> work.  Our Linux SA got the storage admin to replicate the LUN, but when
> we change the server to boot the copy, it still mounts the gold LUN.
> 99% sure we got the LOADDEV parms right.  Does anyone have steps to
> clone a LUN-only SLES 12 system?
>
>



Re: /etc/mtab and "df -h" problem

2017-04-27 Thread Grzegorz Powiedziuk
>
> Red Hat support found the reason - a bug in glibc which was fixed just a
> couple of weeks ago
>
>
It has been fixed in glibc-2.12-1.209.el6_9.1
Thank you for your suggestions
regards
Gregory



Re: /etc/mtab and "df -h" problem

2017-04-27 Thread Grzegorz Powiedziuk
Red Hat support found the reason - a bug in glibc which was fixed just a
couple of weeks ago.

2017-04-19 21:15 GMT-04:00 Grzegorz Powiedziuk <gpowiedz...@gmail.com>:

> Reboot didn't resolve the issue. I will have to open a ticket with redhat.
> hopefully they can figure this out
> Gregory
>
> 2017-04-18 23:31 GMT-04:00 Grzegorz Powiedziuk <gpowiedz...@gmail.com>:
>
>> 2017-04-18 16:34 GMT-04:00 Alan Altmark <alan_altm...@us.ibm.com>:
>>
>>> On Monday, 04/17/2017 at 03:14 GMT, Grzegorz Powiedziuk
>>> <gpowiedz...@gmail.com> wrote:
>>> > I am having a really weird issue on one of the z redhat systems.
>>> Probably
>>> > it doesn't have anything to do with Z but we have some great minds here
>>> so
>>> > perhaps someone will help me
>>> >
>>> > the "df -h" or "df -P"  command doesn't work when run as a normal user.
>>> > "df" on it's own works fine.
>>> >
>>> > Error:
>>> > df: cannot read table of mounted file systems: Permission denied
>>> >
>>> > Now, here is when it gets really strange. Above suggests wrong
>>> permissions
>>> > on /etc/mtab which are fine:
>>> > [root@it069qz5ora lib64]# ls -la /etc/mtab
>>> > -rw-r--r-- 1 root root 2024 Apr 17 10:44 /etc/mtab
>>> >
>>> > But the "df -h" is trying to open that file with WRITE access mode !!!
>>> > strace df -h 2>&1 |grep open  | grep mtab
>>> > ...
>>> > open("/etc/mtab", O*_RDWR*|O_CLOEXEC) = -1 EACCES (Permission
>>> denied)
>>> >
>>> > On all other systems"df -h" opens that mtab file in O_RDONLY. Why this
>>> one
>>> > is different?
>>> >
>>> > And as I said, regular "df" works fine and it usies O_RDONLY flag.
>>> >
>>> > So the problem happens only with "df -h" or "df -P" . That doesn't make
>>> any
>>> > sense to me.
>>> >
>>> > Has anyone seen anything like that?
>>>
>>> Weird.  The read_file_system_list() routine in mountlist.c (called by
>>> df.c) code shows a hard-coded "r" option in the fopen(), and I don't see
>>> df calling some other "mount list" function because of -h or -P.
>>>
>>> Alan Altmark
>>>
>>
>>
>> I know .. doesn't make sense. Thanks for looking into source code. I
>> wonder if something has change in the code in the
>> newer versions of this package. Tomorrow in the evening I will reboot
>> this guy. Hopefully this will fix it.
>>
>> Aha, forgot to say. I've changed permissions of mtab file to 666 for a
>> moment just to make sure that this is the problem and of course it worked
>> this time.
>>
>> Gregory
>>
>>
>



Re: /etc/mtab and "df -h" problem

2017-04-19 Thread Grzegorz Powiedziuk
Reboot didn't resolve the issue. I will have to open a ticket with redhat.
hopefully they can figure this out
Gregory

2017-04-18 23:31 GMT-04:00 Grzegorz Powiedziuk <gpowiedz...@gmail.com>:

> 2017-04-18 16:34 GMT-04:00 Alan Altmark <alan_altm...@us.ibm.com>:
>
>> On Monday, 04/17/2017 at 03:14 GMT, Grzegorz Powiedziuk
>> <gpowiedz...@gmail.com> wrote:
>> > I am having a really weird issue on one of the z redhat systems.
>> Probably
>> > it doesn't have anything to do with Z but we have some great minds here
>> so
>> > perhaps someone will help me
>> >
>> > the "df -h" or "df -P"  command doesn't work when run as a normal user.
>> > "df" on it's own works fine.
>> >
>> > Error:
>> > df: cannot read table of mounted file systems: Permission denied
>> >
>> > Now, here is when it gets really strange. Above suggests wrong
>> permissions
>> > on /etc/mtab which are fine:
>> > [root@it069qz5ora lib64]# ls -la /etc/mtab
>> > -rw-r--r-- 1 root root 2024 Apr 17 10:44 /etc/mtab
>> >
>> > But the "df -h" is trying to open that file with WRITE access mode !!!
>> > strace df -h 2>&1 |grep open  | grep mtab
>> > ...
>> > open("/etc/mtab", O*_RDWR*|O_CLOEXEC) = -1 EACCES (Permission
>> denied)
>> >
>> > On all other systems"df -h" opens that mtab file in O_RDONLY. Why this
>> one
>> > is different?
>> >
>> > And as I said, regular "df" works fine and it usies O_RDONLY flag.
>> >
>> > So the problem happens only with "df -h" or "df -P" . That doesn't make
>> any
>> > sense to me.
>> >
>> > Has anyone seen anything like that?
>>
>> Weird.  The read_file_system_list() routine in mountlist.c (called by
>> df.c) code shows a hard-coded "r" option in the fopen(), and I don't see
>> df calling some other "mount list" function because of -h or -P.
>>
>> Alan Altmark
>>
>
>
> I know .. doesn't make sense. Thanks for looking into source code. I
> wonder if something has change in the code in the
> newer versions of this package. Tomorrow in the evening I will reboot this
> guy. Hopefully this will fix it.
>
> Aha, forgot to say. I've changed permissions of mtab file to 666 for a
> moment just to make sure that this is the problem and of course it worked
> this time.
>
> Gregory
>
>



Re: /etc/mtab and "df -h" problem

2017-04-18 Thread Grzegorz Powiedziuk
2017-04-18 16:34 GMT-04:00 Alan Altmark <alan_altm...@us.ibm.com>:

> On Monday, 04/17/2017 at 03:14 GMT, Grzegorz Powiedziuk
> <gpowiedz...@gmail.com> wrote:
> > I am having a really weird issue on one of the z redhat systems.
> Probably
> > it doesn't have anything to do with Z but we have some great minds here
> so
> > perhaps someone will help me
> >
> > the "df -h" or "df -P"  command doesn't work when run as a normal user.
> > "df" on it's own works fine.
> >
> > Error:
> > df: cannot read table of mounted file systems: Permission denied
> >
> > Now, here is when it gets really strange. Above suggests wrong
> permissions
> > on /etc/mtab which are fine:
> > [root@it069qz5ora lib64]# ls -la /etc/mtab
> > -rw-r--r-- 1 root root 2024 Apr 17 10:44 /etc/mtab
> >
> > But the "df -h" is trying to open that file with WRITE access mode !!!
> > strace df -h 2>&1 |grep open  | grep mtab
> > ...
> > open("/etc/mtab", O*_RDWR*|O_CLOEXEC) = -1 EACCES (Permission
> denied)
> >
> > On all other systems"df -h" opens that mtab file in O_RDONLY. Why this
> one
> > is different?
> >
> > And as I said, regular "df" works fine and it usies O_RDONLY flag.
> >
> > So the problem happens only with "df -h" or "df -P" . That doesn't make
> any
> > sense to me.
> >
> > Has anyone seen anything like that?
>
> Weird.  The read_file_system_list() routine in mountlist.c (called by
> df.c) code shows a hard-coded "r" option in the fopen(), and I don't see
> df calling some other "mount list" function because of -h or -P.
>
> Alan Altmark
>


I know .. doesn't make sense. Thanks for looking into the source code. I
wonder if something has changed in the code in the newer versions of this
package. Tomorrow evening I will reboot this guy. Hopefully this will fix it.

Aha, forgot to say: I changed the permissions of the mtab file to 666 for a
moment, just to make sure that this is the problem, and of course it worked
that time.

Gregory



Re: /etc/mtab and "df -h" problem

2017-04-18 Thread Grzegorz Powiedziuk
2017-04-18 17:13 GMT-04:00 Scott Rohling :

> You mention nfs - so -   does 'df -hl'  also present the error? The 'l'
> limits it to local filesystems so seems an easy check to see if it's
> related to nfs..
>
> Scott Rohling
>
>
Good idea, but unfortunately the same thing happens. "df" works, but "df -hl"
complains about permissions. strace shows that it wants to open mtab with
O_RDWR.
I have scheduled a reboot of this server for tomorrow. I will let you know if
that fixes this problem ...



Re: /etc/mtab and "df -h" problem

2017-04-17 Thread Grzegorz Powiedziuk
Thanks Scott
Nothing unusual: ext4 + nfs.
But I've just noticed that the system was updated from our Red Hat Satellite
server a few days ago, and it definitely requires a reboot (Satellite claims
so). Nonetheless, the other updated systems (there were a bunch of them) do
not have this issue.

Gregory

2017-04-17 12:04 GMT-04:00 Scott Rohling <scott.rohl...@gmail.com>:

> A quick google shows similar reported (on ubuntu but just grasping here)
>  -- had to do with fuse mounted filesystems and gvfs (and perhaps lightdm)
> ...
>
> What kind of filesystems do you have available and could one of them be
> the reason things behave differently.. ?   Is there  a filesystem here that
> isn't present on your other servers..  ?
>
> Why the -h option would matter I'm really not sure - to me that's just an
> adjustment of how sizes are shown.. so you'd think you'd get the error
> regardless..   Like I said - grasping..
>
> Scott Rohling
>
> On Mon, Apr 17, 2017 at 8:12 AM, Grzegorz Powiedziuk <
> gpowiedz...@gmail.com>
> wrote:
>
> > I am having a really weird issue on one of the z redhat systems. Probably
> > it doesn't have anything to do with Z but we have some great minds here
> so
> > perhaps someone will help me
> >
> > the "df -h" or "df -P"  command doesn't work when run as a normal user.
> > "df" on it's own works fine.
> >
> > Error:
> > df: cannot read table of mounted file systems: Permission denied
> >
> > Now, here is when it gets really strange. Above suggests wrong
> permissions
> > on /etc/mtab which are fine:
> > [root@it069qz5ora lib64]# ls -la /etc/mtab
> > -rw-r--r-- 1 root root 2024 Apr 17 10:44 /etc/mtab
> >
> > But the "df -h" is trying to open that file with WRITE access mode !!!
> > strace df -h 2>&1 |grep open  | grep mtab
> > ...
> > open("/etc/mtab", O_RDWR|O_CLOEXEC) = -1 EACCES (Permission denied)
> >
> > On all other systems"df -h" opens that mtab file in O_RDONLY. Why this
> one
> > is different?
> >
> > And as I said, regular "df" works fine and it uses the O_RDONLY flag.
> >
> > So the problem happens only with "df -h" or "df -P" . That doesn't make
> any
> > sense to me.
> >
> > Has anyone seen anything like that?
> >
> > thanks!
> > Gregory
> >
> > --
> > For LINUX-390 subscribe / signoff / archive access instructions,
> > send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or
> > visit
> > http://www.marist.edu/htbin/wlvindex?LINUX-390
> > --
> > For more information on Linux on System z, visit
> > http://wiki.linuxvm.org/
> >
>
> --
> For LINUX-390 subscribe / signoff / archive access instructions,
> send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or
> visit
> http://www.marist.edu/htbin/wlvindex?LINUX-390
> --
> For more information on Linux on System z, visit
> http://wiki.linuxvm.org/
>

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/


/etc/mtab and "df -h" problem

2017-04-17 Thread Grzegorz Powiedziuk
I am having a really weird issue on one of the z redhat systems. It probably
doesn't have anything to do with Z, but we have some great minds here, so
perhaps someone will help me.

The "df -h" or "df -P" command doesn't work when run as a normal user.
"df" on its own works fine.

Error:
df: cannot read table of mounted file systems: Permission denied

Now, here is where it gets really strange. The above suggests wrong
permissions on /etc/mtab, which are fine:
[root@it069qz5ora lib64]# ls -la /etc/mtab
-rw-r--r-- 1 root root 2024 Apr 17 10:44 /etc/mtab

But "df -h" is trying to open that file in WRITE access mode!
strace df -h 2>&1 | grep open | grep mtab
...
open("/etc/mtab", O_RDWR|O_CLOEXEC) = -1 EACCES (Permission denied)

On all other systems "df -h" opens that mtab file in O_RDONLY. Why is this
one different?

And as I said, regular "df" works fine and it uses the O_RDONLY flag.

So the problem happens only with "df -h" or "df -P". That doesn't make any
sense to me.
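One hedged thing worth checking (an assumption on my part, not a confirmed
cause): on many modern Linux systems /etc/mtab is a symlink to
/proc/self/mounts, and a leftover regular-file mtab from an older setup is
one plausible reason a tool might try to open it read-write:

```shell
# Report whether /etc/mtab is the expected symlink or a plain file.
if [ -L /etc/mtab ]; then
  echo "mtab: symlink -> $(readlink /etc/mtab)"
else
  echo "mtab: regular file (permissions below)"
  ls -l /etc/mtab
fi
```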

Has anyone seen anything like that?

thanks!
Gregory

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/


Re: Creating root LVM

2016-08-25 Thread Grzegorz Powiedziuk
We have lvroots on all our systems. In the past we had root without LVM, and
honestly, in both scenarios there was always a way to get to the data in case
of problems, provided of course that all devices were available.

But it is always good to have unique VG names (we do hostname_vgroot, for
example).
LVM actually made some migrations between different storage systems easier
(that was FCP, though). Thanks to LVM we didn't have to worry about some
aspects (mounts were always referring to /dev/mapper/vgname-lvname etc., no
matter what the underlying device names were).

/boot I still usually leave outside of LVM.
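The online-grow case that makes an LVM root attractive can be sketched
roughly like this (the VG/LV and device names are made up for the example; it
assumes free extents or a freshly added disk, and a filesystem that supports
online resize):

```shell
# Illustrative only -- adjust names to your system.
pvcreate /dev/dasdz1                          # prepare the new disk for LVM
vgextend myhost_vgroot /dev/dasdz1            # add it to the root VG
lvextend -r -L +5G /dev/myhost_vgroot/lvroot  # -r grows the filesystem too
```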

Greg





2016-08-23 21:20 GMT+02:00 Robert J Brenneman :

> It depends on what you consider to be the bigger problem.
>
> A) having to care upfront about what needs space where is hard. Make / one
> huge logical volume and don't worry about space until you need more, then
> just grow / online.
>
> B) fixing a machine that won't boot is hard. When /'s contents are smeared
> across multiple devices it is tough to get the machine to a working state
> again, often requiring the service of a dedicated emergency rescue system.
>
> Note that this is further complicated by cloud management tools that impose
> their own restrictions on system disk layouts. It seems like no matter what
> you prefer, your cloud tools won't handle your chosen file system layout
> unless you like one big vanilla / partition with ext3 and no lvm at all.
>
> On Tue, Aug 23, 2016, 13:26 Donald Russell  wrote:
>
> > We have rhel5 with rootvg and rootlv. That caused us some grief when a
> root
> > password was lost and we "simply wanted to mount it on another system".
> > Not so fast there skippy, all the systems have rootvg/lv so we had to
> work
> > around that... (Not rocket science, but inconvenient)
> >
> > Now, (upgrading to rehl7) we put the "basic Linux system" on a
> > simple-to-use Mod-9 and use LVM for application file systems and a few
> > others.  Now it's very simple to mount that / file system on another
> server
> > if necessary.
> >
> >
> >
> >
> >
> > On Tuesday, August 23, 2016, Michael Weiner 
> > wrote:
> >
> > > Good morning all,
> > >
> > > I was having a little debate yesterday and I want to get the experts on
> > > this list opinions.
> > >
> > > What's the best practice when it comes to the root directory?
> > >
> > > Is it acceptable and recommend to create an vgroot and lvroot so it is
> > > expandable?
> > >
> > > Or is it recommended to have the root directory as a regular directory
> > and
> > > not expandable.
> > >
> > > Thank you!
> > >
> > >
> > > Sent from my iPhone
> > >
> > >
> > > Sent from my iPhone
> > > --
> > > For LINUX-390 subscribe / signoff / archive access instructions,
> > > send email to lists...@vm.marist.edu  with the message:
> > > INFO LINUX-390 or visit
> > > http://www.marist.edu/htbin/wlvindex?LINUX-390
> > > --
> > > For more information on Linux on System z, visit
> > > http://wiki.linuxvm.org/
> > >
> >
> >
> > --
> > Sent from iPhone Gmail Mobile
> >
> > --
> > For LINUX-390 subscribe / signoff / archive access instructions,
> > send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or
> > visit
> > http://www.marist.edu/htbin/wlvindex?LINUX-390
> > --
> > For more information on Linux on System z, visit
> > http://wiki.linuxvm.org/
> >
>
> --
> For LINUX-390 subscribe / signoff / archive access instructions,
> send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or
> visit
> http://www.marist.edu/htbin/wlvindex?LINUX-390
> --
> For more information on Linux on System z, visit
> http://wiki.linuxvm.org/
>

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/


Re: SWAP under SAN disk

2016-07-26 Thread Grzegorz Powiedziuk
Sure.
But I still recommend setting up different levels of swap.
I always set up 3 devices with different priorities:

highest - vdisk (just a couple hundred megabytes for tiny, occasional
swapping)
medium - dasd (a few-gig mdisk for real swapping if something happens and
vdisk is not enough)
low - fcp (a huge disk for heavy emergency swapping)
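A hedged sketch of what that three-tier setup could look like in /etc/fstab
(device names are illustrative; pri= sets the swap priority, and higher
values are used first, see swapon(8)):

```shell
# /etc/fstab fragment -- illustrative device names
/dev/dasdv1          none  swap  defaults,pri=10  0 0  # vdisk: small and fast
/dev/dasdw1          none  swap  defaults,pri=5   0 0  # mdisk: a few gigs
/dev/mapper/swapfcp  none  swap  defaults,pri=1   0 0  # FCP: big, emergency only
```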

Gregory

2016-07-26 11:20 GMT-04:00 Victor Echavarry Diaz:

> Our SLES 11SP4 use 3390 disk for swapping purposes. Can we use, also, SAN
> disk for swap?
>
> Regards,
>
> Victor Echavarry
>
> System Programmer
>
> EVERTEC, LLC
>
>
>
>
>
>
>
>
>
>
>
>
> WARNING: This email and any files transmitted with it are confidential and
> intended solely for the use of the individual or entity to whom they are
> addressed. If you have received this email in error please delete it
> immediately.
> Please note that any views or opinions presented in this email are solely
> those
> of the author and do not necessarily represent those of EVERTEC, Inc. or
> its
> affiliates. Finally, the integrity and security of this message cannot be
> guaranteed on the Internet, and as such EVERTEC, Inc. and its affiliates
> accept
> no liability for any damage caused by any virus transmitted by this email.
>
> --
> For LINUX-390 subscribe / signoff / archive access instructions,
> send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or
> visit
> http://www.marist.edu/htbin/wlvindex?LINUX-390
> --
> For more information on Linux on System z, visit
> http://wiki.linuxvm.org/
>

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/


Re: LVM on unpartitioned disks (Was: RE: SLES12 + EDEV + bug)

2016-07-05 Thread Grzegorz Powiedziuk
2016-07-05 13:17 GMT-04:00 Mark Post :

> 
> If you don't have a support agreement and can't get a fix that way, you
> can revert the update to lvm2:
> zypper in --oldpackage lvm2-${version}
> where ${version} is the prior version you had installed.
>
>
> Mark Post
>
>
And if your system is already down and you can't recover from backup, then
link the volumes under a different SLES machine, mount them, chroot into it,
and downgrade lvm2 as Mark said. That's how I fixed one of my dead Linux
guests.
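For anyone hitting the same dead-guest situation, here is a rough sketch of
that rescue path (the device number, VG and LV names are made up for the
example):

```shell
# From a healthy SLES guest that has the dead guest's disks linked to it:
chccwdev -e 0.0.0200                    # bring the linked DASD online
vgscan && vgchange -ay vg_dead          # activate the dead guest's volume group
mount /dev/vg_dead/lv_root /mnt         # mount its root filesystem
mount --bind /dev  /mnt/dev             # bind mounts so the chroot is usable
mount --bind /proc /mnt/proc
mount --bind /sys  /mnt/sys
chroot /mnt
zypper in --oldpackage lvm2-<version>   # downgrade lvm2 as Mark described
```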

Gregory

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/


Re: Other migration problems (was Unable to login with ssh)

2016-06-29 Thread Grzegorz Powiedziuk
From what Alan and Mark said, I understand that the broken ssh might have
been just a small symptom of a bigger problem - data corruption. So
practically anything else might go wrong too.
Gregory

2016-06-29 14:01 GMT-04:00 Duerbusch, Tom :

> I didn't want to hijack the original thread so
>
> The conclusion on the original thread was that you need to be on a
> supported release of Suse when you go to a new z13.  This involved SSH and
> I assume crypto hardware.
>
> We have been planing to go to a z13 this year.
> I've said that many times before as we are on a z890, and have planed to go
> to a z9, z10, z114, z12 and now z13.
>
> We have SLES 9, SLES 10 and SLES 11 running.
> (After WAVV in 2011, I did manage to sunset our SLES 7 and SLES 8 guests)
>
> I'm ok with not being able to use new features/functions that are on a
> z13/zVM 6, on the unsupported software, but I'm now concerned about what I
> might loose, or crash with the new hardware.
>
> SSH isn't much of a problem, as there are so few people that actually log
> on to Linux or use SSH with Linux applications.
>
> So, the question is:
> Is there any other problems that others have come across, when migrating
> unsupported SLES systems to a z13?
>
> Thanks
>
> Tom Duerbusch
> THD Consulting
>
> --
>
> --
> For LINUX-390 subscribe / signoff / archive access instructions,
> send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or
> visit
> http://www.marist.edu/htbin/wlvindex?LINUX-390
> --
> For more information on Linux on System z, visit
> http://wiki.linuxvm.org/
>

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/


Re: Unable to login with ssh

2016-06-29 Thread Grzegorz Powiedziuk
That looks unusual.
How did you move those machines? Did you move them from one storage system to
another as well? How? Or did they stay on the same storage and you just
IPL-ed them on the new System z?
Do those SLES10 SP4 guests live on the same storage as the other Linux
guests? Is it DASD? Did you try to run fsck on those disks after you linked
them (W) to another guest?

Gregory


2016-06-29 5:42 GMT-04:00 van Sleeuwen, Berry <berry.vansleeu...@atos.net>:

> I did trace the login ssh process. I haven't tried the top sshd process
> yet. But that's because we have another problem now. (Apart from being ill
> the last few weeks.) We can't login to the machine at all anymore. So the
> problem with the machine gets worse.
>
> A few months ago we have moved to a z13 and zVM 6.3. All machines were
> moved successfully. But we now have problems with SLES10 SP4 machines. We
> could logon to them right after the move. At some point ssh failed. And
> later on the login fails entirely. Even during boot some services can’t be
> started anymore. It looks like the passwd is corrupted. But when I link the
> disks in another linux guest I can still read files like passwd, group and
> shadow.
>
> During boot we see some errors, such as:
>
> Starting D-BUS daemon Could not get password database information for UID
> of current process:
> User "???" unknown or no memory to allocate password entry
> Unknown username "haldaemon" in message bus configuration file
>
> Starting SSH daemon Privilege separation user sshd does not exist
> startproc:  exit status of parent of /usr/sbin/sshd: 255
>
> When I try to login to the affected machines users are not accepted
> anymore. Even in a 3270 console I can't login anymore.
>
> We have a couple of SLES10 SP2 machines without any problems, all SLES11
> machines function correctly. It looks like to be a problem specific to
> SLES10 SP4.
>
> Could there be an issue specifically related to SLES10 SP4 on zVM 6.3/z13?
>
> Met vriendelijke groet/With kind regards/Mit freundlichen Grüßen,
> Berry van Sleeuwen
>
> -Original Message-
> From: Linux on 390 Port [mailto:LINUX-390@VM.MARIST.EDU] On Behalf Of
> Mark Post
> Sent: Friday, May 27, 2016 7:29 PM
> To: LINUX-390@VM.MARIST.EDU
> Subject: Re: Unable to login with ssh
>
> >>> On 5/27/2016 at 08:57 AM, Grzegorz Powiedziuk <gpowiedz...@gmail.com>
> wrote:
> > One other thing you could try (saved me many times) but a bit trouble
> > some is doing some stracing.
>
> This was something I was going to suggest as well.
>
> > 1. ssh to the user@server and let it sit on the login 2. on the
> > server, do ps auxwww |grep sshd  and look for a new spawned process
>
> Personally I would just pick the "top most" ssh process started at boot
> time _before_ trying to connect over the network.
>
> -snip-
> > 3. strace -p 36203 &> logfile.x
>
> I would probably try strace -f -p 36203 -s500 -o strace.sshd
>
> You'll need to break out of it using ^c since the process won't terminate
> on its own.
>
> -snip->
>  5. Now examine the the trace by looking at the logile.x   (it will be a
> big
> > file).
>
> If you don't have terminal server access you'll probably need to access it
> via FTP since scp is not likely to work.  Which raises a perhaps
> interesting question.  From the 3270 console are you able to ssh/scp _out_
> of the system?  That would at least allow you to send things off the system
> to some place you can use "normal" tools.
>
>
> Mark Post
>
> --
> For LINUX-390 subscribe / signoff / archive access instructions, send
> email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
> http://www.marist.edu/htbin/wlvindex?LINUX-390
> --
> For more information on Linux on System z, visit http://wiki.linuxvm.org/
> This e-mail and the documents attached are confidential and intended
> solely for the addressee; it may also be privileged. If you receive this
> e-mail in error, please notify the sender immediately and destroy it. As
> its integrity cannot be secured on the Internet, Atos' liability cannot be
> triggered for the message content. Although the sender endeavours to
> maintain a computer virus-free network, the sender does not warrant that
> this transmission is virus-free and will not be liable for any damages
> resulting from any virus transmitted. On all offers and agreements under
> which Atos Nederland B.V. supplies goods and/or services of whatever
> nature, the Terms of Delivery from Atos Nederl

Re: Unable to login with ssh

2016-05-27 Thread Grzegorz Powiedziuk
2016-05-27 8:57 GMT-04:00 Grzegorz Powiedziuk <gpowiedz...@gmail.com>:

> One other thing you could try (saved me many times) but a bit troublesome
> is doing some stracing.
>
> 1. ssh to the user@server and let it sit on the login
> 2. on the server, do ps auxwww |grep sshd  and look for a new spawned
> process
>
> [root@localhost ~]# ps auxwww |grep sshd  |grep gpow
> root  36195  0.0  1.7 150104  8324 ?Ss   08:50   0:00 sshd:
> gpowiedziuk [priv]
> sshd  36196  0.0  1.2  87072  5908 ?S08:50   0:00 sshd:
> gpowiedziuk [net]
>
> In my case the first one, owned by root, was the one that had all the
> interesting stuff in it so I did
>
> 3. strace -p 36203 &> logfile.x
>
> 4. on the login prompt, provide password and wait until it exits
>
> 5. Now examine the trace by looking at logfile.x (it will be a
> big file).
>
>
> somehow my gmail decided to send the email before I was finished

In the trace you will see some interesting stuff .. including the password
:)
But anyway, grep that file for "open". I would say that's what matters most.
Look closely at which files are being opened. Start checking those files for
configuration errors, starting from the last one opened.

Gregory

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/


Re: Unable to login with ssh

2016-05-27 Thread Grzegorz Powiedziuk
One other thing you could try (it has saved me many times), though a bit
troublesome, is doing some stracing.

1. ssh to the user@server and let it sit on the login
2. on the server, do ps auxwww |grep sshd  and look for a new spawned
process

[root@localhost ~]# ps auxwww |grep sshd  |grep gpow
root  36195  0.0  1.7 150104  8324 ?Ss   08:50   0:00 sshd:
gpowiedziuk [priv]
sshd  36196  0.0  1.2  87072  5908 ?S08:50   0:00 sshd:
gpowiedziuk [net]

In my case the first one, owned by root, was the one that had all the
interesting stuff in it, so I did

3. strace -p 36203 &> logfile.x

4. on the login prompt, provide password and wait until it exits

5. Now examine the trace by looking at logfile.x (it will be a big
file).
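Step 5 can be sketched like this; the logfile.x contents below are synthetic,
just to show how to pull out the opened files (on a real system use the
strace output itself):

```shell
# Fake a few strace lines so the filter can be demonstrated end to end.
cat > logfile.x <<'EOF'
open("/etc/pam.d/sshd", O_RDONLY) = 4
open("/etc/nologin", O_RDONLY) = -1 ENOENT (No such file or directory)
open("/etc/passwd", O_RDONLY|O_CLOEXEC) = 4
EOF

# List just the paths sshd tried to open, in order.
grep -o 'open("[^"]*"' logfile.x | sed 's/open("//; s/"$//'
```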





2016-05-26 10:31 GMT-04:00 van Sleeuwen, Berry <berry.vansleeu...@atos.net>:

> I did try that ssh option but that didn't help me.
>
> All filesystem and rights are correct.
>
> Met vriendelijke groet/With kind regards/Mit freundlichen Grüßen,
> Berry van Sleeuwen
>
>
> -Original Message-
> From: Linux on 390 Port [mailto:LINUX-390@VM.MARIST.EDU] On Behalf Of
> Grzegorz Powiedziuk
> Sent: Thursday, May 26, 2016 3:45 PM
> To: LINUX-390@VM.MARIST.EDU
> Subject: Re: Unable to login with ssh
>
> long shot but, does below do anything?
>
> ssh -t user@x.x.x.x bash -ix
>
>
> Besides that,
> All filesystems are RW and they are not out of space?
> does home directory have proper rights?
> You probably would see errors in /var/log/messages but better double check
>
> Gregory
>
>
> 2016-05-26 7:45 GMT-04:00 van Sleeuwen, Berry <berry.vansleeu...@atos.net
> >:
>
> > Hi Michael,
> >
> > I tried, but that didn't help me any further.
> >
> > ...
> > debug1: session_by_channel: session 0 channel 0
> > debug1: session_input_channel_req: session 0 req shell
> > debug1: Setting controlling tty using TIOCSCTTY.
> > debug1: Received SIGCHLD.
> > debug1: session_by_pid: pid 2227
> > debug1: session_exit_message: session 0 channel 0 pid 2227
> > debug1: session_exit_message: release channel 0
> > debug1: session_pty_cleanup: session 0 release /dev/pts/0
> > debug1: session_by_channel: session 0 channel 0
> > debug1: session_close_by_channel: channel 0 child 0
> > debug1: session_close: session 0 pid 0
> > debug1: channel 0: free: server-session, nchannels 1 Received
> > disconnect from 10.193.116.43: 11: disconnected by user
> > debug1: do_cleanup
> > debug1: PAM: cleanup
> > debug1: PAM: closing session
> > debug1: PAM: deleting credentials
> >
> > Met vriendelijke groet/With kind regards/Mit freundlichen Grüßen,
> > Berry van Sleeuwen
> >
> >
> > -Original Message-
> > From: Linux on 390 Port [mailto:LINUX-390@VM.MARIST.EDU] On Behalf Of
> > Michael MacIsaac
> > Sent: Thursday, May 26, 2016 1:27 PM
> > To: LINUX-390@VM.MARIST.EDU
> > Subject: Re: Unable to login with ssh
> >
> > Berry,
> >
> > Did you try turning debug on for sshd?  I find you get a lot more
> > information debugging from the server side than the client (and this
> > makes sense for security reasons).
> >
> > On a SLES system, in the file /etc/sysconfig/ssh, I add a '-d' to :
> >
> > SSHD_OPTS="-d"
> >
> > Then "service restart sshd".  You should immediately see debug info on
> > the console.
> >
> > Hope this helps
> >
> > -Mike M
> >
> > On Thu, May 26, 2016 at 7:14 AM, van Sleeuwen, Berry <
> > berry.vansleeu...@atos.net> wrote:
> >
> > > Hi Rick,
> > >
> > > I can logon on the VM console so I expect the shell and various
> > > profiles should be ok.
> > >
> > > It's (almost) the same when I logon through ssh with another userid.
> > >
> > > Password:
> > > debug2: input_userauth_info_req
> > > debug2: input_userauth_info_req: num_prompts 0
> > > debug1: Authentication succeeded (keyboard-interactive).
> > > Authenticated to x.x.x.x ([x.x.x.x]:22).
> > > debug1: channel 0: new [client-session]
> > > debug2: channel 0: send open
> > > debug1: Requesting no-more-sessi...@openssh.com
> > > debug1: Entering interactive session.
> > > Write failed: Broken pipe
> > >
> > > Met vriendelijke groet/With kind regards/Mit freundlichen Grüßen,
> > > Berry van Sleeuwen
> > >
> > > -Original Message-
> > > From: Linux on 390 Port [mailto:LINUX-390@VM.MARIST.EDU] On Behalf
> > > Of Rick Troth
> > > Sent: Thursday

Re: Unable to login with ssh

2016-05-26 Thread Grzegorz Powiedziuk
Any recent changes to /etc/pam.d/sshd ?
What do you have over there currently?

Gregory

2016-05-26 10:31 GMT-04:00 van Sleeuwen, Berry <berry.vansleeu...@atos.net>:

> I did try that ssh option but that didn't help me.
>
> All filesystem and rights are correct.
>
> Met vriendelijke groet/With kind regards/Mit freundlichen Grüßen,
> Berry van Sleeuwen
>
>
> -Original Message-
> From: Linux on 390 Port [mailto:LINUX-390@VM.MARIST.EDU] On Behalf Of
> Grzegorz Powiedziuk
> Sent: Thursday, May 26, 2016 3:45 PM
> To: LINUX-390@VM.MARIST.EDU
> Subject: Re: Unable to login with ssh
>
> long shot but, does below do anything?
>
> ssh -t user@x.x.x.x bash -ix
>
>
> Besides that,
> All filesystems are RW and they are not out of space?
> does home directory have proper rights?
> You probably would see errors in /var/log/messages but better double check
>
> Gregory
>
>
> 2016-05-26 7:45 GMT-04:00 van Sleeuwen, Berry <berry.vansleeu...@atos.net
> >:
>
> > Hi Michael,
> >
> > I tried, but that didn't help me any further.
> >
> > ...
> > debug1: session_by_channel: session 0 channel 0
> > debug1: session_input_channel_req: session 0 req shell
> > debug1: Setting controlling tty using TIOCSCTTY.
> > debug1: Received SIGCHLD.
> > debug1: session_by_pid: pid 2227
> > debug1: session_exit_message: session 0 channel 0 pid 2227
> > debug1: session_exit_message: release channel 0
> > debug1: session_pty_cleanup: session 0 release /dev/pts/0
> > debug1: session_by_channel: session 0 channel 0
> > debug1: session_close_by_channel: channel 0 child 0
> > debug1: session_close: session 0 pid 0
> > debug1: channel 0: free: server-session, nchannels 1 Received
> > disconnect from 10.193.116.43: 11: disconnected by user
> > debug1: do_cleanup
> > debug1: PAM: cleanup
> > debug1: PAM: closing session
> > debug1: PAM: deleting credentials
> >
> > Met vriendelijke groet/With kind regards/Mit freundlichen Grüßen,
> > Berry van Sleeuwen
> >
> >
> > -Original Message-
> > From: Linux on 390 Port [mailto:LINUX-390@VM.MARIST.EDU] On Behalf Of
> > Michael MacIsaac
> > Sent: Thursday, May 26, 2016 1:27 PM
> > To: LINUX-390@VM.MARIST.EDU
> > Subject: Re: Unable to login with ssh
> >
> > Berry,
> >
> > Did you try turning debug on for sshd?  I find you get a lot more
> > information debugging from the server side than the client (and this
> > makes sense for security reasons).
> >
> > On a SLES system, in the file /etc/sysconfig/ssh, I add a '-d' to :
> >
> > SSHD_OPTS="-d"
> >
> > Then "service restart sshd".  You should immediately see debug info on
> > the console.
> >
> > Hope this helps
> >
> > -Mike M
> >
> > On Thu, May 26, 2016 at 7:14 AM, van Sleeuwen, Berry <
> > berry.vansleeu...@atos.net> wrote:
> >
> > > Hi Rick,
> > >
> > > I can logon on the VM console so I expect the shell and various
> > > profiles should be ok.
> > >
> > > It's (almost) the same when I logon through ssh with another userid.
> > >
> > > Password:
> > > debug2: input_userauth_info_req
> > > debug2: input_userauth_info_req: num_prompts 0
> > > debug1: Authentication succeeded (keyboard-interactive).
> > > Authenticated to x.x.x.x ([x.x.x.x]:22).
> > > debug1: channel 0: new [client-session]
> > > debug2: channel 0: send open
> > > debug1: Requesting no-more-sessi...@openssh.com
> > > debug1: Entering interactive session.
> > > Write failed: Broken pipe
> > >
> > > Met vriendelijke groet/With kind regards/Mit freundlichen Grüßen,
> > > Berry van Sleeuwen
> > >
> > > -Original Message-
> > > From: Linux on 390 Port [mailto:LINUX-390@VM.MARIST.EDU] On Behalf
> > > Of Rick Troth
> > > Sent: Thursday, May 26, 2016 12:36 PM
> > > To: LINUX-390@VM.MARIST.EDU
> > > Subject: Re: Unable to login with ssh
> > >
> > > Bad shell? (exits right away)
> > >
> > > Can you sign on with a different ID?
> > >
> > > Check your .profile, .bashrc, and their equiv under /etc.
> > >
> > > Try the virtual console if you can't do these "checks" from another ID.
> > >
> > > I might also suggest looking into the ciphers available (which
> > > *have* changed recently), but since it shows "last login" that's
> > > probably not the problem.
> > >

Re: Unable to login with ssh

2016-05-26 Thread Grzegorz Powiedziuk
Long shot, but does the below do anything?

ssh -t user@x.x.x.x bash -ix


Besides that,
All filesystems are RW and not out of space?
Does the home directory have proper permissions?
You would probably see errors in /var/log/messages, but better double check.

Gregory


2016-05-26 7:45 GMT-04:00 van Sleeuwen, Berry :

> Hi Michael,
>
> I tried, but that didn't help me any further.
>
> ...
> debug1: session_by_channel: session 0 channel 0
> debug1: session_input_channel_req: session 0 req shell
> debug1: Setting controlling tty using TIOCSCTTY.
> debug1: Received SIGCHLD.
> debug1: session_by_pid: pid 2227
> debug1: session_exit_message: session 0 channel 0 pid 2227
> debug1: session_exit_message: release channel 0
> debug1: session_pty_cleanup: session 0 release /dev/pts/0
> debug1: session_by_channel: session 0 channel 0
> debug1: session_close_by_channel: channel 0 child 0
> debug1: session_close: session 0 pid 0
> debug1: channel 0: free: server-session, nchannels 1
> Received disconnect from 10.193.116.43: 11: disconnected by user
> debug1: do_cleanup
> debug1: PAM: cleanup
> debug1: PAM: closing session
> debug1: PAM: deleting credentials
>
> Met vriendelijke groet/With kind regards/Mit freundlichen Grüßen,
> Berry van Sleeuwen
>
>
> -Original Message-
> From: Linux on 390 Port [mailto:LINUX-390@VM.MARIST.EDU] On Behalf Of
> Michael MacIsaac
> Sent: Thursday, May 26, 2016 1:27 PM
> To: LINUX-390@VM.MARIST.EDU
> Subject: Re: Unable to login with ssh
>
> Berry,
>
> Did you try turning debug on for sshd?  I find you get a lot more
> information debugging from the server side than the client (and this makes
> sense for security reasons).
>
> On a SLES system, in the file /etc/sysconfig/ssh, I add a '-d' to :
>
> SSHD_OPTS="-d"
>
> Then "service restart sshd".  You should immediately see debug info on the
> console.
>
> Hope this helps
>
> -Mike M
>
> On Thu, May 26, 2016 at 7:14 AM, van Sleeuwen, Berry <
> berry.vansleeu...@atos.net> wrote:
>
> > Hi Rick,
> >
> > I can logon on the VM console so I expect the shell and various
> > profiles should be ok.
> >
> > It's (almost) the same when I logon through ssh with another userid.
> >
> > Password:
> > debug2: input_userauth_info_req
> > debug2: input_userauth_info_req: num_prompts 0
> > debug1: Authentication succeeded (keyboard-interactive).
> > Authenticated to x.x.x.x ([x.x.x.x]:22).
> > debug1: channel 0: new [client-session]
> > debug2: channel 0: send open
> > debug1: Requesting no-more-sessi...@openssh.com
> > debug1: Entering interactive session.
> > Write failed: Broken pipe
> >
> > Met vriendelijke groet/With kind regards/Mit freundlichen Grüßen,
> > Berry van Sleeuwen
> >
> > -Original Message-
> > From: Linux on 390 Port [mailto:LINUX-390@VM.MARIST.EDU] On Behalf Of
> > Rick Troth
> > Sent: Thursday, May 26, 2016 12:36 PM
> > To: LINUX-390@VM.MARIST.EDU
> > Subject: Re: Unable to login with ssh
> >
> > Bad shell? (exits right away)
> >
> > Can you sign on with a different ID?
> >
> > Check your .profile, .bashrc, and their equiv under /etc.
> >
> > Try the virtual console if you can't do these "checks" from another ID.
> >
> > I might also suggest looking into the ciphers available (which *have*
> > changed recently), but since it shows "last login" that's probably not
> > the problem.
> >
> > -- R; <><
> >
> >
> > On May 26, 2016 5:07 AM, "van Sleeuwen, Berry"
> >  > >
> > wrote:
> >
> > > Hi All,
> > >
> > > I try to login to a SLES10 linux guest. It looks like the login is
> > > correct but right after that the session is closed. I have tried to
> > > ssh from another linux guest, with trace options. But it is still
> > > unclear to me why the session is aborted. I have tried to find some
> > > clues in various locations but a lot of those solutions are either
> > > not applicable or do not help to solve the issue.
> > >
> > > Why would this session be closed and how can I fix this?
> > >
> > >  > > cyphers>
> > > Password:
> > > debug3: packet_send2: adding 32 (len 20 padlen 12 extra_pad 64)
> > > debug2: input_userauth_info_req
> > > debug2: input_userauth_info_req: num_prompts 0
> > > debug3: packet_send2: adding 48 (len 10 padlen 6 extra_pad 64)
> > > debug1: Authentication succeeded (keyboard-interactive).
> > > Authenticated to x.x.x.x ([y.y.y.y]:22).
> > >
> > > 
> > > 
> > >  > > after that the session close starts>
> > >
> > > debug2: channel 0: open confirm rwindow 0 rmax 32768
> > > debug2: channel_input_status_confirm: type 99 id 0
> > > debug2: PTY allocation request accepted on channel 0
> > > debug2: channel 0: rcvd adjust 2097152
> > > debug2: channel_input_status_confirm: type 99 id 0
> > > debug2: shell request accepted on channel 0
> > > debug1: client_input_channel_req: channel 0 rtype exit-signal reply
> > > 0
> > > debug1: client_input_channel_req: channel 0 rtype e...@openssh.com
> > > reply 0
> > > debug2: channel 0: rcvd eow
> > > debug2: channel 

Re: x-11 on SLES

2016-05-05 Thread Grzegorz Powiedziuk
2016-05-05 9:11 GMT-04:00 Grzegorz Powiedziuk <gpowiedz...@gmail.com>:

>
>
> I just would like to mention about MobaXterm which is one of "my best
> discoveries" during last few years when it comes to software (personal
> edition is free)
> It is a terminal application like putty but it has extremely useful
> features which automate many things like ssh tunnels and X11 forwarding.
> This thing has a built in cygwin and X server.
>
> So you just start it up in windows (you don't even have to install it) ,
> connect to your linux server like you do with any other terminal
> application and when you start an app that makes calls to X server it just
> works an pops up on your screen.
>
>
I need to clear something up (someone just asked me about this).
Above, I meant that you don't have to install it like most traditional
Windows programs (MSI installer, etc.).
It is a standalone "exe" which you first have to download from their
website (pick the portable version).
It comes as a zip file, so just unzip it and run it. Of course, if you
prefer an installable version, you can download one too.

Gregory

>

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/


Re: x-11 on SLES

2016-05-05 Thread Grzegorz Powiedziuk
2016-05-04 11:31 GMT-04:00 Frank Wolfe :

> Good day all
>
> Has anyone setup x-11 on SLES for Linux on z? Can it be done?
>
> Have a good day
>
> Fank
>
>

I would just like to mention MobaXterm, which is one of my best
software discoveries of the last few years (the personal edition is
free).
It is a terminal application like PuTTY, but it has extremely useful
features that automate many things, such as SSH tunnels and X11
forwarding. It has a built-in Cygwin environment and X server.

So you just start it up in Windows (you don't even have to install it),
connect to your Linux server as you would with any other terminal
application, and when you start an app that makes calls to the X server,
it just works and pops up on your screen.

Before that I was using VNC, Xming, or even a dedicated Linux desktop
virtual machine. But this is by far the best and easiest to use.

It also has many other nice features. For example, it automatically
opens an scp/sftp connection, so you always have a Windows-like
directory tree next to the terminal for easy drag and drop of files
between your workstation and the Linux server.

I just wish they would throw in c3270 one day; then I would need
nothing more.

Give it a try. This program spreads like a virus - everyone I work with
uses it already.

Gregory



Re: Install not working for SLES11 SP4 thru TRUNKED VSWITCH

2016-04-21 Thread Grzegorz Powiedziuk
2016-04-21 16:29 GMT-04:00 Mark Post <mp...@suse.com>:

> >>> On 4/21/2016 at 03:38 PM, Grzegorz Powiedziuk <gpowiedz...@gmail.com>
> wrote:
> -snip-
> > I believe Mark said that having linux to handle vlan tagging is hard.
> >
> > But what you are trying to do is different. In your case, vswitch is
> > removing/adding vlan tags from/to  frames on the fly.
>
> If Grzegorz is correct in _his_ assumption, what he posted is a good
> description of what goes on and that _is_ something the installer can
> handle with no problems since from the guest's perspective it is not VLAN
> aware.
>
>
You're right. That was just an assumption, and after reading the whole
thread again I am no longer sure that this is what Dave wanted.
So to give the full picture, you have three options (simplified):

1. Basic - the OSA is plugged into an "access" port on the real switch,
so there are no VLAN tags between the real switch and the OSA, and no
VLAN tags in the vswitch or in Linux. The VSWITCH has to be defined as
VLAN UNAWARE.

2. Most common and useful - the OSA is plugged into a "trunk" port on
the real switch. That port has to have the VLAN(s) added to it. The
VSWITCH handles the VLAN tags, so traffic between the vswitch (OSA) and
the real switch is tagged with VLAN IDs; the vswitch removes and adds
the tags depending on direction. Traffic between the VSWITCH and the
Linux guest is untagged - standard network frames - so from the Linux
perspective it looks like (1). The VSWITCH has to be defined as VLAN
AWARE, and the grants should carry a VLAN ID.

3. Less common, but useful in some cases - the OSA is plugged into a
"trunk" port on the real switch, and in general this is the same as
(2). But when you issue the GRANT, you can say that this specific grant
should act as "porttype trunk" (and you specify which VLANs are
trunked), so instead of removing the VLAN tag the VSWITCH forwards the
whole tagged frame to the Linux guest. The Linux guest then has to be
configured to send and receive tagged frames. As Mark mentioned, this
can be troublesome during the install process.
In that case you might want to switch to (2) just for the install, and
afterwards configure the VLAN interface in Linux and redefine your
grant with porttype trunk.

Gregory



Re: Simple DASD question

2016-04-21 Thread Grzegorz Powiedziuk
I can't remember how it is during the install, but I would bet that the
default partitioning will not use more than one drive. Did you try
going into advanced partitioning to see what the installer suggests?
You should be able to manually create one big LVM setup with a separate
partition for /boot (or you can include /boot in LVM, but not all
distros/versions support this). If you want btrfs instead (I usually
stick to ext3/4), then as we agreed LVM doesn't make much sense and
btrfs volume pools should be used instead. But I haven't done that
before, so I can't tell.

Nonetheless, you should still be able to change it now if you follow
Mark's instructions step by step.
First make sure that the second DASD is activated and available after
IPL (yast), then pvcreate, vgextend, lvextend, and a filesystem resize.
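That sequence, sketched as commands. The device number 0.0.0201, volume group name "system", and logical volume name "root" are assumptions based on this thread, and the disk is assumed to be dasdfmt-formatted already; adjust for your system:

```shell
# 1. Bring the second DASD online persistently (SLES):
dasd_configure 0.0.0201 1
# 2. Put one partition spanning the whole disk on it:
fdasd -a /dev/dasdb
# 3. Add that partition to the existing volume group:
pvcreate /dev/dasdb1
vgextend system /dev/dasdb1
# 4. Grow the logical volume into the new free space:
lvextend -l +100%FREE /dev/system/root
# 5. Grow the filesystem (ext3/ext4):
resize2fs /dev/system/root
```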
Gregory

On Thursday, April 21, 2016, Tom Huegel <tehue...@gmail.com> wrote:

> I don't know.. I've done a couple of re-installs and I still don't
> see dasdb (251) being used.
> This is SLES12 SP1. I activate and format my two drives and then take all
> of the defaults.
>
>
> On Thu, Apr 21, 2016 at 11:50 AM, Tom Huegel <tehue...@gmail.com
> <javascript:;>> wrote:
>
> > Since this is a new install I'll just go back to step one and start with
> > reinstalling..
> > I'll let you know how it works out.
> >
> > On Thu, Apr 21, 2016 at 11:08 AM, Robert J Brenneman <bren...@gmail.com
> <javascript:;>>
> > wrote:
> >
> >> On Thu, Apr 21, 2016 at 2:00 PM, Grzegorz Powiedziuk <
> >> gpowiedz...@gmail.com <javascript:;>>
> >> wrote:
> >>
> >> > I see. I never used btrfs before so it makes sense.
> >> > So  isn't using LVM with btrfs on top of it complicating things
> >> more?
> >> >
> >>
> >> Basically yes.
> >>
> >> The default SLES 12 install does not use LVM though, it only uses the
> >> btrfs
> >> functions to achieve similar results.
> >>
> >> Doing both LVM and btrfs is not impossible, but you need to be
> thoughtful
> >> about how you combine those two.
> >>
> >> --
> >> Jay Brenneman
> >>
> >> --
> >> For LINUX-390 subscribe / signoff / archive access instructions,
> >> send email to lists...@vm.marist.edu <javascript:;> with the message:
> INFO LINUX-390 or
> >> visit
> >> http://www.marist.edu/htbin/wlvindex?LINUX-390
> >> --
> >> For more information on Linux on System z, visit
> >> http://wiki.linuxvm.org/
> >>
> >
> >
>
> --
> For LINUX-390 subscribe / signoff / archive access instructions,
> send email to lists...@vm.marist.edu <javascript:;> with the message:
> INFO LINUX-390 or visit
> http://www.marist.edu/htbin/wlvindex?LINUX-390
> --
> For more information on Linux on System z, visit
> http://wiki.linuxvm.org/
>



Re: Install not working for SLES11 SP4 thru TRUNKED VSWITCH

2016-04-21 Thread Grzegorz Powiedziuk
2016-04-21 14:52 GMT-04:00 Dave Myers :

> Since Mark has told us that what we're attempting to do is not supported
> by the RAM system...I'll close this thread and thank you all for your
> feedback.
>
> I do have one last question for Alan.
>
> In this statement does the VLAN AWARE specify TRUNKING to the uplink OSA ?
> If not what is the difference between what we coded VLAN 229 and  VLAN
> AWARE?
>
> DEFINE VSWITCH VSW1  RDEV 400.P1  VLAN AWARE  NATIVE NONE
>
> Thanks,
> Dave
>
>
I believe Mark said that having Linux itself handle the VLAN tagging is
hard.

But what you are trying to do is different. In your case, the vswitch
is removing/adding VLAN tags from/to frames on the fly.

To do what Mark mentioned (and I doubt this is what you want to do),
you have to specify "porttype trunk" when you issue the GRANT command.
This lets the Linux guest see the VLAN tags. But I am pretty sure that
you don't want to do this.

In your case, you let the vswitch do the VLAN tagging and untagging. So
the vswitch has to be VLAN aware, so that it can receive and understand
tagged frames from the outside real switch.
When it receives a tagged frame from the outside world, it removes the
VLAN tag and sends the actual frame to the guests that have been
granted access to that specific VLAN. And when the vswitch receives an
untagged frame from a guest, it adds a VLAN tag to it and sends it to
the real switch (a trunk port, specifically).
So from the Linux perspective there are no VLAN tags and everything is
nice and easy.

So what you are doing should work perfectly fine, as long as the real
switch is set up correctly (the port is set to trunk and your VLAN has
been added to it).

When it comes to syntax, I don't believe that coding VLAN AWARE versus
VLAN xxx on DEFINE VSWITCH makes any difference in your case.
According to the manual:
"defvid
defines the virtual switch as a VLAN-aware switch supporting IEEE
standard 802.1Q. The defvid defines the default VLAN ID to be assigned
to guest ports when no VLAN ID is coded on the SET VSWITCH GRANT VLAN
command"


Gregory



Re: Simple DASD question

2016-04-21 Thread Grzegorz Powiedziuk
2016-04-21 13:39 GMT-04:00 Tom Huegel :

> Oh well I must have done something wrong, it won't boot (IPL) now.
> Booting default
> (grub2)
>
>
>
> [  OK  ] Found device
> /dev/disk/by-uuid/f75ec84d-5640-4a2c-bced-e026dd7ec2b9.
>
> 
> Warning: /dev/disk/by-uuid/2103a597-f950-4b63-931e-675d2c52dbd9 does not
> exist
>
>
>
It seems like the device is still missing. Did you run yast or
dasd_configure before doing all that, and did it update the initrd so
that the device is available at boot time?
In the dracut emergency shell, try a couple of basic commands like
lsdasd or cat /proc/dasd/devices to see whether the second disk is
actually there (probably not).
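If it is indeed missing, the usual fix on SLES is to re-enable the device persistently and rebuild the initrd so it is online at boot. The device number 0.0.0201 below is an assumption; use the address of your second disk:

```shell
dasd_configure 0.0.0201 1   # writes the persistent udev rule for the DASD
lsdasd                      # confirm the device now shows up online
mkinitrd                    # rebuild the initrd so the disk exists at boot
```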



Re: Simple DASD question

2016-04-21 Thread Grzegorz Powiedziuk
2016-04-21 13:27 GMT-04:00 Robert J Brenneman :

>
> That's part of what the new btrfs in SLES 12 does for you. It does
> management of multiple physical block devices as a single logical entity,
> as well as talking that single logical entity and using it to back multiple
> mount points ( now called subvolumes ? ) to do things like snapshot just
> the /home section so you can do a rollback if you need to.
>
> It's part of the new SLES support to restore off service if required.
>
>
I see. I never used btrfs before, so that makes sense.
So isn't using LVM with btrfs on top of it complicating things even more?

Gregory



Re: Simple DASD question

2016-04-21 Thread Grzegorz Powiedziuk
I am not sure about "PVS", but "FREE" will work.
You want to extend the LV, so it should be something like this:

lvextend -l +100%FREE /dev/system/root   (or /dev/mapper/system-root)

(/dev/system is the volume group, which from what I can see has already
been extended - it has 7GB free.)

Of course, is that what you want to do - extend the root logical
volume?
Next you will have to resize the filesystem. If it is ext3 or ext4, you
can just run resize2fs /dev/system/root and it will use all the free
space.


But what I don't like is the output of your "df" command. Nobody said
anything about it before, so I guess it is OK, but I've never seen
something like this:

/dev/mapper/system-root  5.3G  3.0G  2.0G  60% /var/tmp
/dev/mapper/system-root  5.3G  3.0G  2.0G  60% /var/spool
/dev/mapper/system-root  5.3G  3.0G  2.0G  60% /var/opt
/dev/mapper/system-root  5.3G  3.0G  2.0G  60% /var/log
/dev/mapper/system-root  5.3G  3.0G  2.0G  60% /var/lib/pgsql
/dev/mapper/system-root  5.3G  3.0G  2.0G  60% /var/lib/mailman
/dev/mapper/system-root  5.3G  3.0G  2.0G  60% /var/crash
/dev/mapper/system-root  5.3G  3.0G  2.0G  60% /usr/local
/dev/mapper/system-root  5.3G  3.0G  2.0G  60% /var/lib/named
/dev/mapper/system-root  5.3G  3.0G  2.0G  60% /tmp
/dev/mapper/system-root  5.3G  3.0G  2.0G  60% /srv
/dev/mapper/system-root  5.3G  3.0G  2.0G  60% /opt
/dev/mapper/system-root  5.3G  3.0G  2.0G  60% /home
/dev/mapper/system-root  5.3G  3.0G  2.0G  60% /boot/grub2/s390x-emu


According to this, the same logical volume is mounted multiple times.
Honestly, it doesn't make sense to me.
There should be just one mount point if you have only one "root"
logical volume without separate volumes for the other mount points:
/dev/mapper/system-root  xxx xxx xxx 60% /

And that's it.


Gregory

2016-04-21 12:04 GMT-04:00 Tom Huegel :

> Now I need just a little more help.
> Would this be correct to use all of the new volume? " lvextend -l +100%PVS
> /dev/system "
> Thanks
>
> sles12:~ # vgdisplay
>   --- Volume group ---
>   VG Name   system
>   System ID
>   Formatlvm2
>   Metadata Areas2
>   Metadata Sequence No  4
>   VG Access read/write
>   VG Status resizable
>   MAX LV0
>   Cur LV2
>   Open LV   2
>   Max PV0
>   Cur PV2
>   Act PV2
>   VG Size   13.55 GiB
>   PE Size   4.00 MiB
>   Total PE  3470
>   Alloc PE / Size   1709 / 6.68 GiB
>   Free  PE / Size   1761 / 6.88 GiB
>   VG UUID   8wdm3K-4TNN-yqqn-v4bG-vBFE-INMc-AIZCxT
>
>
> On Wed, Apr 20, 2016 at 8:06 AM, Mark Post  wrote:
>
> > >>> On 4/19/2016 at 04:42 PM, Michael J Nash 
> wrote:
> > > Greeting Mark, please tell us why  use_lvmetad  has been disabled!
> >
> > It wasn't considered ready for enterprise use.
> >
> >
> > Mark Post
> >
> > --
> > For LINUX-390 subscribe / signoff / archive access instructions,
> > send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or
> > visit
> > http://www.marist.edu/htbin/wlvindex?LINUX-390
> > --
> > For more information on Linux on System z, visit
> > http://wiki.linuxvm.org/
> >
>
> --
> For LINUX-390 subscribe / signoff / archive access instructions,
> send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or
> visit
> http://www.marist.edu/htbin/wlvindex?LINUX-390
> --
> For more information on Linux on System z, visit
> http://wiki.linuxvm.org/
>



Re: ? FTP permissions on FEDORA ?

2016-03-25 Thread Grzegorz Powiedziuk
Sometimes I can't believe that issues like this are still around. I
recently had a very similar issue with IBM KVM.
I mean, it is just a simple FTP URL... it shouldn't be that hard to
parse, even if the user forgets a slash or adds too many of them
somewhere.
IBM KVM, for example, didn't like a URL like this:
 ftp://user:root@x.x.x.x/
I had an ftpd server which was masking the real path as "/".

That was too much for IBM KVM (or actually for Red Hat/Fedora, which
was under the hood). I uncompressed the initrd and found the script
that parses the URL, and as far as I remember it needed /something/ to
work properly ;)


2016-03-25 15:52 GMT-04:00 Tom Huegel <tehue...@gmail.com>:

> the extra slash "/" .. that did it..
>
> Thank-you everyone that helped me with this.. I really appreciate it. -:)
> Tom
>
> On Fri, Mar 25, 2016 at 12:42 PM, Offer Baruch <offerbar...@gmail.com>
> wrote:
>
> > Try adding an extra / at the beginning of the path:
> > ftp://root:pass4root@172.17.51.126//var/ftp/linins/
> >
> > That is a common issue as well...
> >
> > Offer
> > On Mar 25, 2016 10:18 PM, "Grzegorz Powiedziuk" <gpowiedz...@gmail.com>
> > wrote:
> >
> > > ok, now it is is probably one of many very common "little" issues with
> > > paths and proper syntax.
> > > Take your exact path from your "inst.repo" and paste it into your
> > > webbrowser and make sure that you can see .treeinfo file in there. Can
> > you?
> > > Those installators are very sensitive about little things like missing
> or
> > > extra "/" at the end of the path.
> > >
> > > Also, another way to debug this is looking into ftp server log file
> while
> > > you run your install.
> > > In the log you should be seeing what exactly from FTP server's
> > perspective,
> > > the linux install is trying to open.
> > > This usually gives me an idea what am I  doing wrong.
> > >
> > >
> > > 2016-03-25 15:00 GMT-04:00 Tom Huegel <tehue...@gmail.com>:
> > >
> > > > Thank you Grzegorz that information is most appreciated.
> > > >  I am sure that would have been my next problem.
> > > >
> > > > Even after your suggested changes I still get this message..
> > > >
> > > > [7.441940] dracut-initqueue[665]: Warning: can't find installer
> > > > mainimage path in .treeinfo
> > > > [7.448315] dracut-initqueue[665]: % Total% Received % Xferd
> > > > Average Speed   TimeTime Time  Current
> > > > [7.448810] dracut-initqueue[665]: Dload  Upload   Total   Spent
> > > > Left  Speed
> > > > [7.466071] dracut-initqueue[665]: 0 00 00 0
> > > > 0  0 --:--:-- --:--:-- --:--:-- 0   0 00
> > > >   00 0  0  0 --:--:-- --:--:-- --:--:--
> > > > 0
> > > > [7.466511] dracut-initqueue[665]: curl: (9) Server denied you to
> > > change
> > > > to the given directory
> > > > [7.468632] dracut-initqueue[665]: Warning: Downloading '
> > > >
> ftp://root:pass4root@172.17.51.126/var/ftp/linins/LiveOS/squashfs.img'
> > f
> > > > ailed!
> > > >
> > > >
> > > > On Fri, Mar 25, 2016 at 10:05 AM, Grzegorz Powiedziuk <
> > > > gpowiedz...@gmail.com
> > > > > wrote:
> > > >
> > > > > update,
> > > > > And your ins.repo should be pointing to that directory where iso is
> > > > mounted
> > > > > on ftp so:
> > > > > ftp://./linins/  not to the iso image.
> > > > > Gregory
> > > > >
> > > > > 2016-03-25 13:02 GMT-04:00 Grzegorz Powiedziuk <
> > gpowiedz...@gmail.com
> > > >:
> > > > >
> > > > > > the ISO image have to be mounted with a loop back device  for
> > example
> > > > > > mount -o loop /downloads/rhel-server-7.2-s390x-dvd.iso
> > > /var/ftp/linins/
> > > > > > And then you ftp into it so you can see actual content of iso
> image
> > > > > > Gregory
> > > > > >
> > > > > >
> > > > > > 2016-03-25 12:57 GMT-04:00 Tom Huegel <tehue...@gmail.com>:
> > > > > >
> > > > > >> Background:
> > > > > >> I have FEDORA F23 running and it is working well.
> > > > > >> I want to

Re: ? FTP permissions on FEDORA ?

2016-03-25 Thread Grzegorz Powiedziuk
ok, now it is probably one of the many very common "little" issues with
paths and proper syntax.
Take the exact path from your "inst.repo", paste it into your web
browser, and make sure that you can see the .treeinfo file there. Can
you?
These installers are very sensitive to little things like a missing or
extra "/" at the end of the path.

Also, another way to debug this is to look at the FTP server's log file
while you run the install.
In the log you should see what, from the FTP server's perspective, the
Linux installer is trying to open.
This usually gives me an idea of what I am doing wrong.
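A quick way to run both checks from a shell instead of a browser. The host, credentials, and path below are the examples from this thread (substitute your own), and the vsftpd log location depends on its configuration:

```shell
# Can the installer's exact inst.repo URL actually reach .treeinfo?
curl -s "ftp://root:pass4root@172.17.51.126/linins/.treeinfo" | head -5

# Meanwhile, watch the FTP server's own view of what the installer requests:
tail -f /var/log/vsftpd.log
```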


2016-03-25 15:00 GMT-04:00 Tom Huegel <tehue...@gmail.com>:

> Thank you Grzegorz that information is most appreciated.
>  I am sure that would have been my next problem.
>
> Even after your suggested changes I still get this message..
>
> [7.441940] dracut-initqueue[665]: Warning: can't find installer
> mainimage path in .treeinfo
> [7.448315] dracut-initqueue[665]: % Total% Received % Xferd
> Average Speed   TimeTime Time  Current
> [7.448810] dracut-initqueue[665]: Dload  Upload   Total   Spent
> Left  Speed
> [7.466071] dracut-initqueue[665]: 0 00 00 0
> 0  0 --:--:-- --:--:-- --:--:-- 0   0 00
>   00 0  0  0 --:--:-- --:--:-- --:--:--
> 0
> [7.466511] dracut-initqueue[665]: curl: (9) Server denied you to change
> to the given directory
> [7.468632] dracut-initqueue[665]: Warning: Downloading '
> ftp://root:pass4root@172.17.51.126/var/ftp/linins/LiveOS/squashfs.img' f
> ailed!
>
>
> On Fri, Mar 25, 2016 at 10:05 AM, Grzegorz Powiedziuk <
> gpowiedz...@gmail.com
> > wrote:
>
> > update,
> > And your ins.repo should be pointing to that directory where iso is
> mounted
> > on ftp so:
> > ftp://./linins/  not to the iso image.
> > Gregory
> >
> > 2016-03-25 13:02 GMT-04:00 Grzegorz Powiedziuk <gpowiedz...@gmail.com>:
> >
> > > the ISO image have to be mounted with a loop back device  for example
> > > mount -o loop /downloads/rhel-server-7.2-s390x-dvd.iso /var/ftp/linins/
> > > And then you ftp into it so you can see actual content of iso image
> > > Gregory
> > >
> > >
> > > 2016-03-25 12:57 GMT-04:00 Tom Huegel <tehue...@gmail.com>:
> > >
> > >> Background:
> > >> I have FEDORA F23 running and it is working well.
> > >> I want to use this as a FTP server for installing other distros ,,
> > RedHat
> > >> and SUSE.
> > >> I've downloaded the ISO's from the respective vendors.
> > >>
> > >> On F23 I have started vsftp, stopped the firewall, created a directory
> > in
> > >> /var/ftp/ called linins, mounted the ISO's there, chmod a=rwx
> > >> /var/ftp/linins
> > >>
> > >> Everything looks good.
> > >> From CMS I can FTP to the directory and download files.
> > >>
> > >> The Problem:
> > >> When I try to install either of the other OS's I get this error
> message:
> > >> " server denied you to change to the given directory"
> > >>
> > >> My path looks something like this:
> > >> inst.repo=
> > >>
> > >>
> >
> ftp://root:mypasswordt@172.17.51.126/var/ftp/linins/rhel-server-7.2-s390x-dvd.iso
> > >>  <==RedHat
> > >>
> > >> Any ideas what I am missing?
> > >> Thanks
> > >> Tom
> > >>
> > >> --
> > >> For LINUX-390 subscribe / signoff / archive access instructions,
> > >> send email to lists...@vm.marist.edu with the message: INFO LINUX-390
> > or
> > >> visit
> > >> http://www.marist.edu/htbin/wlvindex?LINUX-390
> > >> --
> > >> For more information on Linux on System z, visit
> > >> http://wiki.linuxvm.org/
> > >>
> > >
> > >
> >
> > --
> > For LINUX-390 subscribe / signoff / archive access instructions,
> > send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or
> > visit
> > http://www.marist.edu/htbin/wlvindex?LINUX-390
> > --
> > For more information on Linux on System z, visit
> > http://wiki.linuxvm.org/
> >
>
> --
> For LINUX-390 subscribe / signoff / archive access instructions,
> send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or
> visit
> http://www.marist.edu/htbin/wlvindex?LINUX-390
> --
> For more information on Linux on System z, visit
> http://wiki.linuxvm.org/
>



Re: ? FTP permissions on FEDORA ?

2016-03-25 Thread Grzegorz Powiedziuk
Update:
your inst.repo should be pointing to the directory where the ISO is
mounted on the FTP server, so:
ftp://./linins/  - not to the ISO image itself.
Gregory

2016-03-25 13:02 GMT-04:00 Grzegorz Powiedziuk <gpowiedz...@gmail.com>:

> the ISO image have to be mounted with a loop back device  for example
> mount -o loop /downloads/rhel-server-7.2-s390x-dvd.iso /var/ftp/linins/
> And then you ftp into it so you can see actual content of iso image
> Gregory
>
>
> 2016-03-25 12:57 GMT-04:00 Tom Huegel <tehue...@gmail.com>:
>
>> Background:
>> I have FEDORA F23 running and it is working well.
>> I want to use this as a FTP server for installing other distros ,, RedHat
>> and SUSE.
>> I've downloaded the ISO's from the respective vendors.
>>
>> On F23 I have started vsftp, stopped the firewall, created a directory in
>> /var/ftp/ called linins, mounted the ISO's there, chmod a=rwx
>> /var/ftp/linins
>>
>> Everything looks good.
>> From CMS I can FTP to the directory and download files.
>>
>> The Problem:
>> When I try to install either of the other OS's I get this error message:
>> " server denied you to change to the given directory"
>>
>> My path looks something like this:
>> inst.repo=
>>
>> ftp://root:mypasswordt@172.17.51.126/var/ftp/linins/rhel-server-7.2-s390x-dvd.iso
>>  <==RedHat
>>
>> Any ideas what I am missing?
>> Thanks
>> Tom
>>
>> --
>> For LINUX-390 subscribe / signoff / archive access instructions,
>> send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or
>> visit
>> http://www.marist.edu/htbin/wlvindex?LINUX-390
>> --
>> For more information on Linux on System z, visit
>> http://wiki.linuxvm.org/
>>
>
>



Re: ? FTP permissions on FEDORA ?

2016-03-25 Thread Grzegorz Powiedziuk
The ISO image has to be mounted with a loopback device, for example:
mount -o loop /downloads/rhel-server-7.2-s390x-dvd.iso /var/ftp/linins/
Then when you FTP into it, you can see the actual content of the ISO
image.
Gregory


2016-03-25 12:57 GMT-04:00 Tom Huegel :

> Background:
> I have FEDORA F23 running and it is working well.
> I want to use this as a FTP server for installing other distros ,, RedHat
> and SUSE.
> I've downloaded the ISO's from the respective vendors.
>
> On F23 I have started vsftp, stopped the firewall, created a directory in
> /var/ftp/ called linins, mounted the ISO's there, chmod a=rwx
> /var/ftp/linins
>
> Everything looks good.
> From CMS I can FTP to the directory and download files.
>
> The Problem:
> When I try to install either of the other OS's I get this error message:
> " server denied you to change to the given directory"
>
> My path looks something like this:
> inst.repo=
>
> ftp://root:mypasswordt@172.17.51.126/var/ftp/linins/rhel-server-7.2-s390x-dvd.iso
>  <==RedHat
>
> Any ideas what I am missing?
> Thanks
> Tom
>
> --
> For LINUX-390 subscribe / signoff / archive access instructions,
> send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or
> visit
> http://www.marist.edu/htbin/wlvindex?LINUX-390
> --
> For more information on Linux on System z, visit
> http://wiki.linuxvm.org/
>



Re: LINUX SSH problem.

2016-03-03 Thread Grzegorz Powiedziuk
In fedora21 (at least on x86) they switched to journualctl with logs and
stuff no longer goes to regular log files.
I am not sure with s390 fedora they did the same thing.

Try something like
journalctl -u sshd --since=yesterday | tail -100

I would also try to log in from a different Linux machine with "ssh -vvv
x.x.x.x" for verbose output and see if there is anything useful there.

Someone mentioned problems with RW permissions on the home directory.
That makes sense.
Also, if the home filesystem is full or mounted read-only, you might
have the same issue.
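A few quick checks along those lines, runnable from the z/VM console login that still works (the user name tom is an assumption):

```shell
ls -ld /home/tom         # sshd can reject group/world-writable home dirs
df -h /home              # is the filesystem holding home full?
mount | grep /home       # is it mounted read-only (ro)?
getent passwd tom        # does the login shell in the last field exist?
```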

Gregory


2016-03-03 12:21 GMT-05:00 Offer Baruch :

> I think that ssh does not allow for your home directory to be write enabled
> for the group (i have seen this on redhat).
> Make sure your home directory has the correct permissions...
>
> Good luck
> Offer Baruch
> On Mar 3, 2016 5:36 PM, "Tom Huegel"  wrote:
>
> > I wish I had some idea of what I might have changed. Intentionally I
> > haven't changed anything.
> > There are no new messages in the /var/log/messages file after a failed
> > logon.
> >
> > Comparing /etc/parm.d/sshd to another system that allows SSH logons ...
> > they are identical.
> > cat
> > /etc/pam.d/sshd
> >
> > #%PAM-1.0
> >
> > auth   required
> > pam_sepermit.so
> > auth   substack
> > password-auth
> > auth   include
> > postlogin
> > # Used with polkit to reauthorize users in remote
> > sessions
> > -auth  optional pam_reauthorize.so
> > prepare
> > accountrequired
> > pam_nologin.so
> > accountinclude
> > password-auth
> > password   include
> > password-auth
> > # pam_selinux.so close should be the first session
> > rule
> > sessionrequired pam_selinux.so
> > close
> > sessionrequired
> > pam_loginuid.so
> > # pam_selinux.so open should only be followed by sessions to be executed
> in
> > the user context
> > sessionrequired pam_selinux.so open
> > env_params
> > sessionoptional pam_keyinit.so force
> > revoke
> > sessioninclude
> > password-auth
> > sessioninclude
> > postlogin
> > # Used with polkit to reauthorize users in remote
> > sessions
> > -session   optional pam_reauthorize.so
> > prepare
> >
> >
> >
> >
> >
> >
> > On Thu, Mar 3, 2016 at 7:11 AM, van Sleeuwen, Berry <
> > berry.vansleeu...@atos.net> wrote:
> >
> > > Hi Tom,
> > >
> > > Could it be the pam configuration for ssh is changed? Perhaps the
> > password
> > > checking in pam?
> > >
> > > I once had such an issue when I made a typo in /etc/pam.d/sshd. After
> > this
> > > I couldn't login anymore. It showed up in the console log as "Error:
> PAM:
> > > Module is unknown for  from .". (This might be in
> > > /var/log/messages as well.)
> > >
> > > I had to correct the typo using "sed" in the Linux console.
> > >
> > > Met vriendelijke groet/With kind regards/Mit freundlichen Grüßen,
> > > Berry van Sleeuwen
> > >
> > > -Original Message-
> > > From: Linux on 390 Port [mailto:LINUX-390@VM.MARIST.EDU] On Behalf Of
> > Tom
> > > Huegel
> > > Sent: Thursday, March 03, 2016 3:23 PM
> > > To: LINUX-390@VM.MARIST.EDU
> > > Subject: LINUX SSH problem.
> > >
> > > This seems strange to me (a LINUX novice), but I have a FEDORA f21 system
> > > that has been working fine until recently.
> > > LINUX starts up just fine, but when I try to SSH (PuTTY) into it I get
> > > the initial logon screen and the password is always rejected.
> > > From the z/VM console I can log on using the same password.
> > >
> > > I must have touched something *&&*&%.
> > > Any idea how to fix it?
> > > Thanks
> > > Tom
> > >
> > > --
> > > For LINUX-390 subscribe / signoff / archive access instructions, send
> > > email to lists...@vm.marist.edu with the message: INFO LINUX-390 or
> > visit
> > > http://www.marist.edu/htbin/wlvindex?LINUX-390
> > > --
> > > For more information on Linux on System z, visit
> > http://wiki.linuxvm.org/
> > > This e-mail and the documents attached are confidential and intended
> > > solely for the addressee; it may also be privileged. If you receive
> this
> > > e-mail in error, please notify the sender immediately and destroy it.
> As
> > > its integrity cannot be secured on the Internet, Atos’ liability cannot
> > be
> > > triggered for the message content. Although the sender endeavours to
> > > maintain a computer virus-free network, the sender does not warrant
> that
> > > this transmission is virus-free and will not be liable for any damages
> > > resulting from any virus transmitted. On all offers and agreements
> under
> > > which Atos Nederland B.V. supplies goods and/or services of whatever
> > > nature, the Terms of Delivery from Atos Nederland B.V. exclusively
> apply.
> > > The Terms of Delivery shall be promptly submitted to you on your
> request.
> > >
> >
> > 

Re: TSM backup LAN connection problem

2016-02-09 Thread Grzegorz Powiedziuk
Is it only telnet acting like this? What about ping or netcat (nc)? Does it
respond promptly?
Delays with logins (ssh, but I think telnet does the same thing) are usually
caused by messed-up DNS and reverse lookup entries. But here it is different,
because it works fine after the first try.

I think the default timeout for arp entries in linux is 60 sec
(cat /proc/sys/net/ipv4/neigh/default/gc_stale_time)
you can run "arp -n" to see if it is still there

Did you double check ip configuration and make sure that network, subnet
and broadcast addresses are correct?

Regards
Gregory

2016-02-09 4:04 GMT-05:00 Agblad Tore :

> Hi, just a long shot perhaps, checking if anyone else have had this
> problem and solved it:
>
> Two Linux servers running TSM(backup server) software, one x86 and on
> s390x (RHEL 6.7)
>
> Both have an extra NIC connected to a separate backup LAN, separate OSA
> adapters + separate cables all the way.
>
> Connection test method: telnet ipaddress 1500
>
> Connect from s390x via that vlan and port 1500 into the x86 server works,
> but takes 5-10 seconds first time. And again 5-10 seconds if no traffic for
> about 5 minutes.
>
> Connect from x86 same method does not work, unless a connection from s390x
> was made the last 5 minutes.
>
> It seems the arp cache in the x86 server is updated and that server finds
> its way after that, but the timeout is 5 minutes, so after that it does not
> find its way.
>
> I took a tcpdump on that backup lan interface, got some help understanding
> it using wireshark, and obviously the x86 connection attempts do initiate an
> arp query to the s390x server, which also replies with its mac address, as it
> should. Still this does not help.
> The route tables also look as expected, and should not cause a problem.
>
> Anyone having seen this problem ?
> If you also have the solution you will make my day :-)
>
> BR /Tore
>
> 
> Tore Agblad
> zOpen, IT Services
>
> Volvo Group Headquarters
> Corporate Process & IT
> SE-405 08, Gothenburg  Sweden
> E-mail: tore.agb...@volvo.com
> http://www.volvo.com/volvoit/global/en-gb/
>
> --
> For LINUX-390 subscribe / signoff / archive access instructions,
> send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or
> visit
> http://www.marist.edu/htbin/wlvindex?LINUX-390
> --
> For more information on Linux on System z, visit
> http://wiki.linuxvm.org/
>

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/


cp3kvmxt / monwrite and storage info

2016-01-05 Thread Grzegorz Powiedziuk
Hello,
I am trying to capture performance data with cp3kvmxt and I am having
trouble getting anything more than CPU utilization.
The person who imports the EDF file says that his tool can't see the storage
configuration for the other LPARs, and that it should. He can see storage
utilization only for the local z/VM instance.
The CPU utilization and configuration are there for all LPARs, though.

Does the monwrite data even contain storage information from other LPARs? I
don't remember seeing this information anywhere in the Performance Toolkit,
and it uses the same mondcss after all. In perfkit, all I can see is CPU
stats from the other LPARs (menu 8).


Here is what I do.

I have a user ID with the following directory statements, required for using
mondcss:

   IUCV *MONITOR MSGLIMIT 255
   NAMESAVE MONDCSS

From that user ID, I enabled all the necessary monitor domains:

q monitor
MONITOR EVENT ACTIVE   BLOCK 4   PARTITION 8192
MONITOR DCSS NAME - MONDCSS
CONFIGURATION SIZE   68 LIMIT 1 MINUTES
CONFIGURATION AREA IS FREE
USERS CONNECTED TO *MONITOR - PERFSVM
  PERSMAPI
MONITOR   DOMAIN ENABLED
PROCESSOR DOMAIN ENABLED
STORAGE   DOMAIN ENABLED
SCHEDULER DOMAIN DISABLED
SEEKS DOMAIN DISABLED
USER  DOMAIN ENABLED
   ALL USERS ENABLED
I/O   DOMAIN ENABLED
   PCIF CLASS ENABLED
   ALL DEVICES ENABLED
NETWORK   DOMAIN ENABLED
ISFC  DOMAIN ENABLED
APPLDATA  DOMAIN ENABLED
   ALL USERS ENABLED
SSI   DOMAIN ENABLED
MONITOR SAMPLE ACTIVE
   INTERVAL1 MINUTES
   RATE 2.00 SECONDS
MONITOR DCSS NAME - MONDCSS
CONFIGURATION SIZE 4096 LIMIT 1 MINUTES
CONFIGURATION AREA IS FREE
USERS CONNECTED TO *MONITOR - PERFSVM
  PERSMAPI
MONITOR   DOMAIN ENABLED
SYSTEM    DOMAIN ENABLED
PROCESSOR DOMAIN ENABLED NOCPUMFC
STORAGE   DOMAIN ENABLED
USER  DOMAIN ENABLED
   ALL USERS ENABLED
I/O   DOMAIN ENABLED
   PCIF CLASS ENABLED
   ALL DEVICES ENABLED
NETWORK   DOMAIN ENABLED
ISFC  DOMAIN ENABLED
APPLDATA  DOMAIN ENABLED
   ALL USERS ENABLED
SSI   DOMAIN ENABLED



The mondcss segment size, according to my calculations, is pretty big:
q nss map
..
0012 MONDCSS  CPDCSS  N/A  09000  0CFFF
..
0x0CFFF - 0x09000 = 0x3FFF = 16383 (decimal), and 16383+1 = 16384 pages at 4K
each gives 64MB, if I am doing the math right.
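The arithmetic can be double-checked in a couple of lines (the page range is taken from the q nss map output above):

```python
# MONDCSS spans page frames 0x09000 through 0x0CFFF inclusive, 4 KiB each.
begpag, endpag = 0x09000, 0x0CFFF
pages = endpag - begpag + 1      # number of 4 KiB pages in the segment
size_mb = pages * 4 // 1024      # 4 KiB per page -> MiB
print(pages, size_mb)            # 16384 64
```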


I run CP3KVMXT with the mondcss argument and specify the intervals and
timeframes manually (30-minute intervals for a 5-hour total run).

When it is done, it creates a bunch of files like:

D111815  ACTVUSRS A1
D111815  BCUDATA  A1
D111815  DEBUGA1
D111815  EDF  A1
D111815  SAMPSA1

There are no errors.
I've also tried running monwrite first and using the resulting file as input
for cp3kvmxt. Also no luck.

The EDF file is the one I send to the person with the tool to analyze.

What am I doing wrong?

Thank you
Gregory

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/


hyptop question

2016-01-04 Thread Grzegorz Powiedziuk
Hi,
When I run hyptop I can see performance stats for virtual machines running
in the same z/VM LPAR.

Is it possible to see stats for virtual machines in other z/VM LPARs? (I
assume the answer is "no" when I think about it, but I want to make sure.)

At least I should be able to see stats for the other LPARs from the PR/SM
perspective, right? But I don't, and I am not sure why.


My virtual machine has privilege classes A-G, the global performance data
control on the SE is "on", debugfs is mounted, and s390_hypfs is mounted.

I can see diag_204 and diag_2fc inside of s390_hypfs and both have read
permissions.
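One quick sanity check is to confirm both mounts really show up in /proc/mounts. A minimal sketch, parsing a made-up sample instead of the live file (the mount points in the sample are assumptions; on a real host you would read /proc/mounts itself):

```python
sample = """\
debugfs /sys/kernel/debug debugfs rw,relatime 0 0
s390_hypfs /sys/hypervisor/s390 s390_hypfs rw,relatime 0 0
"""

def mounted_types(mounts_text: str) -> set:
    """Collect the filesystem-type column from /proc/mounts-style text."""
    return {line.split()[2] for line in mounts_text.splitlines() if line.split()}

# hyptop needs both debugfs and s390_hypfs mounted
print({"debugfs", "s390_hypfs"} <= mounted_types(sample))  # True
```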

OS: RHEL7

What am I missing?
Thanks!
Gregory

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/


Re: hipersockets

2015-11-20 Thread Grzegorz Powiedziuk
ohhh I see what you are saying (I think so).

I was thinking about the standard usage of a vswitch bridge, so a scenario where
all guests have real dedicated hipersockets and the vswitch has one of these
hipersockets as well. This gives the hipersocket virtual machines a way to talk
to each other (via the hipersocket network and shared chpid), plus it gives them
access to the outside network via the vswitch.

But you are saying (I think) that I could have ONLY a vswitch (on all LPARs)
using the hipersocket bridge port, with all the guests connected to the same
vswitch (layer 2, so Oracle should be happy) using virtual qdio NICs? That
sounds very cool. I don't have RACF yet, but it will give me the standard
vswitch access-list control of VLANs, which is something.

In a scenario like this, I guess I don't need to specify an uplink device for
the vswitch, so I could keep it all inside the CEC.

I wonder how much it would impact performance.

Thanks!
Gregory



2015-11-20 9:00 GMT-05:00 David Kreuter <dkreu...@vm-resources.com>:

> Gregory: It does help. As a VSWITCH it can be protected by RACF hence
> the VLANs can be RACF protected too.
> Another advantage is that it uses only one hipersocket triplet whereas
> dedicated hipersocket will end up using one triplet per Linux virtual.
> You can even use the same hiper triplet on each LPAR.
> There is a limit to how many LPARs can connect to the hiper. I've been
> scalded by this a few times.
> David
>
>
>  Original Message ----
> Subject: Re: hipersockets
> From: Grzegorz Powiedziuk <gpowiedz...@gmail.com>
> Date: Fri, November 20, 2015 8:47 am
> To: LINUX-390@VM.MARIST.EDU
>
> Thanks Alan.
> HiperSocket VSWITCH Bridge will not help when it comes to isolation,
> right? Vswitch simply acts as a bridge for the hipersocket network, using
> one of the real hipersocket devices as one of its own interfaces (bridge
> port). Via this bridge Hipersocket network gets access to external
> network
> but doesn't give more control on who can talk to who inside of CEC, does
> it?
> thanks
> Gregory
>
>
>
> 2015-11-20 0:47 GMT-05:00 Alan Altmark <alan_altm...@us.ibm.com>:
>
> > On Thursday, 11/19/2015 at 08:35 GMT, Grzegorz Powiedziuk
> > <gpowiedz...@gmail.com> wrote:
> > > I thought about doing vswitch but then AFAIK I would end up with
> > > virtual hipersockets on linux guest.
> >
> > Linux guests can use real HiperSockets with the HiperSocket VSWITCH
> bridge
> > on z/VM. Their traffic will automatically be bridged to a physical LAN
> > that can be accessed by z/OS. z/OS doesn't support the HiperSocket
> > technology that would let it participate in a direct HiperSocket
> > connection with the Linux guests on the bridge.
> >
> > Alan Altmark
> >
> > Senior Managing z/VM and Linux Consultant
> > Lab Services System z Delivery Practice
> > IBM Systems & Technology Group
> > ibm.com/systems/services/labservices
> > office: 607.429.3323
> > mobile; 607.321.7556
> > alan_altm...@us.ibm.com
> > IBM Endicott
> >
> > --
> > For LINUX-390 subscribe / signoff / archive access instructions,
> > send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or
> > visit
> > http://www.marist.edu/htbin/wlvindex?LINUX-390
> > --
> > For more information on Linux on System z, visit
> > http://wiki.linuxvm.org/
> >
>
> --
> For LINUX-390 subscribe / signoff / archive access instructions,
> send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or
> visit
> http://www.marist.edu/htbin/wlvindex?LINUX-390
> --
> For more information on Linux on System z, visit
> http://wiki.linuxvm.org/
>
> --
> For LINUX-390 subscribe / signoff / archive access instructions,
> send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or
> visit
> http://www.marist.edu/htbin/wlvindex?LINUX-390
> --
> For more information on Linux on System z, visit
> http://wiki.linuxvm.org/
>

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/


Re: hipersockets

2015-11-20 Thread Grzegorz Powiedziuk
Thanks Alan.
HiperSocket VSWITCH Bridge will not help when it comes to isolation, right?
The vswitch simply acts as a bridge for the hipersocket network, using one of
the real hipersocket devices as one of its own interfaces (a bridge port).
Via this bridge the hipersocket network gets access to the external network,
but it doesn't give more control over who can talk to whom inside the CEC,
does it?
thanks
Gregory



2015-11-20 0:47 GMT-05:00 Alan Altmark <alan_altm...@us.ibm.com>:

> On Thursday, 11/19/2015 at 08:35 GMT, Grzegorz Powiedziuk
> <gpowiedz...@gmail.com> wrote:
> > I thought about doing vswitch but then AFAIK I would end up with
> > virtual hipersockets on linux guest.
>
> Linux guests can use real HiperSockets with the HiperSocket VSWITCH bridge
> on z/VM.  Their traffic will automatically be bridged to a physical LAN
> that can be accessed by z/OS.   z/OS doesn't support the HiperSocket
> technology that would let it participate in a direct HiperSocket
> connection with the Linux guests on the bridge.
>
> Alan Altmark
>
> Senior Managing z/VM and Linux Consultant
> Lab Services System z Delivery Practice
> IBM Systems & Technology Group
> ibm.com/systems/services/labservices
> office: 607.429.3323
> mobile; 607.321.7556
> alan_altm...@us.ibm.com
> IBM Endicott
>
> --
> For LINUX-390 subscribe / signoff / archive access instructions,
> send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or
> visit
> http://www.marist.edu/htbin/wlvindex?LINUX-390
> --
> For more information on Linux on System z, visit
> http://wiki.linuxvm.org/
>

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/


hipersockets

2015-11-19 Thread Grzegorz Powiedziuk
I am planning to use hipersockets for internal communication between the
oracle nodes in each RAC cluster (the oracle nodes run in different LPARs).
I've never used real hipersockets before, so I am not sure if I understand
this correctly.

From what I've learned so far, in order to achieve this we need to have a
shared chpid between LPARs. Hipersockets on the same chpid can communicate
with each other.
Ok, we've done that. We have defined a set of hipersockets on one chpid for
every LPAR and it works. Linux in one LPAR can talk to another Linux in a
different LPAR.
But we will have many more "pairs" like that. How should we set up the
networking? First, I thought about just assigning a different network to
every pair, for example:

Cluster 1
Linux1 - 192.168.100.1/28  (or even smaller)
Linux2 - 192.168.100.2/28

Cluster 2
Linux3 - 192.168.200.1/28
Linux4 - 192.168.200.2/28

So only Linux1 can talk to Linux2, and only Linux3 can talk to Linux4.

But... that's not really secure, is it? If someone hacks into Linux3 and
changes the IP address of its hsi0 interface to match an IP in cluster 1's
network, things might go wrong. They are on the same chpid after all.
Do I need to have a separate chpid for every cluster? Doesn't really make
sense, does it?
Am I missing something?

Thanks
Gregory

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/


Re: hipersockets

2015-11-19 Thread Grzegorz Powiedziuk
Thank you for a really quick answer Alan.
So I did get it right, more or less. I didn't know that I could do VLANs,
which will make things cleaner to some extent.

But I was hoping for a different answer when it comes to security.
We will have at least non-prod and prod environments on separate chpids
then.

Thank you
Gregory


2015-11-19 15:05 GMT-05:00 Alan Altmark <alan_altm...@us.ibm.com>:

> On Thursday, 11/19/2015 at 07:38 GMT, Grzegorz Powiedziuk
> <gpowiedz...@gmail.com> wrote:
> > From what I've learned so far, In order to achieve this, we need to have
> a
> > shared chpid  between LPARS. Hipersockets on the same chpid can
> communicate
> > with each other.
>
> Hosts using the same VLAN on the same HiperSocket chpid can talk to each
> other.  There are no controls on the VLAN ID that a host is permitted to
> use, so from a security perspective, don't rely on HiperSocket VLAN
> controls.
>
> > Ok, we've done that. We have defined a set of hipersockets on one chipd
> for
> > every LPAR and it works. Linux in one LPAR can talk to another linux in
> > different lpar.
> :
> > Do I need to have a separate chpid for every cluster? Doesn't really
> make
> > sense, does it?
> > Am I missing something?
>
> It depends entirely on your security posture.  If you need enforced
> isolation of each pair, then you need one chpid per pair.
>
> Alan Altmark
>
> Senior Managing z/VM and Linux Consultant
> Lab Services System z Delivery Practice
> IBM Systems & Technology Group
> ibm.com/systems/services/labservices
> office: 607.429.3323
> mobile; 607.321.7556
> alan_altm...@us.ibm.com
> IBM Endicott
>
> --
> For LINUX-390 subscribe / signoff / archive access instructions,
> send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or
> visit
> http://www.marist.edu/htbin/wlvindex?LINUX-390
> --
> For more information on Linux on System z, visit
> http://wiki.linuxvm.org/
>

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/


Re: hipersockets

2015-11-19 Thread Grzegorz Powiedziuk
Thanks David.
I thought about doing a vswitch, but then AFAIK I would end up with virtual
hipersockets on the linux guests. And I've read in IBM's redbook for
oracle 12:

IBM HiperSockets™ are certified and
supported for the private network. Only a network that is configured with
*real* HiperSockets is
possible, as z/VM guest LAN HiperSockets cannot be configured on layer 2,
which is required
for ARP.


Gregory


2015-11-19 15:20 GMT-05:00 David Kreuter <dkreu...@vm-resources.com>:

> Hi - I've done the hipersocket VLAN implementation. It works well and of
> course Alan's comments are correct.
>
> Another approach I've used is to create a VSWITCH on each LPAR using the
> same set of OSAs. Now when you use VLANs on this VSWITCH RACF can be
> involved for better protection.
>
> OK won't be as fast as hipersocket but it doesn't go far out of the box
> either.
> David Kreuter
>
>
>
>  Original Message 
> Subject: Re: hipersockets
> From: Alan Altmark <alan_altm...@us.ibm.com>
> Date: Thu, November 19, 2015 3:05 pm
> To: LINUX-390@VM.MARIST.EDU
>
> On Thursday, 11/19/2015 at 07:38 GMT, Grzegorz Powiedziuk
> <gpowiedz...@gmail.com> wrote:
> > From what I've learned so far, In order to achieve this, we need to have
> a
> > shared chpid between LPARS. Hipersockets on the same chpid can
> communicate
> > with each other.
>
> Hosts using the same VLAN on the same HiperSocket chpid can talk to each
>
> other. There are no controls on the VLAN ID that a host is permitted to
> use, so from a security perspective, don't rely on HiperSocket VLAN
> controls.
>
> > Ok, we've done that. We have defined a set of hipersockets on one chipd
> for
> > every LPAR and it works. Linux in one LPAR can talk to another linux in
> > different lpar.
> :
> > Do I need to have a separate chpid for every cluster? Doesn't really
> make
> > sense, does it?
> > Am I missing something?
>
> It depends entirely on your security posture. If you need enforced
> isolation of each pair, then you need one chpid per pair.
>
> Alan Altmark
>
> Senior Managing z/VM and Linux Consultant
> Lab Services System z Delivery Practice
> IBM Systems & Technology Group
> ibm.com/systems/services/labservices
> office: 607.429.3323
> mobile; 607.321.7556
> alan_altm...@us.ibm.com
> IBM Endicott
>
> --
> For LINUX-390 subscribe / signoff / archive access instructions,
> send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or
> visit
> http://www.marist.edu/htbin/wlvindex?LINUX-390
> --
> For more information on Linux on System z, visit
> http://wiki.linuxvm.org/
>
> --
> For LINUX-390 subscribe / signoff / archive access instructions,
> send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or
> visit
> http://www.marist.edu/htbin/wlvindex?LINUX-390
> --
> For more information on Linux on System z, visit
> http://wiki.linuxvm.org/
>

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/


Re: Migrate zLinux off zVM into standalone LPARs?

2015-11-17 Thread Grzegorz Powiedziuk
If you can't do what Alan said, you can do DDR from z/VM, or dd in a Linux
that has access to both the old and the new storage (the old Linux should be
down during the clone process).

There is a good chance that it will IPL fine from the new storage.
As Mark mentioned, it might depend on how you set it up in fstab. But chances
are good that you use one of these in fstab:

/dev/dasdXn (asking for trouble)
/dev/disk/by-path (I think the most common for non-LVM?)
/dev/mapper/ (if LVM is used)

My bet is that you have disk/by-path and/or /dev/mapper/.

If that's the case, no changes to fstab should be needed. It should IPL
without a problem.
When I think about it, disk/by-uuid should also work. Correct me if I am
wrong, but it's just an ID of the partition, which will be cloned with all
the data.
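A rough classifier for those fstab device-spec styles, reflecting the ranking above (the sample fstab lines and the wording are mine, purely for illustration):

```python
def spec_kind(device: str) -> str:
    """Classify how well an fstab device spec survives cloning to new DASD."""
    if device.startswith(("/dev/mapper/", "/dev/disk/by-uuid", "UUID=")):
        return "survives a clone"
    if device.startswith("/dev/disk/by-path"):
        return "survives a clone if the device address stays the same"
    return "asking for trouble"  # e.g. raw /dev/dasdb1 names can reorder

for line in ["/dev/mapper/vg0-root / ext4 defaults 0 1",
             "/dev/disk/by-path/ccw-0.0.0100-part1 /boot ext4 defaults 0 2",
             "/dev/dasdb1 /data ext4 defaults 0 2"]:
    device = line.split()[0]
    print(device, "->", spec_kind(device))
```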

If you use FCP, then it's a different and longer story.

Gregory Powiedziuk

2015-11-17 13:24 GMT-05:00 Alan Altmark :

> On Tuesday, 11/17/2015 at 10:14 EST, Ray Mansell  wrote:
> > Speaking of disk address changes... how does one cope with that? We have
> > a new DS8700 to replace our ancient DS8000s, and I'm wondering what is
> > the best way to migrate our LPAR Linux servers to the new box? I'm sure
> > this must have been done before, so I'd be very interested to learn how
> > others have managed this.
>
> Connect the two together, and let the controllers replicate that data to
> the new one, then pull/push.  You obviously need to do this when you can
> take the whole environment down.
>
> Alan Altmark
>
> Senior Managing z/VM and Linux Consultant
> Lab Services System z Delivery Practice
> IBM Systems & Technology Group
> ibm.com/systems/services/labservices
> office: 607.429.3323
> mobile; 607.321.7556
> alan_altm...@us.ibm.com
> IBM Endicott
>
> --
> For LINUX-390 subscribe / signoff / archive access instructions,
> send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or
> visit
> http://www.marist.edu/htbin/wlvindex?LINUX-390
> --
> For more information on Linux on System z, visit
> http://wiki.linuxvm.org/
>

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/


Re: LVM usage

2015-10-06 Thread Grzegorz Powiedziuk
I would also like to mention that LVM makes such systems much easier to
manage, especially with FCP involved.
Imagine that you break your multipath config, for example, and lose your
“mpath” names. If you have LVM, you can still boot. LVM will just scan all
your paths, pick the first (or last, I can't remember) available one with a
proper label, and use it to bring up the VG and the logical volumes. Without
LVM, you will have a lot of work to do to get the system online again.
There were a few other scenarios, which I can't remember now, when LVM saved
me big time or made things a lot easier.
 
Gregory Powiedziuk



> On Oct 6, 2015, at 3:34 AM, van Sleeuwen, Berry  
> wrote:
> 
> Actually, those were just examples. The Samba and TSM guests have the biggest 
> LVM filesystems. We have a quite a few linux guests, most of them use LVM. 
> Ranging from a small webserver to a couple of oracle database machines with 
> LVM's up to 40G.
> 
> I don’t think it matters if we are talking zVM guests or native linux 
> machines. The usage of LVM depends on what storage is available. Mainframe 
> DASD typically doesn't have the large disksizes so the smaller disks require 
> LVM or any other solution that glues disk together. But even when the storage 
> solution has the required sizes available there can be other reasons for 
> using LVM. We use LVM primarily for two reasons. First of all to glue smaller 
> disks together and secondly to spread IO onto multiple disks.
> 
> We use LVM only for user filesystems. The system data remains on non-LVM 
> disks. Most of the user data is located in /srv that is located in an LVM.
> 
> Met vriendelijke groet/With kind regards/Mit freundlichen Grüßen,
> Berry van Sleeuwen
> 
> -Original Message-
> From: Linux on 390 Port [mailto:LINUX-390@VM.MARIST.EDU] On Behalf Of Sergey 
> Korzhevsky
> Sent: Monday, October 05, 2015 5:23 PM
> To: LINUX-390@VM.MARIST.EDU
> Subject: Re: LVM usage
> 
> Stories about TSM and Samba are great, but this is one installation for the 
> site and we are speaking in terms of z/VM, right?

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/


Re: zKVM installation problem

2015-10-05 Thread Grzegorz Powiedziuk
It’s a long shot, but was the disk purged, or did you reuse an old disk with
some leftovers on it?
I’ve seen weird python errors similar to these ones (unfortunately I didn’t
keep the traces to compare) in some linux distributions during installation
when I had a “dirty disk”.
And zKVM seems to be just another linux system.
Redhat was really bad about this. It just didn’t like dirty disks, even if it
claimed that it would format them (especially minidisks that just overlapped
older, bigger minidisks in the past, so just trash without a real partition
table).
In your trace, the error happens just one second after the “Formatting
disks…” message. There is no way it would have enough time to finish the
format. That’s why it popped into my mind.
I would try to format the disk with ickdsf or cpfmtxa before running the
install, just in case.

Gregory Powiedziuk




> On Oct 5, 2015, at 11:11 AM, Ray Mansell  wrote:
> 
> I'm trying to install zKVM in a virtual machine, but no matter what
> installation options I choose, I always get the following error:
> 
> 2015-10-03 14:39:40,900 - controller.controller - INFO - InstallProgress
> screen
> 2015-10-03 14:39:40,901 - controller.controller - INFO - Formatting disks...
> 2015-10-03 14:39:40,901 - controller.controller - INFO - Installing KVM
> for IBM z into disk dasda...
> 2015-10-03 14:39:41,041 - model.installfunctions - INFO - Get repodata_file
> 2015-10-03 14:39:41,088 - model.installfunctions - CRITICAL - Failed
> installSystem
> 2015-10-03 14:39:41,089 - model.installfunctions - CRITICAL -
> EXCEPTION:
> 2015-10-03 14:39:41,089 - model.installfunctions - CRITICAL - local
> variable 's' referenced before assignment
> 2015-10-03 14:39:41,089 - model.installfunctions - CRITICAL -
> Stacktrace:Traceback (most recent call last):
>  File "/opt/ibm/kvmibm-installer/model/installfunctions.py", line 357,
> in installSystem
>installPackages(rootDir, callback)
>  File "/opt/ibm/kvmibm-installer/model/installfunctions.py", line 749,
> in installPackages
>repodata_file = getRepodataFile(repo, logger)
>  File "/opt/ibm/kvmibm-installer/model/installfunctions.py", line 548,
> in getRepodataFile
>d = re.split('([\d\w]+-primary.sqlite.bz2)', s)
> UnboundLocalError: local variable 's' referenced before assignment
> 
> 2015-10-03 14:39:41,095 - controller.controller - CRITICAL - ZKVMError:
> [['KVMIBMIN70500', 'Error while installing packages.'], ('INSTALLER',
> 'INSTALLSYSTEM', 'INSTALL_MSG')]
> 
> Help? Please?
> 
> Ray...
> 
> --
> For LINUX-390 subscribe / signoff / archive access instructions,
> send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
> http://www.marist.edu/htbin/wlvindex?LINUX-390
> --
> For more information on Linux on System z, visit
> http://wiki.linuxvm.org/

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/


Re: zKVM installation problem

2015-10-05 Thread Grzegorz Powiedziuk
It’s a long shot but was the disk purged or you reused an old disk with some 
leftovers on it?
I’ve seen weird python errors similar to these ones (unfortunately I didn’t 
keep traces to compare) in some linux distribution during the installation if I 
had a “dirty disk”. 
And zKVM seems to be just another linux system. 
Redhat was really bad about this. It just didn’t like dirty disks even if it 
claimed that it will format them (especially minidisks which just overlapped 
older bigger minidisks in the past so just trash without a real partition 
table). 
In your trace, the error happens just one second after “formatting disks…” 
message. There is no way it would have enough time to finish format. 
So that’s why I popped into my mind. 
I would try to format it with ickdsf or cpfmtxa before running install, just in 
case. 

Gregory Powiedziuk




> On Oct 5, 2015, at 11:11 AM, Ray Mansell  wrote:
> 
> I'm trying to install zKVM in a virtual machine, but no matter what
> installation options I choose, I always get the following error:
> 
> 2015-10-03 14:39:40,900 - controller.controller - INFO - InstallProgress
> screen
> 2015-10-03 14:39:40,901 - controller.controller - INFO - Formatting disks...
> 2015-10-03 14:39:40,901 - controller.controller - INFO - Installing KVM
> for IBM z into disk dasda...
> 2015-10-03 14:39:41,041 - model.installfunctions - INFO - Get repodata_file
> 2015-10-03 14:39:41,088 - model.installfunctions - CRITICAL - Failed
> installSystem
> 2015-10-03 14:39:41,089 - model.installfunctions - CRITICAL -
> EXCEPTION:
> 2015-10-03 14:39:41,089 - model.installfunctions - CRITICAL - local
> variable 's' referenced before assignment
> 2015-10-03 14:39:41,089 - model.installfunctions - CRITICAL -
> Stacktrace:Traceback (most recent call last):
> File "/opt/ibm/kvmibm-installer/model/installfunctions.py", line 357,
> in installSystem
>   installPackages(rootDir, callback)
> File "/opt/ibm/kvmibm-installer/model/installfunctions.py", line 749,
> in installPackages
>   repodata_file = getRepodataFile(repo, logger)
> File "/opt/ibm/kvmibm-installer/model/installfunctions.py", line 548,
> in getRepodataFile
>   d = re.split('([\d\w]+-primary.sqlite.bz2)', s)
> UnboundLocalError: local variable 's' referenced before assignment
> 
> 2015-10-03 14:39:41,095 - controller.controller - CRITICAL - ZKVMError:
> [['KVMIBMIN70500', 'Error while installing packages.'], ('INSTALLER',
> 'INSTALLSYSTEM', 'INSTALL_MSG')]
> 
> Help? Please?
> 
> Ray...
> 



Re: SLES12 + EDEV + bug

2015-10-05 Thread Grzegorz Powiedziuk
> On Oct 3, 2015, at 1:08 PM, Grzegorz Powiedziuk <gpowiedz...@gmail.com> wrote:
> 
>> 
>> SUSE subscriptions are for a product line, not a particular version, unless 
>> that version is out of support.  So, if you have a valid subscription to any 
>> SUSE Linux Enterprise Server for System z, then you have a valid 
>> subscription for SLES12. Assuming the subscription was a "standard" or 
>> "priority" one, and not "basic" you're entitled to open support requests.  
>> Basic subscriptions come with support for installation, not subsequent 
>> problems.  But, if all you have is the trial version and nothing else, then 
>> yes, you're out of luck.  It will have to be pursued as an internally 
>> reported bug, which carries far lower priority. Unless someone else out 
>> there with a current standard or priority subscription runs into the same 
>> problem and reports it (hint), it could be slow going.
>> 
>> 
>> Mark Post
>> 
> 
> 
> done last night. After I figured exactly what is going on I've logged it and 
> reported it
> 
> Gregory Powiedziuk

Seems like it was acknowledged as a bug. I just got an email:

"I agree with you on allowing pvscan to scan non-partitioned devices for 
backwards compatibility.   I've opened Bug 948859  in your behalf.
Thanks for bringing it to our attention.”


Gregory Powiedziuk




Re: SLES12 + EDEV + bug

2015-10-03 Thread Grzegorz Powiedziuk
> 
> SUSE subscriptions are for a product line, not a particular version, unless 
> that version is out of support.  So, if you have a valid subscription to any 
> SUSE Linux Enterprise Server for System z, then you have a valid subscription 
> for SLES12.  Assuming the subscription was a "standard" or "priority" one, 
> and not "basic" you're entitled to open support requests.  Basic 
> subscriptions come with support for installation, not subsequent problems.  
> But, if all you have is the trial version and nothing else, then yes, you're 
> out of luck.  It will have to be pursued as an internally reported bug, which 
> carries far lower priority.  Unless someone else out there with a current 
> standard or priority subscription runs into the same problem and reports it 
> (hint), it could be slow going.
> 
> 
> Mark Post
> 


Done last night. After I figured out exactly what was going on, I logged and 
reported it.

Gregory Powiedziuk


Re: SLES12 + EDEV + bug

2015-10-02 Thread Grzegorz Powiedziuk
> On Oct 2, 2015, at 8:15 PM, Mark Post <mp...@suse.com> wrote:
> 
>>>> On 9/30/2015 at 10:59 PM, Grzegorz Powiedziuk <gpowiedz...@gmail.com> 
>>>> wrote: 
>> I think I might have found a small bug in latest update for SLES12  so this 
>> is just a FYI for everyone who made the same mistake I did. 
> -snip- 
>> Everything was working fine, until I did an update. Something has changed in 
>> the way LVM recognizes physical  devices and it totally brakes the whole 
>> system. It breaks all the parts where LVM is being called which includes the 
>> little initrd which is loaded by  zipl during the first stage of boot. 
> 
> I was able to reproduce this situation.  I did an install, with /home on a PV 
> using /dev/dasdb (an EDEV) and not /dev/dasdb1.  After updating to the 
> current maintenance level, pvscan doesn't show any PVs out there.  There were 
> a total of 259 updates that got applied.  I did the kernel first, since 
> that's an easy one to isolate.  It didn't cause the problem.  So, not sure 
> just yet what did.
> 
>> I did some debugging and I found that with latest update of SLES (I don*t 
>> know why because the LVM seems to be in the same version) lvm doesn*t like 
>> having metadata on *dasda* (fba) anymore. It likes it  only if its on dasda1 
>> When pvscan scans the disk it ends up with:
>>  /dev/dasda: Skipping: Partition table signature found 
>> so it does not find the label and it fails to bring online the pv and volume 
>> group. 
> 
> I haven't seen that message at all.  I just get nothing back.

Mark, that message shows up only if you use pvscan or pvck with verbose output, 
which of course does not happen if you are just booting. To dig out that message 
I linked that disk to another working, updated SLES (installed on dasda1) and 
ran pvscan -vvv over there. 

But if you wait long enough during the boot, it should bring up the dracut 
rescue console eventually (a few minutes). And I believe that over there you 
can run "lvm pvscan /dev/dasda -vvv" and you should get that message as well. 
And that's why it fails to boot. Dracut (with the lvm pvscan command) scans 
all devices so it can put together the volume group, and it just skips this 
disk, so it ends up with nothing. No root volume. 
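For what it's worth, the skip can be reproduced on a plain file, assuming the check is the standard MSDOS boot-signature test (bytes 0x55 0xAA at offset 510) that LVM's device filter performs; a hypothetical file-backed sketch:

```shell
# File-backed stand-in: MSDOS partition-table signature at offset 510
# plus an LVM label on sector 1 -- the combination the updated pvscan skips
img=$(mktemp)
dd if=/dev/zero of="$img" bs=512 count=8 2>/dev/null
printf '\125\252' | dd of="$img" bs=1 seek=510 conv=notrunc 2>/dev/null  # 0x55 0xAA
printf 'LABELONE' | dd of="$img" bs=512 seek=1 conv=notrunc 2>/dev/null

# The same test LVM makes: the two signature bytes at the end of sector 0
sig=$(dd if="$img" bs=1 skip=510 count=2 2>/dev/null | od -An -tx1 | tr -d ' ')
if [ "$sig" = "55aa" ]; then
    echo "Partition table signature found: device would be skipped"
fi
rm -f "$img"
```

On a real EDEV the signature comes from the ghost partition table, which is why the whole-device PV is rejected while dasda1 still works.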

> 
> You wouldn't believe how complex it is to try and figure out what block 
> devices are what, which to use, what ones have the "fake" partition tables 
> the DASD driver creates, what's virtual (VDISKs look just like 9336 devices 
> also), what not, etc.  Totally stinks.  Looking back, it would have been much 
> better in the long wrong for those imaginary partition tables to never have 
> been used, but here we are.
> 

> As Rick suggested, if you don't have a support request open with your support 
> provider, do so as soon as possible.  It's clear that something changed 
> between SLES12 GA and now, and we clearly don't have an automated QA test for 
> this situation, or whatever update caused it would never have made it out the 
> door.  Even if we wind up sticking with the current behavior, that's 
> something that will have to be managed carefully so that other customers 
> don't encounter problems.
> 

I will see if I can do it. I don't have a SLES12 license - I've just downloaded 
the 60-day trial. So I am not sure if I even have the option to open support 
requests. 

> Finally, thank you for the work you've done already.  Nice job on behalf of 
> the rest of the community, to say the least.
> 

Ahh no problem. I enjoy doing that kind of stuff. That’s not work for me ;) 

Gregory Powiedziuk


Re: SLES12 + EDEV + bug

2015-10-02 Thread Grzegorz Powiedziuk
> On Oct 2, 2015, at 10:38 PM, Grzegorz Powiedziuk <gpowiedz...@gmail.com> 
> wrote:
> 
>> 
>> On Oct 2, 2015, at 8:15 PM, Mark Post <mp...@suse.com> wrote:
>> 
>>>>> On 9/30/2015 at 10:59 PM, Grzegorz Powiedziuk <gpowiedz...@gmail.com> 
>>>>> wrote: 
>>> I think I might have found a small bug in latest update for SLES12  so this 
>>> is just a FYI for everyone who made the same mistake I did. 
>> -snip- 
>>> Everything was working fine, until I did an update. Something has changed 
>>> in 
>>> the way LVM recognizes physical  devices and it totally brakes the whole 
>>> system. It breaks all the parts where LVM is being called which includes 
>>> the 
>>> little initrd which is loaded by  zipl during the first stage of boot. 
>> 
>> I was able to reproduce this situation.  I did an install, with /home on a 
>> PV using /dev/dasdb (an EDEV) and not /dev/dasdb1. After updating to the 
>> current maintenance level, pvscan doesn't show any PVs out there.  There 
>> were a total of 259 updates that got applied.  I did the kernel first, since 
>> that's an easy one to isolate.  It didn't cause the problem.  So, not sure 
>> just yet what did.

I did more playing around and found out that the LVM source I downloaded works 
fine out of the box. The changes I made to the source code were not necessary. 
The SLES version of LVM must have some changes of its own. 


So I've checked what was installed with the latest update for LVM, and here it 
is (I guess I should have done that in the first place):

patch: SUSE-SLE-SERVER-12-2015-314
- LVM2 does not support unpartitioned DASD devices, which have a special
  format in the first 2 tracks, and will silently discard LVM2 label
  information written to them by pvcreate. Mark this type of device as
  unsupported. (bsc#894202)

and from suse.com:
Skip unpartitioned DASD devices, they are not supported. (bsc#894202)

I guess that's it. They wanted to fix things, if I understand this correctly, 
but this might endanger some users with older versions. 

Thanks
Gregory Powiedziuk









Re: SLES12 + EDEV + bug

2015-10-02 Thread Grzegorz Powiedziuk
> I did more playing around and I found out that that LVM source I’ve 
> downloaded is working fine out from box. These changes I made to the source 
> code were not necessary. 
> SLES version of LVM has to have some changes. 
> 
> 
> So I’ve checked what was installed with latest update for LVM and here it is 
> (I guess I should have done it on the first place) :
> 
> patch: SUSE-SLE-SERVER-12-2015-314
> - LVM2 does not support unpartitioned DASD device which has special format
>   in the first 2 tracks and will silently discard LVM2 label information
>   written to it by pvcreate. Mark this type of device as unsupported.
>   (bsc#894202)
> 
> and from suse.com 
> Skip unpartitioned DASD devices, they are not supported. (bsc#894202)
> 
> I guess that’s it. They wanted to fix things If I understand this correctly 
> but this might put in danger some users with older versions. 
> 
> Thanks
> Gregory Powiedziuk

They probably wanted to change it only for pvcreate, so that it would only be 
possible to create PVs on partitioned disks. But by making the change in a 
shared function, they also crippled pvscan, so using LVM volumes that already 
exist on unpartitioned devices became impossible. 

Gregory Powiedziuk





Re: SLES12 + EDEV + bug

2015-10-01 Thread Grzegorz Powiedziuk
Thank you, Rick, for your input.
Here are some more experiments:

>
> For FBA (including EDEV and SAN), use 'fdisk'.
> (And forgive me for repeating some details that you already know.)
> In my experience, the partition logic sees a default partition even when
> one was not explicitly created.
> I found that I could get rid of this ghost partition by explicitly
> running 'fdisk' and writing an empty partition table. Then "dasda1" goes
> away.
>
>
>
I did an experiment and created new small EDEV devices and linked them to
two systems - original SLES12 (from DVD) and updated version of SLES12
In both cases, running fdisk and saving the partition table did not get rid
of the "dasdb1" device.
It's, as you said, a ghost partition defined on the fly by the dasd/fba driver.
Nothing is written to the disk.
But creating a couple of dummy partitions and removing them afterwards does
leave some data on the disk (I've dumped it with dd and looked closely),
and it does get rid of that ghost partition! But the device is still not
usable by LVM in updated SLES12. It still claims that a partition table
signature was found. The original SLES12 doesn't care! LVM will be happy
to create labels on it.

At some point I created a label on dasdb1 and then on dasdb. Then I dumped
the first blocks of dasdb and looked for ASCII. I found both of the labels,
just 2 sectors from each other.
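A quick way to see where such labels sit is to grep a raw dump for the LABELONE signature; a file-backed sketch (the offsets are made up to match the anecdote: the whole-device label on sector 1, the dasdb1 label two sectors later):

```shell
# Print the sector offset of every LVM label signature in an image
img=$(mktemp)
dd if=/dev/zero of="$img" bs=512 count=2048 2>/dev/null
printf 'LABELONE' | dd of="$img" bs=512 seek=1 conv=notrunc 2>/dev/null  # dasdb label
printf 'LABELONE' | dd of="$img" bs=512 seek=3 conv=notrunc 2>/dev/null  # dasdb1 label

# -a treats binary as text, -b prints byte offsets, -o prints each match
grep -abo LABELONE "$img" | awk -F: '{print "LVM label at sector", int($1/512)}'
rm -f "$img"
```

On a real device, the same pipeline works on a dump taken with `dd if=/dev/dasdb bs=512 count=2048 of=/tmp/head.img`.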

So it seems like something has definitely changed, and it is no longer
possible to use an EDEV without this ghost partition, while it was possible
in the DVD (ver 12.0.20141010) version of SLES12.
This may lead to big problems if someone who uses an EDEV like this does
an update.
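A hedged pre-update sanity check along those lines; the `check_pvs` helper and the dasdX naming convention are my assumptions, and on a live system the input would come from `pvs --noheadings -o pv_name`:

```shell
# Flag physical volumes sitting directly on a whole DASD/EDEV device;
# a trailing partition number (dasda1) means the PV is on a partition and safe
check_pvs() {
    grep -E '/dev/dasd[a-z]+$' || true
}

# Canned example: dasdb carries its PV on the whole device, so it is flagged
printf '/dev/dasda1\n/dev/dasdb\n' | check_pvs   # prints /dev/dasdb
```

Anything the check prints would stop working with the updated LVM, so it should be migrated to a partition before updating.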

Gregory



>
>



SLES12 + EDEV + bug

2015-09-30 Thread Grzegorz Powiedziuk
I think I might have found a small bug in the latest update for SLES12, so this 
is just an FYI for everyone who made the same mistake I did. 

If you use edevices, you know that the FBA driver in Linux automagically (as 
Mark explained it to me a few years ago ;) ) creates a device "1" (dasda1 for 
example) on the edevice. 
You end up with, for example, dasda + dasda1, and you use dasda1 for the OS 
(including LVM). No fdisks, no fdasds. 

I haven't used edevices in a while, so I forgot about that, and I did my recent 
SLES12 install on "dasda" (FBA) instead of "dasda1". I don't know how I did 
that, but I did, and the SLES install wizard didn't complain. The system 
installed fine and was working OK.

pvscan on a build like this returned:
  Volume Groups with the clustered attribute will be inaccessible.
  PV /dev/dasda   VG root   lvm2 [19.53 GiB / 0 free]

while in proper installation it looks like this:

  PV /dev/dasda1   VG lnx15   lvm2 [19.53 GiB / 0 free]
  Total: 1 [19.53 GiB] / in use: 1 [19.53 GiB] / in no VG: 0 [0   ]


Everything was working fine, until I did an update. Something has changed in 
the way LVM recognizes physical devices, and it totally breaks the whole 
system. It breaks all the parts where LVM is called, which includes the 
little initrd that is loaded by zipl during the first stage of boot. 

I did some debugging and found that with the latest update of SLES (I don't 
know why, because LVM seems to be the same version) lvm doesn't like having 
metadata on "dasda" (FBA) anymore. It likes it only if it's on dasda1. 
When pvscan scans the disk it ends up with:
  /dev/dasda: Skipping: Partition table signature found 
so it does not find the label, and it fails to bring the PV and volume group 
online. 

This edevice, if linked to an original SLES12 or any RHEL, works fine. LVM 
finds the label on /dev/dasdb (FBA) and I can mount it without a problem. 

I didn’t find anything different in lvm.conf which could cause this. 

To fix this (well, it's rather a dirty hack), I downloaded the source code of 
LVM, found the instruction where it exits with the message "Skipping: Partition 
table signature found", commented out that section, compiled and installed this 
lvm, and rebuilt the initrds (both of them). It worked; I got my system running 
again. 

So the lesson is to make sure that SLES is installed on dasda1 (FBA), not 
dasda (this applies only if you have edevices - with standard ECKDs it's 
probably OK to do that). 

If you already have it on dasda (FBA) - don't update it without preparation. 

Perhaps SLES shouldn't allow installing a system this way at all?

Gregory Powiedziuk

 



Re: Documentation for Linux on z Systems and KVM - new

2015-09-26 Thread Grzegorz Powiedziuk
Hi, I am not sure I understand the KVM offering from IBM. 
I know that it is available as a preview in SLES12.

But what is:
- 5648-KVM KVM for IBM z Systems V1.1
As far as I can tell, it is something that can be ordered from the IBM catalog. 
But what is it? Did IBM come out with its own Linux distro with KVM 
preinstalled (kind of like xCAT?)? Does it come on DVD, or can it be downloaded? 
How different is it from KVM on SLES? 
Thank you
Gregory



> On Sep 22, 2015, at 8:22 AM, Dorothea Matthaeus  wrote:
> 
> Check out the new Linux on z Systems base technology publications on the
> IBM Knowledge Center and on developerWorks:
> 
> KVM Virtual Server Quick Start
> KVM Virtual Server Management
> Device Drivers, Features, and Commands for Linux as a KVM Guest
> Installing SUSE Linux Enterprise Server 12 as a KVM Guest
> 
> See:
> 
> 
> http://www.ibm.com/support/knowledgecenter/linuxonibm/liaaf/lnz_r_kvm_base.html
> 
> 
> http://www.ibm.com/developerworks/linux/linux390/documentation_dev.html
> 
> 
> Dorothea Matthaeus
> Linux on z Systems Information Development
> IBM Deutschland Research and Development GmbH
> 



Re: Documentation for Linux on z Systems and KVM - new

2015-09-23 Thread Grzegorz Powiedziuk
BTW, I was playing with KVM a few days ago and it looks pretty awesome in terms 
of maintaining the environment and deploying new VMs, but the performance for me 
was really bad.
And I mean extremely bad. I am not sure if it was because I made the KVM host 
(SLES12) run as a virtual machine in z/VM, or whether I was doing something else 
wrong. I know that having KVM virtual machines at a 3rd level (under SLES -> 
under z/VM) will impact performance, but in my case it was extremely bad. It was 
like running Linux in Hercules S/390 in 2006 on an old x86 desktop. 

The installation of Linux in a KVM virtual machine took 3-4 hours. Every 
operation that involves CPU and memory takes 3-10 times longer than on the KVM 
host itself. 
Whenever something is happening in a KVM virtual machine, the performance 
toolkit shows that the KVM host is doing about 50% in supervisor mode and 50% in 
emulation mode, which makes the T/V ratio for this machine about 2, which is 
pretty bad. I didn't have time to do more investigation on this yet.
The KVM host (server) sees about 50% of CPU time as "steal time". 
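The steal figure can be read straight from /proc/stat, where the 8th value on the aggregate cpu line is steal ticks (on 2.6.11+ kernels); a small sketch that turns one sample line into a percentage, with a made-up input for illustration:

```shell
# steal_pct: read one "cpu ..." line in /proc/stat format from stdin and
# print steal time as a percentage of all ticks (field $9 is steal)
steal_pct() {
    awk '/^cpu /{ total = 0
                  for (i = 2; i <= NF; i++) total += $i
                  printf "%.1f\n", ($9 / total) * 100 }'
}

# Canned sample: 500 of 1000 total ticks stolen by the hypervisor
echo "cpu 100 0 200 100 50 0 0 500 50 0" | steal_pct   # prints 50.0
# On a live guest: head -1 /proc/stat | steal_pct
```

A single sample gives the average since boot; sampling twice and differencing the tick counts gives the current rate that top reports in the `st` column.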
 
Anyone else played with it? 
Thanks
Gregory Powiedziuk



> On Sep 22, 2015, at 8:22 AM, Dorothea Matthaeus  wrote:
> 
> Check out the new Linux on z Systems base technology publications on the
> IBM Knowledge Center and on developerWorks:
> 
> KVM Virtual Server Quick Start
> KVM Virtual Server Management
> Device Drivers, Features, and Commands for Linux as a KVM Guest
> Installing SUSE Linux Enterprise Server 12 as a KVM Guest
> 
> See:
> 
> 
> http://www.ibm.com/support/knowledgecenter/linuxonibm/liaaf/lnz_r_kvm_base.html
> 
> 
> http://www.ibm.com/developerworks/linux/linux390/documentation_dev.html
> 
> 
> Dorothea Matthaeus
> Linux on z Systems Information Development
> IBM Deutschland Research and Development GmbH
> 



Re: Documentation for Linux on z Systems and KVM - new

2015-09-23 Thread Grzegorz Powiedziuk
OK, thanks for the explanation. I knew about the missing SIE level from one of
the SHARE presentations, but I didn't know that the performance degradation
would be so huge. I will try to get a free LPAR just for KVM at some point, but
for now I will keep learning and exploring its features in z/VM.
So far I really like it. Installation was a breeze ... besides the fact that I
needed to get and install a demo of SLES, because Red Hat was again one step
behind. As long as KVM can get close to z/VM performance, I see great potential
in it.

Gregory P


2015-09-23 15:43 GMT-04:00 Neale Ferguson <ne...@sinenomine.net>:

> KVM under z/VM will suck because the hardware only supports two levels of
> “SIE”. SIE is what's used to allow an LPAR and a virtual machine to operate
> at hardware speeds. A lot of the stuff that used to be done by VM/SP and
> predecessors when running virtual machines is done by the hardware (well
> the microcode/millicode/…).
>
> A virtual machine that tries to dispatch a guest of its own on SIE (like
> KVM running on z/VM) has to get all those operations performed by the
> hypervisor and not the hardware. Thus you get an enormous overhead and the
> performance you are experiencing.
>
> On 9/23/15, 3:32 PM, "Linux on 390 Port on behalf of Grzegorz Powiedziuk"
> <LINUX-390@VM.MARIST.EDU on behalf of gpowiedz...@gmail.com> wrote:
>
> >BTW, I was playing with KVM few days ago and it looks pretty awesome in
> >terms of maintaining the environment and deploying new VMs but the
> >performance for me was really bad.
> >And I mean extremely bad. I am not sure if it was because I made the KVM
> >host (sles12) run as a virtual machine in z/VM or I was  doing something
> >else wrong. I know that having kvm virtual machines in a 3rd level (under
> >sles -> under z/VM) will impact performance but my case it was extremly
> >bad. It was like running linux in hercules s390 in 2006 on old x86
> >desktop.
> >
> >The installation of linux in kvm virtual machine took 3-4 hours. Every
> >operation that involves cpu and memory takes 3-10 time more time than on
> >a KVM host itself.
> >Whenever something is happening in kvm virtual machine, the performance
> >toolkit shows that KVM host is doing about 50% in supervisor mode and 50%
> >in emulation mode which makes the t/v ratio for this machine about 2
> >which is pretty bad. I didn’t have time to do more investigation on this
> >yet.
> >The KVM host (Server) sees about 50% cpu time as a “steal time”.
>



Re: Documentation for Linux on z Systems and KVM - new

2015-09-23 Thread Grzegorz Powiedziuk
That's very interesting. I wasn't aware of this "stress" tool, so I downloaded 
it and ran a couple of tests with it.
If I run a basic --cpu 1 test (-n, according to the help, is a dry run), the KVM 
server spins the CPU at 100% in user time. So no stealing at all. 

Could you run a couple of tests like this (I am providing my own results):

KVM server (2 CPU but it is one threaded task so doesn’t matter how many)

# time for i in {1..500}; do  dd if=/dev/zero of=/dev/shm/test bs=1M count=10 
;echo interation $i done; done
….
10485760 bytes (10 MB) copied, 0.00273469 s, 3.8 GB/s
interation 500 done

real    0m2.223s
user    0m0.171s
sys     0m2.002s 

During the test (I’ve changed 1..500 to 1..5000 to have more time to catch top 
output) top was showing on average:
%Cpu1  :  7.0 us, 83.7 sy,  0.0 ni,  7.0 id,  0.0 wa,  0.0 hi,  0.0 si,  2.3 st


KVM virtual machine(1 CPU adding CPUs will not make difference in this case):
# time for i in {1..500}; do  dd if=/dev/zero of=/dev/shm/test bs=1M count=10 
;echo interation $i done; done
10485760 bytes (10 MB) copied, 0.0524781 s, 200 MB/s
interation 500 done

real    0m38.556s
user    0m1.180s
sys     0m11.550s
During the test (no need to do more than 1..500 because there is enough time to 
check top) the KVM host was showing:
%Cpu0  : 53.7 us,  0.0 sy,  0.0 ni,  0.0 id,  0.0 wa,  0.7 hi,  0.0 si, 45.6 st
And KVM virtual machine itself was showing
%Cpu(s):  3.8 us, 30.1 sy,  0.0 ni,  0.0 id,  0.0 wa,  0.0 hi,  0.0 si, 66.2 st

Now pure CPU stress tests

KVM Host:
# time for i in {1..10}; do  dd if=/dev/urandom of=/dev/null bs=1M count=10 
;echo interation $i done; done
10485760 bytes (10 MB) copied, 0.630299 s, 16.6 MB/s
interation 10 done

real    0m6.294s
user    0m0.004s
sys     0m6.273s

top output on KVM host
%Cpu1  :  0.0 us,100.0 sy,  0.0 ni,  0.0 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st


KVM virtual machine:
# time for i in {1..10}; do  dd if=/dev/urandom of=/dev/null bs=1M count=10 
;echo interation $i done; done
10485760 bytes (10 MB) copied, 0.64323 s, 16.3 MB/s
interation 10 done

real    0m6.779s
user    0m0.014s
sys     0m6.345s
top output on KVM host
%Cpu1  : 96.0 us,  0.0 sy,  0.0 ni,  0.0 id,  0.0 wa,  0.0 hi,  0.0 si,  4.0 st
top output on KVM virtual machine
0.6 us, 85.4 sy,  0.0 ni,  7.9 id,  0.0 wa,  0.0 hi,  0.0 si,  6.0 st


So it seems that when only the CPU is involved, the difference is very small or 
none. I am getting similar results if I do complicated equations with big 
numbers (the virtual machine solves them in almost the same time as the 
1st-level host). 

But when memory is involved, everything slows down drastically. I wonder what 
results you will get from the dd from /dev/zero to /dev/shm. 

And no, my system has plenty of memory; paging in z/VM is ZERO. Hardly anything 
runs on this LPAR. 
The KVM host has plenty of real memory - 8G - and the virtual machine is set to 
4G. It still has a few gigs left. No swapping; nothing else runs here. 

Another huge difference in times is ... starting YaST. On the 1st-level host it 
takes fractions of a second. On the KVM virtual machine it takes >1 min.

Best regards
Gregory P








> On Sep 23, 2015, at 6:42 PM, Viktor Mihajlovski <mihaj...@linux.vnet.ibm.com> 
> wrote:
> 
> On 23.09.2015 15:32, Grzegorz Powiedziuk wrote:
>> BTW, I was playing with KVM few days ago and it looks pretty awesome in 
>> terms of maintaining the environment and deploying new VMs but the 
>> performance for me was really bad.
>> And I mean extremely bad. I am not sure if it was because I made the KVM 
>> host (sles12) run as a virtual machine in z/VM or I was  doing something 
>> else wrong. I know that having kvm virtual machines in a 3rd level (under 
>> sles -> under z/VM) will impact performance but my case it was extremly bad. 
>> It was like running linux in hercules s390 in 2006 on old x86 desktop.
>> 
>> The installation of linux in kvm virtual machine took 3-4 hours. Every 
>> operation that involves cpu and memory takes 3-10 time more time than on a 
>> KVM host itself.
>> Whenever something is happening in kvm virtual machine, the performance 
>> toolkit shows that KVM host is doing about 50% in supervisor mode and 50% in 
>> emulation mode which makes the t/v ratio for this machine about 2 which is 
>> pretty bad. I didn’t have time to do more investigation on this yet.
>> The KVM host (Server) sees about 50% cpu time as a “steal time”.
>> 
> 
> Although KVM should generally be run in LPAR a slowdown in an order of 
> magnitude in z/VM seems a bit odd.
> I actually do run KVM under z/VM at my own risk (I am absolutely NOT 
> suggesting to do that). To recreate your problems I started a CPU burning 
> task in a KVM guest (stress -n 1) and see the following in the host:
> 
> Tasks: 162 total,   1 running, 161 sleeping,   0 stopped,   0 zombie
> %Cpu(s): 32,5 us,  0,7 sy,  0,0 ni,

Re: Having issues attaching to LUNs after Brocade mode change

2015-09-21 Thread Grzegorz Powiedziuk
In this list there are many, many different target WWPNs. Is it possible that 
these are what your Linux guests can actually see at the moment? 
If that's the case, then your zoning does allow seeing all of these, despite 
what you have said. 
Otherwise, where would Linux get these WWPNs from? 
Gregory Powiedziuk



> On Sep 21, 2015, at 11:16 AM, Will, Chris  wrote:
> 
> We recently had a Brocade mode change and when the channel came back we were 
> flooded with the following messages and have not been able to attach any LUNs 
> to this channel (they attach and then timeout).  Our zoning should only allow 
> us to see/discover one port.
> 
> Sep 19 20:58:45 edidb2prd1 kernel: zfcp.ac341f: 0.0.0404: The local link has 
> been restored
> Sep 19 20:58:45 edidb2prd1 kernel: qdio: 0.0.0404 ZFCP on SC 0 using AI:1 
> QEBSM:1 PRI:1 TDD:1 SIGA: W A
> Sep 19 20:59:00 edidb2prd1 kernel: zfcp.9b70c0: 0.0.0404: Not enough FCP 
> adapter resources to open remote port 0x5000144280501200
> Sep 19 20:59:00 edidb2prd1 kernel: zfcp.9b70c0: 0.0.0404: Not enough FCP 
> adapter resources to open remote port 0x50001442903b0910
> Sep 19 20:59:00 edidb2prd1 kernel: zfcp.9b70c0: 0.0.0404: Not enough FCP 
> adapter resources to open remote port 0x50001442804ecf00
> Sep 19 20:59:00 edidb2prd1 kernel: zfcp.9b70c0: 0.0.0404: Not enough FCP 
> adapter resources to open remote port 0xc05076e031005b41
> Sep 19 20:59:00 edidb2prd1 kernel: zfcp.9b70c0: 0.0.0404: Not enough FCP 
> adapter resources to open remote port 0xc05076e0310014e8
> Sep 19 20:59:00 edidb2prd1 kernel: zfcp.9b70c0: 0.0.0404: Not enough FCP 
> adapter resources to open remote port 0xc05076e0310014dc
> Sep 19 20:59:00 edidb2prd1 kernel: zfcp.9b70c0: 0.0.0404: Not enough FCP 
> adapter resources to open remote port 0xc05076e031001460
> Sep 19 20:59:00 edidb2prd1 kernel: zfcp.9b70c0: 0.0.0404: Not enough FCP 
> adapter resources to open remote port 0xc05076e0310014a4
> Sep 19 20:59:00 edidb2prd1 kernel: zfcp.9b70c0: 0.0.0404: Not enough FCP 
> adapter resources to open remote port 0xc05076e0310014ec
> Sep 19 20:59:00 edidb2prd1 kernel: zfcp.9b70c0: 0.0.0404: Not enough FCP 
> adapter resources to open remote port 0xc05076e031001454
> Sep 19 20:59:00 edidb2prd1 kernel: zfcp.9b70c0: 0.0.0404: Not enough FCP 
> adapter resources to open remote port 0xc05076e031001408
> Sep 19 20:59:00 edidb2prd1 kernel: zfcp.9b70c0: 0.0.0404: Not enough FCP 
> adapter resources to open remote port 0xc05076e031001344
> Sep 19 20:59:00 edidb2prd1 kernel: zfcp.9b70c0: 0.0.0404: Not enough FCP 
> adapter resources to open remote port 0xc05076e031001404
> Sep 19 20:59:00 edidb2prd1 kernel: zfcp.9b70c0: 0.0.0404: Not enough FCP 
> adapter resources to open remote port 0xc05076e031001348
> Sep 19 20:59:00 edidb2prd1 kernel: zfcp.9b70c0: 0.0.0404: Not enough FCP 
> adapter resources to open remote port 0xc05076e031001420
> Sep 19 20:59:00 edidb2prd1 kernel: zfcp.9b70c0: 0.0.0404: Not enough FCP 
> adapter resources to open remote port 0xc05076e031001458
> Sep 19 20:59:00 edidb2prd1 kernel: zfcp.9b70c0: 0.0.0404: Not enough FCP 
> adapter resources to open remote port 0xc05076e031005701
> Sep 19 20:59:00 edidb2prd1 kernel: zfcp.9b70c0: 0.0.0404: Not enough FCP 
> adapter resources to open remote port 0x5973001411b4
> Sep 19 20:59:00 edidb2prd1 kernel: zfcp.9b70c0: 0.0.0404: Not enough FCP 
> adapter resources to open remote port 0x5000144280501200
> Sep 19 20:59:00 edidb2prd1 kernel: zfcp.9b70c0: 0.0.0404: Not enough FCP 
> adapter resources to open remote port 0x50001442903b0910
> Sep 19 20:59:00 edidb2prd1 kernel: zfcp.9b70c0: 0.0.0404: Not enough FCP 
> adapter resources to open remote port 0x50001442804ecf00
> Sep 19 20:59:00 edidb2prd1 kernel: zfcp.9b70c0: 0.0.0404: Not enough FCP 
> adapter resources to open remote port 0x50001442903c5210
> Sep 19 20:59:00 edidb2prd1 kernel: zfcp.9b70c0: 0.0.0404: Not enough FCP 
> adapter resources to open remote port 0xc05076e031005b41
> Sep 19 20:59:00 edidb2prd1 kernel: zfcp.9b70c0: 0.0.0404: Not enough FCP 
> adapter resources to open remote port 0xc05076e0310014e8
> Sep 19 20:59:00 edidb2prd1 kernel: zfcp.9b70c0: 0.0.0404: Not enough FCP 
> adapter resources to open remote port 0xc05076e0310014dc
> Sep 19 20:59:00 edidb2prd1 kernel: zfcp.9b70c0: 0.0.0404: Not enough FCP 
> adapter resources to open remote port 0xc05076e031001460
> Sep 19 20:59:00 edidb2prd1 kernel: zfcp.9b70c0: 0.0.0404: Not enough FCP 
> adapter resources to open remote port 0xc05076e0310014a4
> Sep 19 20:59:00 edidb2prd1 kernel: zfcp.9b70c0: 0.0.0404: Not enough FCP 
> adapter resources to open remote port 0xc05076e0310014ec
> Sep 19 20:59:00 edidb2prd1 kernel: zfcp.9b70c0: 0.0.0404: Not enough FCP 
> adapter resources to open remote port 0xc05076e031001454
> Sep 19 20:59:00 edidb2prd1 kernel: zfcp.9b70c0: 0.0.0404: Not enough FCP 
> adapter resources to open remote port 0xc05076e031001408
> Sep 19 20:59:00 edidb2prd1 kernel: zfcp.9b70c0: 0.0.0404: Not enough FCP 

Re: zFCP and NPIV - limit of 32

2015-08-20 Thread Grzegorz Powiedziuk
 
 On Aug 20, 2015, at 9:23 AM, Alan Altmark alan_altm...@us.ibm.com wrote:
 
 The constant in this discussion is that FCP subchannels (device numbers) 
 with the same EQID are defined to have the same access rights, without 
 regard to the chpid.  This means they are in the same SAN zone and are 
 masked to the same LUNs.  If you are using NPIV, then that is done per 
 subchannel (zoned with the virtual WWPNs).  If you're not using NPIV, then 
 all subchannels on the chpid have the same access rights (zoned with the 
 physical WWPN or at the port level).
 
 Keep an eye out for enhancements that will make pre-zoning much easier, 
 without the need for a prediction tool.
 

Will do! 

 No, it's the correct thing.  If every NPIV-enabled subchannel is zoned 
 separately, then every subchannel will have a unique EQID.
 
 For simplicity:
 Member A
  CHPID 60   6000-61A3 (that's the 420 device limit using 2 CUs)
  CHPID 70   7000-71A3
 
 Member B
  CHPID 60   6000-61A3
  CHPID 70   7000-71A3
 
 Virtual machine LINUX1
   DEDICATE 6000 6101  (x101 has been assigned to this guest)
   DEDICATE 7000 7101
   Multipath config includes all target WWPNs and LUNs.
 
 Virtual machine LINUX2
   DEDICATE 6000 6102  (x102 has been assigned to this guest)
   DEDICATE 7000 7102
   Multipath config includes all target WWPNs and LUNs.
 
 Each unique tuple (610x,710x) has been zoned and masked identically and 
 SYSTEM CONFIG has assigned each tuple the same EQID on all members.   As 
 you relocate back and forth, virtual 6000 and 7000 will be on different 
 chpids.  But the beauty of it is that you don't care which chpid.  The 
 only difference it makes is that the local WWPN will be different.  But 
 since the two WWPNs have the same access rights, no one cares.
 
 In a simple configuration, you could get away with using the user ID as 
 the EQID.
 
 Please note that the above illustrate why we suggest that you use the same 
 device numbers in your I/O configurations.  If you don't, then the COMMAND 
 ATTACH EQID will be needed.

That’s exactly what I did. Same addresses on all LPARs, and on every LPAR, 60XX 
always goes to fabric A and 61XX always goes to fabric B (switch A and switch 
B). 
So it’s quite easy to figure out.
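For reference, a minimal sketch of what the matching SYSTEM CONFIG statements could look like on both members, following Alan's example. The device numbers and EQID names here are illustrative only, and the exact RDEVICE operands should be checked against the CP Planning and Administration book for your z/VM level:

```
/* Identical statements on Member A and Member B.                */
/* Each (610x,710x) tuple gets its own EQID and is zoned/masked  */
/* identically on both fabrics.                                  */
RDEVICE 6101 EQID LINUX1A TYPE FCP
RDEVICE 7101 EQID LINUX1B TYPE FCP
RDEVICE 6102 EQID LINUX2A TYPE FCP
RDEVICE 7102 EQID LINUX2B TYPE FCP
```

Combined with the DEDICATE 6000 6101 / DEDICATE 7000 7101 style directory entries shown above, LGR can then pick the equivalent subchannel on the target member.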

Thank you one more time. I definitely have learned something here! 
I think I know what to do now. 
Gregory 

 
 This all works because relocation causes Linux to re-establish its 
 connections to the LUNs.
 
 Alan Altmark
 
 Senior Managing z/VM and Linux Consultant
 Lab Services System z Delivery Practice
 IBM Systems  Technology Group
 ibm.com/systems/services/labservices
 office: 607.429.3323
 mobile; 607.321.7556
 alan_altm...@us.ibm.com
 IBM Endicott
 
 --
 For LINUX-390 subscribe / signoff / archive access instructions,
 send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
 http://www.marist.edu/htbin/wlvindex?LINUX-390
 --
 For more information on Linux on System z, visit
 http://wiki.linuxvm.org/



Re: zFCP and NPIV - limit of 32

2015-08-20 Thread Grzegorz Powiedziuk
 On Aug 19, 2015, at 11:18 PM, Cohen, Sam sam.co...@lrs.com wrote:
 
 Gregory,
 
 It is, of course, more art than science.  In my case, I've done the following:
 
 My SYSTEM CONFIG file has EQID FCP6000 for addresses 6000 and 6100, EQID 
 FCP6001 for addresses 6001 and 6101, etc.  The SAN team has zoned the virtual 
 WWPNs for 6000 and 6100 in both LPARs to the same target WWPN/LUN, as well as 
 having zoned the virtual WWPNs for 6001 and 6101 in both LPARs to the same 
 target WWPN/LUN (but a different target than that of the 6000/6100 pair).  
 For my first Linux image, I include in the directory entry:
 
 COMMAND ATTACH EQID FCP6000 TO * AS 6000
 COMMAND ATTACH EQID FCP6000 TO * AS 6100
 
 My second Linux image uses EQID FCP6001 against 6001 and 6101 in both LPARs 
 and the directory entry includes:
 
 COMMAND ATTACH EQID FCP6001 TO * AS 6000
 COMMAND ATTACH EQID FCP6001 TO * AS 6100
 
 It's a bit manually intensive as I have to keep track of which pair of 
 addresses belongs to which guest, and the multipath.conf has to be modified 
 for the correct target SCSI disk (since I use the multipath daemon to balance 
 the traffic).  If I had a SAN Virtualization Controller (SVC), I wouldn't 
 need to modify the multipath.conf, since I would present the same WWPN/LUN 
 target to each guest and the SVC would direct the I/O to the correct physical 
 target.
 
 Thanks,
 
 
 Sam Cohen
 Levi, Ray  Shoup, Inc.
 
 

Thank you very much for your input Sam!
It helped to get  a better picture. 
Gregory



Re: zFCP and NPIV - limit of 32

2015-08-19 Thread Grzegorz Powiedziuk
 On Aug 18, 2015, at 12:00 PM, Alan Altmark alan_altm...@us.ibm.com wrote:
 
 On Tuesday, 08/18/2015 at 08:50 EDT, Grzegorz Powiedziuk 
 gpowiedz...@gmail.com wrote:
 
 Got it! Thank you so much for spending time on explanation. That 
 definitely
 helps.
 
 You're welcome.
 
 The bottom line is - I can “over provision” (or kind of thin provision) fcp
 channels as long as I keep below 32 attached paths per chpid and make 
 sure that
 I do EQIDs vertically only - separate EQID for every set of paths on two
 chpids  (one on each system).
 
 That's what I would do.

I’ve been thinking about this, and I am afraid that using one EQID for all 
devices on one CHPID will not work for us. 
The point is to predict the WWPNs that would be used if a machine is relocated, 
so the storage team can add them to the zone configuration beforehand. 

If the LGR process picks the first eligible device from the list with the same 
EQID, instead of the one with the same address, then the relocation will cause 
linux to lose paths. 
I’ve always done EQIDs for every pair of FCP devices instead of one EQID per 
chpid, so I didn’t even think about such a situation before. That is probably not 
a best practice, but what else can I do? 
Is doing separate EQIDs for every virtual machine on specific FCP devices a 
bad thing? 
For example 

EQID “Linux1z” on
6000 on system A 
6000 on system B  

EQID “Linux1y” on
6100 on system A
6100 on system B

Two EQIDs, to avoid a possible reversal of order, which would confuse linux. I know 
it would eventually recover, but there would be a delay before it figures out that 
on a specific channel it sees a different set of target (storage) WWPNs after 
the relocation. 


Gregory 

 
 Then I would open an RFE to ask for CP to handle this automatically.  It 
 may be as simple as CP noting the chpid distribution on the source side 
 and attempting to match that on the target.  I.e. If guest #1 was using a 
 PURPLE FCP on chpid X and guest #2 was using a PURPLE FCP on chpid Y, then 
 CP could map them to PURPLE FCP subchannels on chpids X' and Y' on the 
 target (if available).  Further, remember the mapping and continue to use 
 it.  That way the guests will end up with the same subchannels-per-chpid 
 distribution on the target.
 
 This technique also applies to OSA and HiperSockets, where separation is 
 enforced (only) at the chpid boundary and you have explicitly chosen 
 devices based on their chpid association.  You don't want guests that were 
 communicating on the same chpid on system A to suddenly be on separate 
 chpids on System B.
 
 Alan Altmark
 
 Senior Managing z/VM and Linux Consultant
 Lab Services System z Delivery Practice
 IBM Systems  Technology Group
 ibm.com/systems/services/labservices
 office: 607.429.3323
 mobile; 607.321.7556
 alan_altm...@us.ibm.com
 IBM Endicott
 


Re: zFCP and NPIV - limit of 32

2015-08-18 Thread Grzegorz Powiedziuk
 On Aug 17, 2015, at 4:57 PM, Alan Altmark alan_altm...@us.ibm.com wrote:
 
 On Monday, 08/17/2015 at 03:04 EDT, Grzegorz Powiedziuk 
 gpowiedz...@gmail.com wrote:
 On Aug 17, 2015, at 11:10 AM, Alan Altmark alan_altm...@us.ibm.com 
 wrote:
 
 The limit of 32 is on the number of active NPIV subchannels, not the
 number of active connections being used by a given NPIV sub channel.
 
 active NPIV subchannels - what exactly does it mean? If an NPIV device just 
 sits there in z/VM as “FREE”, not attached to anything, is it considered to be 
 an active NPIV subchannel? I guess it is, but I want to make it clear.
 If yes, then I shouldn’t be doing what I’ve asked about, and I have my 
 answer.
 
 An active NPIV subchannel is one that is attached to a guest and is being 
 used, or one that is part of an EDEVICE.

Sweet! 

 
 I thought that the EQID is there so z/VM knows which FCP device it 
 should pick from a target system when a machine is being LGR-ed. I am not 
 sure if I understand what you meant with overloading or not overloading. 
 I mean, without an EQID it wouldn’t even LGR a virtual machine. Can you 
 elaborate on that?
 
 The EQID allows CP to select an available equivalent device from the 
 target system, where “equivalent” is defined to mean “with the same EQID 
 and device type”.
 
 Let's assume that you have
 a) Ten (10) FCP paths on system A and B, and each path has 100 
 NPIV-enabled subchannels defined on it, for a total of 1000 NPIV-enabled 
 subchannels.
 b) 100 guests, each using two NPIV-enabled FCP subchannels, for a total of 
 200 active subchannels.
 c) You are using seven of your ten FCP paths since you are placing no more 
 than 32 active subchannels on each path.
 
 With me so far?
 
yes! 

 Now, if you use EQID PURPLE on those 200 subchannels (on both A and B), 
 you will discover that each path on B will fill to its capacity (100) 
 before CP moves onto the next.  That means that if you relocated all 100 
 guests, CP will be using exactly two paths on the target system.
 
 This behavior is a by-product of the fact that devices with the same EQID 
 are consumed in device address order.  No load balancing is performed.
 

Oh I see now. 

 Oops.
 
 So to address this, make sure that the EQID assigned to the subchannels on 
 an FCP path is unique to that path.  This will keep all the PURPLE 
 subchannels on a single chpid, all the RED ones on another, and all the 
 BLUE ones on a third.  (You might consider using an EQID like FCPnnn, 
 where nnn is a sequence number that you assign as you consume FCP paths.)
 

Got it! Thank you so much for spending time on explanation. That definitely 
helps. 
The bottom line is - I can “over provision” (or kind of thin provision) fcp 
channels as long as I stay below 32 attached paths per chpid and make sure that 
I do EQIDs vertically only - a separate EQID for every set of paths on two chpids 
(one on each system). 
 
Best regards
Gregory Powiedziuk 


 Alan Altmark
 
 Senior Managing z/VM and Linux Consultant
 Lab Services System z Delivery Practice
 IBM Systems  Technology Group
 ibm.com/systems/services/labservices
 office: 607.429.3323
 mobile; 607.321.7556
 alan_altm...@us.ibm.com
 IBM Endicott
 


zFCP and NPIV - limit of 32

2015-08-17 Thread Grzegorz Powiedziuk
Hi! 

I have been told a long time ago that there was a limit on the number of WWPNs one 
may/should have per FCP channel. 
And it was “32”.

Many times I’ve tried to find more information on this, and what I found is: 

- it’s not a hard limit, rather a rule of thumb
- it’s not the mainframe’s limit but rather the fabric’s, where switches don’t have 
fast enough CPUs to serve more than 32 nodes on one port
- it will work with more than 32, but when the channel goes down (for example for 
service) and then comes back, all these nodes try to register to the fabric at the 
same time, and things go bad 

If that is the case ….. why can power7 and power8 officially have up to 64 
nodes per channel!?   (That’s what I was told by AIX admins) 

But anyway, here is my real question. 

Can I have more (many more) WWPNs and devices per FCP channel, but use only up 
to 32, and still be fine?
It seems to be all good if my findings were true. I mean, the switch doesn’t even 
know about a specific WWPN if it doesn’t initiate a connection. 
In other words, is it a limit/rule of 32 active nodes, or of defined WWPNs? 

The point is to have wwpns “reserved” for live guest relocation without wasting 
them if not used.

For example let’s assume we have two not shared FCP channels. One for each LPAR 
and:

LPAR A 
32 virtual machines using 32 FCP devices 
32 FREE FCP devices reserved for LGR from LPAR B

LPAR B
32 virtual machines using 32 FCP devices 
32 FREE FCP devices reserved for LGR from LPAR A

I just want to give the storage people all possible WWPNs that a specific host will 
be using, and might use if it is ever relocated. 
At the same time, I don’t want to waste WWPNs from the precious '32'. 
Of course I know about all the requirements for LGR like device addresses, 
fabric zones etc. 

I hope this makes sense. 
Thanks for any input!
 
Gregory Powiedziuk 


 



Re: zFCP and NPIV - limit of 32

2015-08-17 Thread Grzegorz Powiedziuk
 On Aug 17, 2015, at 11:10 AM, Alan Altmark alan_altm...@us.ibm.com wrote:
 
 On Monday, 08/17/2015 at 10:33 EDT, Grzegorz Powiedziuk 
 gpowiedz...@gmail.com wrote:
 If that’s the case ….. why can power7 and power8 officially have up to 64
 nodes per channel!?   (That’s what I was told by AIX admins)
 
 The number is now 64 on the z13 with the FICON Express16s cards.  The 
 following will be appearing in an upcoming edition of the IOCP book:
 

Bummer .. we run zec12 :( so I guess it is 32. 

 
 Can I have more (many more) wwpns and devices per FCP channel but use 
 only up
 to 32 and I will be fine.
 
 The limit of 32 is on the number of active NPIV subchannels, not the 
 number of active connections being used by a given NPIV sub channel.

“active NPIV subchannels” - what exactly does it mean? If an NPIV device just sits 
there in z/VM as “FREE”, not attached to anything, is it considered to be an 
active NPIV subchannel? I guess it is, but I want to make it clear. 
If yes, then I shouldn’t be doing what I’ve asked about, and I have my answer. 


  From 
 an LGR perspective, you will want to set a unique EQID for a matched set 
 of FCP pchids on system A and B.   This will ensure that LGR won't 
 overload any one FCP path.
 

I thought that the EQID is there so that z/VM knows which FCP device it should pick 
from a target system when a machine is being LGR-ed. I am not sure if I 
understand what you meant by overloading or not overloading. I mean, without an 
EQID it wouldn’t even LGR a virtual machine. Can you elaborate on that? 


 The 32/64 limit is a soft limit, an IBM recommendation.  That is, it's up 
 to you how far you want to push the envelope with respect to error 
 recovery.
 

Thanks Alan! I will definitely keep it safe. 
Gregory 

 Alan Altmark
 
 Senior Managing z/VM and Linux Consultant
 Lab Services System z Delivery Practice
 IBM Systems  Technology Group
 ibm.com/systems/services/labservices
 office: 607.429.3323
 mobile; 607.321.7556
 alan_altm...@us.ibm.com
 IBM Endicott
 


Re: Adding DASD to a Debian guest

2015-08-12 Thread Grzegorz Powiedziuk
At which point are you getting that error? 
If you are doing a fresh install, then just add these dasds to your virtual machine 
before initializing the install process (attach, or better, define minidisks) and let 
the installer do the job. 
The installer should give you an option to add all your dasds to the LVM and 
partition them the way you want. If you do it this way, you shouldn’t have to go 
through all the steps we were talking about. Those steps are only appropriate 
if you want to resize an existing system. 
Or is that exactly what you are doing, but you are still getting an error? 
Gregory

 On Aug 11, 2015, at 9:30 PM, Howard V. Hardiman hvhar...@ncat.edu wrote:
 
 Okay.. Your help and advice is appreciated.  I think I see the picture better 
 now.  As a result, I am now starting a fresh install to do so with LVM.  I 
 can create vg and lv's just fine (I think, yet to test completely).  But I 
 get an error that says that zipl bootloader could not be downloaded onto the 
 device before the installation finishes.  I thought that by creating a /boot 
 partition (100M) on a piece of the dasd not affected by LVM would do the 
 trick.  But I get the same error Am I missing something here?
 
 If I proceed in the installation it says that I can manually boot with the 
 /vmlinuz kernel on partition /dev/dasda1 and root /dev/mapper/vg1-lv1 passed 
 as kernel argument.  How do I do that?  If it works, can I then load zipl or 
 some bootloader that will allow me to be able to ipl the OS like normal?
 
 HH
 -Original Message-
 From: Linux on 390 Port [mailto:LINUX-390@VM.MARIST.EDU] On Behalf Of Stephen 
 Powell
 Sent: Tuesday, August 11, 2015 8:39 PM
 To: LINUX-390@VM.MARIST.EDU
 Subject: Re: Adding DASD to a Debian guest
 
 On Tue, 11 Aug 2015 12:27:08 -0400 (EDT), Grzegorz Powiedziuk wrote:
 
 Make sure that when you restart linux, these dasd will automatically
 show up in /proc/dasd/devices.  Stephen suggested over here creating
 these empty files in /etc/sysconfig/hardware - I don’t know about that.
 I have never done it this way (but I haven’t been using debian in many
 years and things might have changed).
 
 Debian uses sysconfig-hardware to configure the hardware and bring it online 
 at boot time.  Other distributions, SUSE in particular, used to use 
 sysconfig-hardware but don't anymore.  But Debian still does.
 Creating the empty file in /etc/sysconfig/hardware is all that is necessary 
 for a DASD device.  For other devices, a network device for example, the file 
 needs to have configuration data in it.  If you're using a plain vanilla 
 Debian system, rebuilding the initial RAM file system after creating a file 
 in /etc/sysconfig/hardware is not necessary.
 But if you have reconfigured things the way I do it, so that DASD is brought 
 online earlier (as I describe in 
 https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=621080), then rebuilding 
 the initial RAM file system is necessary.  But it never hurts to rebuild the 
 initial RAM file system.
 
 This should go without saying, but I'll say it anyway: the way I do it is not 
 supported by Debian!
 
 I also need to offer the disclaimer that I have never used LVM2 on Debian.
 It's not that I have anything against it: I've just never needed to.
 
 --
  .''`. Stephen Powellzlinux...@wowway.com
 : :'  :
 `. `'`
   `-
 
 NOTICE: This e-mail correspondence is subject to Public Records Law and may 
 be disclosed to third parties. ––



Re: Adding DASD to a Debian guest

2015-08-11 Thread Grzegorz Powiedziuk
It should be possible, depending on what you did so far. 
If your “/“ is on LVM, then you should be able to add the new dasd to its volume 
group and extend the logical volume where “/“ lives. 

Make sure that when you restart linux, these dasd will automatically show up in 
/proc/dasd/devices.
Stephen suggested over here creating these empty files in 
/etc/sysconfig/hardware  - I don’t know about that. I have never done it this 
way (but I haven’t been using debian in many years and things might have 
changed). As far as I remember, adding disks to zipl.conf and running the zipl 
command was sufficient. But I googled it, and it seems like that is something 
that came out with “wheezy” debian, so you might want to follow that instead. 

  cd /etc/sysconfig/hardware
   touch config-ccw-0.0.   (0.0.0201 for example) 
At this point it would be good to rebuild the initramfs
   update-initramfs -uk $(uname -r)
Reboot and make sure the new dasd are there  (cat /proc/dasd/devices or lsdasd) 

Create a new partition on every new disk: 
fdasd /dev/dasdc for example. Then “n” for new, and follow the instructions to 
create a partition using all space on the device. 
Now you should be able to create new physical volumes out of the partitions you’ve 
just created. 

pvcreate /dev/dasdc1 

Run pvscan to see if the new pv is on the list. 

Now you can extend the volume group. 
Run vgdisplay to see what the name of your current VG is, and then 

vgextend NAME_of_vg /dev/dasdc1    - this will add the physical volume dasdc1 on 
top of your current vg 

Now you should be able to extend the size of your root logical volume. 

Run lvdisplay to see what the name of your root logical volume is, and then

lvextend -l +100%FREE NAME_of_root_logical_volume /dev/dasdc1    - this will add 
the free space from dasdc1 on top of your root logical volume 
(note that lvextend needs a size option such as -l or -L; without one it will 
refuse to run)

Now you should be able to extend the size of your ext filesystem: 

resize2fs NAME_of_root_logical_volume  

Repeat the steps for every new dasd. 

That should do it. In sles I was able to run resize2fs on a mounted root 
filesystem; hopefully debian will be happy to do that as well. 
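Putting the steps above together, a compact sketch of the whole sequence. All names here are hypothetical - it assumes the new DASD is device 0.0.0201, comes up as /dev/dasdc, and the volume group / root logical volume are called vg0 / root; substitute your own:

```
chccwdev -e 0.0.0201                          # bring the device online
dasdfmt -b 4096 -d cdl /dev/dasdc             # low-level format (destroys data!)
fdasd -a /dev/dasdc                           # auto-create one partition spanning the disk
pvcreate /dev/dasdc1                          # make it a physical volume
vgextend vg0 /dev/dasdc1                      # grow the volume group
lvextend -l +100%FREE /dev/vg0/root           # grow the logical volume
resize2fs /dev/vg0/root                       # grow the ext3/ext4 filesystem online
```

These commands need root and real (or virtual) DASD behind them, so treat this as a checklist rather than a script to paste.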


Gregory Powiedziuk


 On Aug 10, 2015, at 8:07 PM, Howard V. Hardiman hvhar...@ncat.edu wrote:
 
 Hello,
 
 I am also working on the system in question in the original question.
 
 I'm not used to creating or mounting the partitions using the command line 
 options.  I do that during the install using the text gui.  During that 
 process I partitioned the single dasd for just swap and / .  I'd like to know 
 what it takes to simply add more and 'tack it on to the end' of the existing 
 partition, if that's even possible.
 
 I am able to bring devices online and do the low level format and am able to 
 see the devices in /proc/dasd/devices... But, I could use more detail after 
 that.
 
 Thanks for any help you can provide.
 
 HH
 -Original Message-
 From: Linux on 390 Port [mailto:LINUX-390@VM.MARIST.EDU] On Behalf Of 
 Grzegorz Powiedziuk
 Sent: Thursday, August 6, 2015 3:16 PM
 To: LINUX-390@VM.MARIST.EDU
 Subject: Re: Adding DASD to a Debian guest
 
 Can you see them when you do
 cat /proc/dasd/devices   ?
 If not than first bring them online (chccwdev -e 0.0.) and then check 
 again.
 If they are there, than you are ready to do a low level format with dasdfmt  
 /dev/dasdX (/proc/dasd/devices will tell you which dasdX is that).
 After that, create partitions (or not if you don’t want to) with fdasd 
 /dev/dasdX Later you can create LVM (or not if you don’t want to) with 
 pvcreate, vgcreate, lvcreate.
 Last step is creating a filesystem with mkfs.ext4  (or ext3) on a new 
 partition or logical volume. And now, you can mount it.
 
 But you have to know that at this point you are also rewriting cylinder 0 of 
 this DASD  (if it is really attached) so it’s label will change.
 
 
 Let us know if you need more details
 
 Grzegorz Powiedziuk
 
 
 
 On Aug 6, 2015, at 3:04 PM, Cameron Seay cws...@gmail.com wrote:
 
 of course Debian can't see it until it's in a Linux filesystem. We
 don't know how to format it while in Debian.
 
 


Re: Adding DASD to a Debian guest

2015-08-06 Thread Grzegorz Powiedziuk
in debian, as far as I remember, it was just a matter of adding the new dasd to 
zipl.conf, running zipl, and that’s it. 
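For reference, a hypothetical zipl.conf fragment showing where such a dasd list would live. The device numbers and paths are illustrative only, and on newer distributions the dasd list usually goes in /etc/dasd.conf or udev rules instead of the kernel parameter line:

```
[defaultboot]
default = linux

[linux]
target = /boot
image = /boot/vmlinuz
ramdisk = /boot/initrd.img
parameters = "root=/dev/dasda1 dasd=0201,0202"
```

After editing the file, run zipl so the updated parameter line is actually written to the boot record.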
Gregory Powiedziuk


 On Aug 6, 2015, at 3:17 PM, Mark Post mp...@suse.com wrote:
 
 On 8/6/2015 at 03:04 PM, Cameron Seay cws...@gmail.com wrote: 
 I have attached 3 mod-9s to a guest where Debian is the OS.  Q DASD sees
 the new dasd, but of course Debian can't see it until it's in a Linux
 filesystem. We don't know how to format it while in Debian.
 
 The same as any other mainframe Linux system: dasdfmt.  The bigger question 
 is what tools and configuration files are available/needed to make the 
 volumes persistent across a reboot.
 
 
 Mark Post
 


Re: Adding DASD to a Debian guest

2015-08-06 Thread Grzegorz Powiedziuk
Can you see them when you do
cat /proc/dasd/devices   ? 
If not, then first bring them online (chccwdev -e 0.0.) and then check 
again. 
If they are there, then you are ready to do a low level format with dasdfmt  
/dev/dasdX (/proc/dasd/devices will tell you which dasdX that is). 
After that, create partitions (or not, if you don’t want to) with fdasd 
/dev/dasdX
Later you can create LVM (or not, if you don’t want to) with pvcreate, vgcreate, 
lvcreate. 
The last step is creating a filesystem with mkfs.ext4 (or ext3) on the new partition 
or logical volume. And now you can mount it. 

But you have to know that at this point you are also rewriting cylinder 0 of 
this DASD (if it is really attached), so its label will change. 
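The sequence above, condensed into one transcript. The device number 0.0.0201, the device node /dev/dasdb, and the mount point /data are all hypothetical - check lsdasd output for the real names on your system:

```
chccwdev -e 0.0.0201               # bring the DASD online
lsdasd                             # confirm which /dev/dasdX it became
dasdfmt -b 4096 -d cdl /dev/dasdb  # low-level format (rewrites cylinder 0!)
fdasd -a /dev/dasdb                # create one partition covering the disk
mkfs.ext4 /dev/dasdb1              # filesystem on the new partition
mkdir -p /data
mount /dev/dasdb1 /data            # and now you can use it
```

Skip the pvcreate/vgcreate/lvcreate steps shown here only if you do not want LVM; otherwise insert them between fdasd and mkfs.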


Let us know if you need more details

Grzegorz Powiedziuk



 On Aug 6, 2015, at 3:04 PM, Cameron Seay cws...@gmail.com wrote:
 
 of course Debian can't see it until it's in a Linux
 filesystem. We don't know how to format it while in Debian.




Re: Linux 7.1 on z/VM 6.3

2015-07-24 Thread Grzegorz Powiedziuk
 manually, I get a box that comes up and says my connection
 was gracefully closed and everything dies.  Has anyone ran into this issue?
 
  For grins I reloaded everything and left the partitioning set to auto
 and it successfully installed, but I need to be able to setup my own
 partitioning, so this is not an option for me.
 
 
 
  Rick
 
 
 
  -Original Message-
 
  From: Linux on 390 Port [mailto:LINUX-390@VM.MARIST.EDU] On Behalf
  Of
 
  Filipe Miranda
 
  Sent: Monday, July 20, 2015 10:44 AM
 
  To: LINUX-390@VM.MARIST.EDUmailto:LINUX-390@VM.MARIST.EDU
 
  Subject: Re: Linux 7.1 on z/VM 6.3
 
 
 
  Hello,
 
 
 
  RHEL7.x can use the same “installation style” (parm file and conf
 file) as before with a few minor changes (keep in mind that this will be
 deprecated in the future), or the new “installation layout” (just the parm
 file).

  The following slide deck shows examples of how the new parm file
  should look:

  http://people.redhat.com/fmiranda/systemz/slide_decks/Special%20Events/IBM%20T3%20Bobligen%20Germany%202015%20v2.pdf
 
 
 
  Kind Regards,
 
 
 
  Filipe Miranda
 
  fmira...@redhat.commailto:fmira...@redhat.com
 
 
 
  On Jul 17, 2015, at 7:17 AM, Grzegorz Powiedziuk 
 gpowiedz...@gmail.commailto:gpowiedz...@gmail.com wrote:
 
 
 
  2015-07-17 9:55 GMT-04:00 Beard, Rick (Atos) rick.be...@xerox.com
 mailto:rick.be...@xerox.com:
 
 
 
  Gregory,
 
 
 
  Looks like you're using the kickstart method to install.  I just
  load the install into memory, then SSH into the new image and issue
  install at the logon, which allows me to pick the language and
  then point to the NFS share where the packages exist.  The next step
  has me connect to the new image using VNC, and I do the
  install configuration via a GUI, then reboot when the install is
  done.  For years I have been using the following PARM file.
 
 
 
  root=/dev/ram0 ro ip=off ramdisk_size=4
 
  CMSDASD=191 CMSCONFFILE=xx.CONF-RH7
 
 
 
  That is correct. I think you should be able to modify my parmfile
 
  for your needs. So in your case, the following parmfile should work:
 
 
 
  ro ramdisk_size=4 cio_ignore=all,!condev
 
  CMSDASD=191
 
  CMSCONFFILE=xxx.conf
 
  inst.repo=http://xxx.xxx.xxx.xxx  (I guess you can skip it if you
  specify it in the conf file) vnc vncpassword=redhat
 
 
 
 
 
 
 
 
 
  And here is the .CONF-RH7 file:
 
 
 
  DASD=200,204
 
  HOSTNAME=..com
 
  NETTYPE=qeth
 
  IPADDR=xxx.xxx.xxx.xxx
 
  SUBCHANNELS=0.0.0500,0.0.0501,0.0.0502
 
  NETMASK=xxx.xxx.xxx.xxx
 
  SEARCHDNS=.com
 
  METHOD=nfs:x:/var/opt/rhel71
 
  GATEWAY=xxx.xxx.xxx.xxx
 
  DNS=xxx.xxx.xxx.xxx
 
  MTU=1492
 
  PORTNAME=
 
  PORTNO=0
 
  LAYER2=0
 
  that looks fine. I have double quotes but that shouldn't matter
 
 
 
  Is there anything else you are getting before the error output
 
 you've presented? Does it even load the kernel and print the kernel
 parameters?
 
 
 
  Gregory Powiedziuk
 
 
 

Re: Linux 7.1 on z/VM 6.3

2015-07-17 Thread Grzegorz Powiedziuk
Oh yes, these @@@ are just used by Linux scripts to find the
right string before running sed on it. The final output that goes to a CMS
disk has real values and is translated to EBCDIC.

Gregory Powiedziuk

2015-07-17 9:56 GMT-04:00 Alan Altmark alan_altm...@us.ibm.com:

 On Friday, 07/17/2015 at 09:33 EDT, Grzegorz Powiedziuk
 gpowiedz...@gmail.com wrote:
  I've installed dozens of rhel70 and rhel71
  here are my templates for 7.1 (I am using my own homegrown web interface
 for
  auto and easy virtual machines creation and rhel installation without
 even
  touching 3270 )

 When I hear hoofbeats, I always think 'zebras!'  Since your config files
 are coming from CMS, make sure that the @ symbol is being translated
 correctly by cmsfs.  There are a lot of EBCDIC code pages that do not have
 @ as 0x7C.

 Of course, if you've ssh'd into the Linux guest, looked at the config
 file, and have seen '@', then you're ok in that respect.

 Alan Altmark

 Senior Managing z/VM and Linux Consultant
 Lab Services System z Delivery Practice
 IBM Systems  Technology Group
 ibm.com/systems/services/labservices
 office: 607.429.3323
 mobile; 607.321.7556
 alan_altm...@us.ibm.com
 IBM Endicott



Re: Linux 7.1 on z/VM 6.3

2015-07-17 Thread Grzegorz Powiedziuk
2015-07-17 9:55 GMT-04:00 Beard, Rick (Atos) rick.be...@xerox.com:

 Gregory,

 Looks like you're using the kickstart method to install.  I just load the
 install into memory, then SSH into the new image and issue install at the
 logon, which allows me to pick the language and then point to the NFS share
 where the packages exist.  The next step has me connect to the new image
 using VNC, and I do the install configuration via a GUI, then reboot when
 the install is done.  For years I have been using the following PARM file.

 root=/dev/ram0 ro ip=off ramdisk_size=4
 CMSDASD=191 CMSCONFFILE=xx.CONF-RH7



That is correct. I think you should be able to modify my parmfile for your
needs. So in your case, the following parmfile should work:

ro ramdisk_size=4 cio_ignore=all,!condev
CMSDASD=191
CMSCONFFILE=xxx.conf
inst.repo=http://xxx.xxx.xxx.xxx  (I guess you can skip it if you specify
it in the conf file)
vnc vncpassword=redhat




 And here is the .CONF-RH7 file:

 DASD=200,204
 HOSTNAME=..com
 NETTYPE=qeth
 IPADDR=xxx.xxx.xxx.xxx
 SUBCHANNELS=0.0.0500,0.0.0501,0.0.0502
 NETMASK=xxx.xxx.xxx.xxx
 SEARCHDNS=.com
 METHOD=nfs:x:/var/opt/rhel71
 GATEWAY=xxx.xxx.xxx.xxx
 DNS=xxx.xxx.xxx.xxx
 MTU=1492
 PORTNAME=
 PORTNO=0
 LAYER2=0


That looks fine. I have double quotes, but that shouldn't matter.

Is there anything else you are getting before the error output you've
presented? Does it even load the kernel and print the kernel parameters?

Gregory Powiedziuk



Re: Linux 7.1 on z/VM 6.3

2015-07-17 Thread Grzegorz Powiedziuk
I've installed dozens of RHEL 7.0 and 7.1 systems.
Here are my templates for 7.1 (I am using my own homegrown web interface for
automated, easy virtual machine creation and RHEL installation without even
touching a 3270):

conf file template:
HOSTNAME=@@@hostname@@@
DASD=200
NETTYPE=qeth
SUBCHANNELS=0.0.2000,0.0.2001,0.0.2002
IPADDR=@@@ip_address@@@
NETMASK=@@@netmask@@@
GATEWAY=@@@gw@@@
PORTNO=0
LAYER2=1
VSWITCH=0
DNS=@@@dns1@@@:@@@dns2@@@
MTU=1500

parm file template:
ro ramdisk_size=4 cio_ignore=all,!condev CMSDASD=192
CMSCONFFILE=@@@hostname@@@.conf RUNKS=1
inst.ks=http://10.50.223.250/vmwiz/punch_files/@@@hostname@@@.ks

kickstart file (which is hosted on 10.50.223.250/vmwiz/punch_files in my
case):

#version=RHEL7.1
# System authorization information
auth --enableshadow --passalgo=sha512
cmdline
reboot
# Use network installation
url --url=http://10.50.223.250/rhel71
# Run the Setup Agent on first boot
firstboot --enable
ignoredisk --only-use=dasda
# Keyboard layouts
keyboard --vckeymap=us --xlayouts='us'
# System language
lang en_US.UTF-8

# Network information
network  --bootproto=static --device=enccw0.0.2000 --gateway=@@@gw@@@
--ip=@@@ip_address@@@ --mtu=1500 --nameserver=@@@dns1@@@,@@@dns2@@@
--netmask=@@@netmask@@@ --noipv6 --activate
network  --hostname=@@@hostname@@@
# Root password
rootpw --iscrypted BLABLABLA
# System timezone
timezone America/New_York
# System bootloader configuration
bootloader --location=mbr --boot-drive=dasda
# Partition clearing information
clearpart --all --initlabel
zerombr
# Disk partitioning information
# part pv.23 --fstype=lvmpv --ondisk=dasda --size=3071

part /boot --fstype=ext4 --ondisk=dasda --size=120
part pv.23 --fstype=lvmpv --ondisk=dasda --size=1 --grow
#  /boot no longer works in LVM in Redhat 7.1
volgroup rhel_@@@hostname@@@ --pesize=4096 pv.23
logvol /  --fstype=ext4 --size=2946 --name=root --vgname=rhel_@@@hostname@@@

%packages
@core
wget
%end

Gregory Powiedziuk



2015-07-17 8:38 GMT-04:00 Beard, Rick (Atos) rick.be...@xerox.com:

 It's got plenty of memory and CPU.  I have built a new z/VM 6.3 SSI system
 and only have one guest running in it so far on Red Hat Linux version 6.5.
 I wanted to see about running version 7.1, but it appears Red Hat must have
 made some changes on how to install it as my PARM file, that I have been
 using for years on multiple versions, is not working on this version.  I
 have a case open with them and will let you know what I find out.

 Rick

 -Original Message-
 From: Linux on 390 Port [mailto:LINUX-390@VM.MARIST.EDU] On Behalf Of
 Mark Post
 Sent: Thursday, July 16, 2015 1:56 PM
 To: LINUX-390@VM.MARIST.EDU
 Subject: Re: Linux 7.1 on z/VM 6.3

  On 7/16/2015 at 01:41 PM, Rick Troth r...@casita.net wrote:
  How much memory?

 I would be more interested in knowing how much CPU is available.  It
 sounds like the guest isn't getting much, if any, of a time slice to run in.


 Mark Post




