Re: Remove DASD device
On 05/30/2017 02:50 PM, Marcy Cortes wrote:
> Did you do a "vgreduce VGname /dev/dasd*"
>
> If you missed that step, you can probably fix it with
> "vgreduce --removemissing VGname"

Hi Marcy,

Yes, I did that, followed by pvremove on each device.

> You'll want to get rid of them from linux too if you haven't already.
> "dasd_configure 0.0.xxxx 0 0" should do it.

Thanks. That did it! I'll include this step from now on!

Thanks!
Jorge

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390
or visit http://www.marist.edu/htbin/wlvindex?LINUX-390
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/
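For anyone finding this thread in the archives: a sketch of Marcy's cleanup step scripted over several devices. The bus IDs 0.0.0201/0.0.0202 are made up (list your real ones with "lsdasd"), and the script only builds and prints the commands so they can be reviewed before running for real.

```shell
#!/bin/sh
# Hypothetical DASD bus IDs -- substitute your own (see "lsdasd").
DEVICES="0.0.0201 0.0.0202"

# Build the command list and print it for review; pipe the output
# through "sh" to actually run it.
plan=""
for dev in $DEVICES; do
    # dasd_configure <busid> <online> [use_diag]: online=0 takes the
    # device offline, which also removes its /dev/dasd* nodes.
    plan="$plan
dasd_configure $dev 0 0"
done
echo "$plan"
```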
Remove DASD device
Hi,

We have a situation with two SLES 11 servers. We had to migrate the underlying PVs used for the swap logical volume (we used pvmove to move from DASD to FCP LUNs) and then we removed the DASD PVs from the volume group (followed by pvremove on them to wipe the LVM metadata).

After that I called my z/VM admin and asked him to remove the corresponding DASD devices. He did, but I still have the device files associated with them (/dev/dasd*)... The problem is that whenever I type any LVM command, the command gets stuck (doesn't return the prompt). I guess vgscan is getting stuck reading these non-existent DASD device files.

I could fix this by excluding these devices (LVM filter) or by rebooting, but I'm wondering if there's a way to remove these DASD device files dynamically? For regular SCSI devices one usually performs:

echo offline > /sys/block/$DISK/device/state
echo 1 > /sys/block/$DISK/device/delete

...but I don't see "state" nor "delete" under /sys/block/dasd*/device/* and Google didn't help that much.

Thanks in advance!

Regards,
Jorge
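For the archives: the DASD analogue of the SCSI "delete" trick turns out to be the CCW device's online attribute (the dasd_configure command mentioned in the reply wraps the same mechanism). A sketch with a made-up bus ID; the write is guarded so it is harmless on a box without that device.

```shell
#!/bin/sh
# Hypothetical DASD bus ID -- list real ones under /sys/bus/ccw/devices.
dev=0.0.0201
attr="/sys/bus/ccw/devices/$dev/online"

# Writing 0 takes the CCW device offline and removes its /dev/dasd*
# nodes, so LVM scans stop touching the dead device.
if [ -w "$attr" ]; then
    echo 0 > "$attr"
    result="took $dev offline"
else
    result="no such DASD here: $dev"
fi
echo "$result"
```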
Re: Oracle DB Certification on SLES12
On 05/11/2017 08:02 PM, Dominic Coulombe wrote:
> Is this what you're looking for?
>
> http://www-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/FLASH10877

Hi Dominic,

You've nailed it! Thanks for the link! I had no idea about these Techdocs Library Flashes!

I'm glad to hear it's finally certified! We'll be testing it shortly.

Thank you!

Regards,
Jorge
Oracle DB Certification on SLES12
Hi everyone,

Does anyone here know if we will ever see Oracle certify its DB against SLES 12 on s390x? It has been a while since they certified it against x86.

If I ask IBM about it, they (appropriately) tell me to ask Oracle. If I ask Oracle, they tell me to open a support case. If I open a case, they just send me to the current certification matrix. It's a mystery.

We'd like to know about this before SLES 11's General Support ends in 2 years.

Thanks!

Best regards,
Jorge
Re: SLES 12 - to btrfs or not to btrfs
On 08/17/2016 12:14 PM, Mark Post wrote:
> I'm hoping that when people are saying "SLES12" they are really
> meaning "SLES12 SP1 or later." SLES12 GA has been out of support for
> a while now.

Oh, definitely. For sure :) As soon as we jump in we'll grab the latest...

> With SLES12, zipl really doesn't play much of a role. You still need
> it, but it only gets executed when grub2 gets [re-]installed. That's
> why you won't see a /etc/zipl.conf any more.

Great to know.

> Something that _is_ a consideration are btrfs subvolumes. The way
> the rollback feature works, everything that is in a subvolume of / is
> _not_ included in the snapshots and hence cannot be rolled back.
> That's why you see things like /var/log, /home, /opt and so on as
> subvolumes. So, don't go creating subvolumes without keeping in mind
> they won't be part of a rollback.

Ok, so we'll leave the defaults then - at least for the OS stuff.

Thank you Mark, as always, for your feedback!

Regards,
Jorge
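A quick follow-up for the archives: to see which paths fall outside the rollback scope on such a system, you can simply list the subvolumes of /. This uses the standard btrfs-progs command; the fallback message is just so the sketch behaves on machines without a btrfs root.

```shell
#!/bin/sh
# Subvolumes of / are excluded from snapper's root snapshots, so any
# path listed here will NOT be covered by a rollback.
out=$(btrfs subvolume list / 2>/dev/null)
[ -n "$out" ] || out="no btrfs subvolumes here (not a btrfs root, or btrfs-progs missing)"
echo "$out"
```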
Re: SLES 12 - to btrfs or not to btrfs
On 08/16/2016 02:45 PM, Marcy Cortes wrote:
> Was wondering what other people have decided to do for their file systems
> in SLES 12.
> Stable tried and true ext3 or new function (and more space) with btrfs.

I'm looking forward to btrfs on SLES 12; specifically its snapshot capabilities and how we can use them to perform system rollbacks (the integration of zypper, snapper & GRUB2 to accomplish this).

Hmm, now that I mention GRUB2: is this system rollback functionality & integration with the other tools supported with zIPL?

Regards,
Jorge
Re: SLES 12 - to btrfs or not to btrfs
On 08/16/2016 06:20 PM, Rick Troth wrote:
> The problem with EXT3/4 is mostly that the EXT*FS family has fallen out
> of popularity. They're rock solid. They just work.

Is ext4 available on SLES 12 (s390x)?

We haven't tried SLES 12 yet as we're waiting for Oracle to certify its database against it (don't know why it's taking them so long).

Regards,
Jorge
Re: SLES 11 SP4 - Kernel locking up
On 08/03/2016 05:30 PM, Marcy Cortes wrote:
> Yes, we saw it and someone else here did last week too.
> SUSE has a test fix for it.
>
> Has to do with running 32bit programs. ILMT was the one that we discovered
> it with.

Thanks Marcy! I googled it for a while but couldn't find it. I'll search the list more carefully next time :)

I guess I'll ask for the patch right away!

All the best,
Jorge
SLES 11 SP4 - Kernel locking up
Hi everyone,

We're having some kernel locking issues after installing the latest kernel for SLES 11 SP4 (kernel-default-3.0.101-77.1). This is on two separate Linux guests and it happens - right away - after starting some network applications: Linux completely freezes. If we go back to the previous kernel, everything runs fine.

I've opened a case with SUSE & we'll be uploading a z/VM dump of the guests, but wondered if anyone else has seen this before (recently)?

Thanks!

--
Jorge
Re: New LUNs on SLES 11 SP4
On 01/24/2016 07:37 PM, Mark Post wrote:
> I haven't tried it myself, but perhaps the zfcp_san_disc command would be
> of use here.

Thanks Mark. That's a new one to me. I'll try it the next time I have an opportunity with changes on the fabric (other than new LUNs).

Regards,
Jorge
Re: New LUNs on SLES 11 SP4
On 01/25/2016 07:47 PM, Raymond Higgs wrote:
> Zfcp_san_disc, and many other tools typically only display what Linux
> already knows about. If you start with the device offline, then it'll
> bring the device online and show you what is currently in the fabric. If
> you start with the device online, then it shows what is in /sys and that's
> it. It doesn't try to update anything.

I didn't know this. Great to know.

> Each device has a file that can be used to rediscover changes in the
> fabric:
>
> echo 1 > /sys/bus/ccw/drivers/zfcp/<device>/port_rescan
>
> This is nondisruptive. You don't have to toggle the device off and on.

Got it. Will do from now on.

> I don't have access to an SP4 system at the moment to see if any of the
> tools use port_rescan.

I checked all of them (zfcp_* & rescan-scsi-bus.sh) since they're all shell scripts, and none of them contain the keyword "port_rescan". I guess doing the "echo 1 > ..." is the way to go to pick up changes in the fabric.

Thanks Ray for your time & the very informative reply!

Best regards,
Jorge
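The per-device echo from Ray's reply, looped over every zfcp device on the system - just a sketch; the writability check means the loop simply does nothing on a machine without zfcp devices.

```shell
#!/bin/sh
# Trigger a nondisruptive fabric rescan on every zfcp device present.
found=0
for f in /sys/bus/ccw/drivers/zfcp/*/port_rescan; do
    [ -w "$f" ] || continue
    echo 1 > "$f"
    found=1
done
if [ "$found" -eq 1 ]; then
    echo "port rescan triggered"
else
    echo "no zfcp devices found"
fi
```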
Re: New LUNs on SLES 11 SP4
On 01/23/2016 02:29 PM, Offer Baruch wrote:
> Did you try simply taking the fcp devices offline and then online again?
> That should be enough... (although that shouldn't be necessary in the
> first place).

Hi,

No, I haven't for these initial deployments, but now that you mention it, I remember once we had to recognize some new "targets" (new LUNs were presented thru some new ports on the storage array) and rescan-scsi-bus.sh didn't help. It wasn't until we turned the FCP devices off & on (thru zfcp_host_configure) that we were able to see these LUNs.

I guess this is what's happening: we enabled both FCPs when there weren't any zones created yet. It appears that rescan-scsi-bus.sh only works to recognize new LUNs thru *existing* known targets (but if targets aren't recognized, nothing shows up).

In a nutshell, on a new deployment this seems to be the way to go:

1) FCP devices are configured for the guest
2) FCP devices are put online & configured for startup activation (zfcp_host_configure)
3) WWPNs are gathered to request zones & LUN assignment
4) zones are created & LUNs presented
5) some MISSING COMMAND here to recognize these new targets
6) rescan-scsi-bus.sh to see these LUNs

There's got to be something better than offlining/onlining the FCP devices for step #5 here. The question still remains: shouldn't rescan-scsi-bus.sh pick up these new targets as well?

Thanks,
Jorge
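The sequence above, scripted as a dry run. The FCP bus IDs 0.0.fc00/0.0.fd00 are invented, and port_rescan as the candidate for step #5 comes from Ray Higgs' reply elsewhere in this thread. The script only builds and prints the plan rather than executing it:

```shell
#!/bin/sh
# Hypothetical FCP bus IDs -- substitute your own.
FCP_DEVS="0.0.fc00 0.0.fd00"

plan=""
for dev in $FCP_DEVS; do
    # step 2: bring the FCP device online & persist across reboots
    plan="$plan
zfcp_host_configure $dev 1"
    # step 5 (candidate): rediscover targets after zoning/LUN masking
    plan="$plan
echo 1 > /sys/bus/ccw/drivers/zfcp/$dev/port_rescan"
done
# step 6: pick up the LUNs behind the (re)discovered targets
plan="$plan
rescan-scsi-bus.sh"
echo "$plan"
```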
New LUNs on SLES 11 SP4
Hi,

We have an issue with newly deployed systems from our SLES 11 SP4 template (they all have "zfcp.allow_lun_scan=1" in their zipl.conf file).

Whenever the z/VM admin deploys a new image, he assigns two FCP devices. After that we take control of the system and:

1) verify the FCP devices are present
2) enable them via "zfcp_host_configure 0.0.fxxx 1"
3) take note of their WWPNs and request LUNs from our storage team

The problem is that after they assign LUNs, we'll do a "rescan-scsi-bus.sh" and nothing shows up. "lsluns" & "multipath -ll" don't show anything either. It is not until a reboot of the system that we start seeing these LUNs. The strange thing is that after that initial reboot, whenever the storage team assigns extra LUNs, we see them right away with rescan-scsi-bus.sh.

I know zfcp_host_configure places the necessary FCP activation rules in the udev directory, but I thought it would load all the necessary modules as well (so there was no need for a restart). Does anyone know what we are missing? How can we avoid that initial reboot after activating the HBAs?

Thanks,
Jorge
Re: Compression - Offloading
On 09/17/2015 09:15 AM, Jorge Fábregas wrote:
> Is there some specialty processor in the mainframe implementing either
> gzip, bzip2 or lzma that one could offload compression tasks to?

Thank you guys for pointing me to zEDC. That's indeed what I was looking for. We have the zEC12 but I was told we don't have the zEDC card :(

Thanks!
Jorge
Compression - Offloading
Hi,

I'm new to Linux on Z. I'm setting up a syslog server which is going to receive a bunch of data and, of course, I'll be using logrotate to rotate logs & *compress* them.

I want to be a good neighbor when it comes to processing time, so: is there some specialty processor in the mainframe implementing either gzip, bzip2 or lzma that one could offload compression tasks to? If so, which Linux utility or configuration do I need for that?

Thanks,
Jorge
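Short of hardware offload, the software-only way to be a good neighbor is to lower the scheduling priority and the compression level. A small self-contained demo of the idea:

```shell
#!/bin/sh
# Without offload hardware, compression burns CPU; the friendliest
# knobs are scheduling priority and the gzip level (-1 is far cheaper
# than the default -6 or the maximum -9).
sample=$(mktemp)
seq 1 5000 > "$sample"

# nice -n 19: lowest CPU priority, so neighbors barely notice.
nice -n 19 gzip -1 -c "$sample" > "$sample.gz"

gzip -t "$sample.gz" && result="compressed OK"
echo "$result"
rm -f "$sample" "$sample.gz"
```

In logrotate this maps to the standard compresscmd and compressoptions directives (e.g. compressoptions -1), so the rotation job itself stays cheap.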
Re: Watchdog Timeout Value [SOLVED]
On 09/15/2015 05:43 PM, Neale Ferguson wrote:
> The underlying CP service Diagnose x'288' takes a parameter specifying the
> time bomb interval in seconds. So it's a function of the module.
>
> According to the source code of vmwatchdog.c there is an IOCTL that may be
> issued that will change the default of 60 to whatever value you care to
> put there. So you will need to write a small program that can issue the
> ioctl with the WDIOC_SETTIMEOUT option. You will need to #include
> <linux/watchdog.h>

Hi Neale,

Thanks for the feedback! After your post I decided to search about IOCTLs and remembered seeing that in the logs (my watchdog application is SBD, used for self-fencing, on SUSE High Availability). Now that I read the logs carefully, I see this:

ERROR: WDIOC_SETTIMEOUT: Failed to set watchdog timer to 5 seconds.: Invalid argument

That's what I saw originally; then I did a "modinfo vmwatchdog", saw that there were no parameters to set the timeout, and said to myself "ahh, that's why SBD can't set it". I had no idea about an IOCTL! Well, it turns out that's the mechanism used by SBD (so I didn't have to write the small C program).

I kept thinking about that "invalid argument" part. It turns out the minimum allowed value is 15 seconds! I've recreated my SBD device (setting it to 15 seconds) and it worked; changed it to 14 seconds and it fails. So that was it. I'm perfectly fine with 15 seconds.

Thanks a bunch for the info & hints that guided me to the solution!

All the best,
Jorge
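For completeness in the archive: the knob on the SBD side is the -4 (watchdog timeout) option at create time, with -1 being msgwait, conventionally about twice the watchdog timeout. The device path below is hypothetical, and the command is printed rather than run, since "create" rewrites the device header.

```shell
#!/bin/sh
# Hypothetical shared LUN dedicated to SBD.
SBD_DEV=/dev/disk/by-id/example-sbd-lun

# vmwatchdog rejects WDIOC_SETTIMEOUT below 15 seconds, so -4 must be
# >= 15 on z/VM guests; msgwait (-1) is set to roughly double that.
cmd="sbd -d $SBD_DEV -4 15 -1 30 create"
echo "$cmd"
```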
Re: Watchdog Timeout Value
On 09/14/2015 06:31 PM, Jorge Fábregas wrote:
> If I can't change it via the module, is this timeout configurable at the
> z/VM level (so I can tell the z/VM admin)?

Bump :)
Watchdog Timeout Value
Hi,

I need to use the watchdog provided by z/VM (with the vmwatchdog kernel module) but I've just found out its "timeout value" isn't configurable. There's no module parameter for that. It's hardcoded at 1 minute (I had plans to change it to 5 or 10 seconds).

If I can't change it via the module, is this timeout configurable at the z/VM level (so I can tell the z/VM admin)?

Thanks!
Jorge

P.S. I don't want to use softdog :)
snIPL vs SBD for STONITH
Hi,

I'm about to start a High-Availability project with SLES 11 SP4 where I plan to use snIPL (to fence Linux guests via z/VM). I know there's also SBD (storage-based fencing), which SUSE seems to promote a lot, and wondered if anyone here has experience with one or the other (which one do you think is better). Please share.

Thanks!

--
Jorge
tuned-adm for SLES 11?
Hi,

I'm new on the list, so hello to everyone!

Is there something similar to RHEL's tuned-adm in SLES 11? This is a tool that includes some predefined profiles for typical use cases (virtual guest, storage server, desktop, etc.) and, when you choose a profile, it goes out and tweaks all the tunables/configuration: things like the I/O scheduler, vm.swappiness, etc.

If there isn't such a thing, are there any major changes you make after an installation on z/VM? I'll be preparing a new SLES 11 SP4 image (to be used as a golden image) and would like to tweak it accordingly. I found the IBM z/VM Linux Cookbook (for SLES 11 SP1) but it's like 90% z/VM stuff :(

Maybe I'm worrying too much and the System z SLES installer already produces a system with reasonable defaults (with no further tweaking needed). If anyone knows a good document, please let me know.

Thanks!

--
Jorge
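In case it helps someone searching later: absent a tuned-adm equivalent, the handful of knobs those profiles touch can be set by hand. A dry-run sketch - the values are illustrative, not SUSE recommendations, and dasda is a placeholder device name; the script only prints the plan.

```shell
#!/bin/sh
# A few of the tunables a "virtual-guest"-style profile typically
# adjusts; printed for review rather than applied.
plan="sysctl -w vm.swappiness=10
sysctl -w vm.dirty_ratio=30"

# Per-device I/O scheduler (placeholder device name).
plan="$plan
echo deadline > /sys/block/dasda/queue/scheduler"
echo "$plan"
```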