Interesting alternative to putty

2022-09-05 Thread David Boyes
Stumbled across an interesting PuTTY alternative:

https://github.com/kingToolbox/WindTerm

VERY good VTxxx emulation (good enough for use with OpenVMS, which seriously 
exploits VTxxx features) and approximately double the file transfer speed.

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www2.marist.edu/htbin/wlvindex?LINUX-390


Re: Running SLES9 on z14 under zVM 6.4

2021-10-16 Thread David Boyes

On 12/8/20, 5:42 PM, "Linux on 390 Port on behalf of Paul Gilmartin" 
 wrote:
>> z/VM doesn't hide the hardware.  In just about all cases, if it won't run
>> in an LPAR, it also won't run under z/VM.
> I recall a counterexample was OpenSolaris.  It wouldn't run in
> an LPAR, but only under VM.  I suspect the matter was not "hid[ing]
> the hardware" but required CP services not available from the hardware.

FWIW, with OpenSolaris, it was a deliberate decision on our part to require 
z/VM, mostly because it made it easier/simpler/nicer to implement disk and 
network device support. It was a lot easier to let z/VM handle real cylinder 0 
and allocate/treat minidisks as sequences of blocks with all the standard IBM 
labels and info present (DIAG 250 allowed CMS formatted/reserved minidisks to 
map more easily to traditional Unix block I/O devices, and we took advantage of 
the caching infrastructure and system instrumentation already present in CP to 
better integrate with the idea of running lots of production Solaris guests in 
virtual machines), and to write network device drivers via DIAG 2A8, since IBM 
didn't want to document the low-level hardware details of how OSAs worked at 
that point in time. 

With Solaris ZFS, the small size of traditional Z ECKD disks and the limitation 
on minidisk size were irrelevant, and it played well with the hardware without 
having to resort to special tricks to optimize around the Z I/O architecture. 
All the available VM backup and performance tools already understood that 
environment at the time, and we didn't need to invent separate tools to prepare 
disk media or deal with Z-specific hardware problems. 

Wrt the discussion at hand, IIRC, if you were in a z/Arch machine, it IPLed 
directly into z/Arch mode, otherwise we had OpenSolaris initially IPL in ESA 
mode and then programmatically upgrade to z/Architecture mode ASAP thereafter, 
like the PoOps at the time said to do. We didn't worry about LPAR mode or bare 
metal because a) we didn't want it, b) nobody else wanted it, and c) we're 
VMers and already have a superior solution. It wouldn't have been able to do 
disk or network I/O in its current form, but someone writing the necessary 
device drivers would solve that problem.

Thinking about it, it probably wouldn't be a huge lift to make OpenSolaris run 
on LPAR/bare metal now if we had to, but somebody would still have to convince 
me why it's a good idea if z/VM is available.

It would be very interesting to see how it would behave in a modern SSI 
environment - I think the approach we took would make it possible to migrate
it more easily than Linux.




Re: CLEF OS update (yum)

2021-03-08 Thread David Boyes
Should have been. I'll poke the infrastructure guys.



Re: CLEF OS update (yum)

2021-02-12 Thread David Boyes
We're in the middle of moving to ARIN-registered IP addresses in production and 
a few hiccups have surfaced with some of our Kerberos infrastructure that got 
resolved midday-ish. Give it a few hours for the old DNS entries to time out, 
and everything should be ok.

On 2/12/21, 1:15 PM, "Linux on 390 Port on behalf of Frank M. Ramaekers" 
 wrote:

Hmmm...is there something wrong with sinenomine.net?




Re: RHEL 8.3 install stuck

2020-12-09 Thread David Boyes
Update your phone to the current iOS. This is a known bug in iOS 14 GA.

You'd think after a while, they'd quit fooling with their MIME implementation. 
Ain't broke, don't fix it.

On 12/8/20, 1:20 PM, "Linux on 390 Port on behalf of Paul Gilmartin" 
 wrote:

Your mailer or your configuration is broken.  All I see in my
viewer is the Velocity logo .gif because that's the favored
alternative:



Re: Running SLES9 on z14 under zVM 6.4

2020-12-09 Thread David Boyes
On 12/8/20, 5:42 PM, "Linux on 390 Port on behalf of Paul Gilmartin" 
 wrote:

> On 2020-12-08, at 14:12:21, Bruce Hayden wrote:
>
> z/VM doesn't hide the hardware.  In just about all cases, if it won't run
> in an LPAR, it also won't run under z/VM.

>I recall a counterexample was OpenSolaris.  It wouldn't run in
>an LPAR, but only under VM.  I suspect the matter was not "hid[ing]
>the hardware" but required CP services not available from the hardware.

Bruce is correct. Without ESA/390 mode support in the processor, something as 
old as SLES9 won't run reliably as a VM guest. VM doesn’t simulate support 
that's not in the hardware and I think you'll spend a lot of time chasing 
snarks when things break unexpectedly. Probably time to just bite the bullet 
and build new systems.

Reflecting on OpenSolaris, we deliberately and intentionally wrote the code to 
exploit VM services because at the time we felt that LPARs were insufficiently 
flexible and too difficult to manage to be cost effective, and anyone running 
it probably also had VM to support Linux. Nothing to do with the hardware (we 
avoided anything that depended on hardware level where we could so it would run 
on the maximum number of systems); it was easier to do that way. DIAG 250 made 
disk support a lot simpler, and the DIAG 2A8 networking did the job after IBM 
declined to give us enough information on the OSA to write a proper driver. We 
were able to exploit a lot of the prior wisdom on running operating systems 
efficiently as VM guests; no point in wasting effort re-inventing the wheel 
when there was an easier way right in front of us.

In retrospect, I wish OpenSolaris had taken off. It would have saved a lot of 
the recent annoyance we seem to be experiencing with the changes in device 
naming and bloat of the Linux environment (and dealing with the whole systemd 
aberration). Solaris is a production grade Unix system that works well - ZFS 
would have been a game changer at the time.



Re: Modify ifcfg-encw0.0.nnnn at DR

2020-10-07 Thread David Boyes
On 10/7/20, 12:53 PM, "Linux on 390 Port on behalf of Alan Altmark" 
 wrote:

> I have talked up DHCP's ability to use a user ID instead of a 
> MAC address

Option 61 (the DHCP option that allows the string option) can be any unique 
arbitrary string. It can be used to request information for point-to-point 
links that are not broadcast capable as long as there is one broadcast-capable 
address available to locate the DHCP server. The DHCP server gets the string 
and does its thing so 'mysystem-hsi1' or 'mysystem-ctc0' is a perfectly valid 
use of the capability. 
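For concreteness, a minimal sketch of the client side, assuming ISC dhclient; 
the interface and identifier names here are invented for illustration:

```
# /etc/dhcp/dhclient.conf -- hypothetical names, ISC dhclient syntax
interface "hsi1" {
    # Option 61: an arbitrary unique string instead of the MAC address
    send dhcp-client-identifier "mysystem-hsi1";
}
```

The DHCP server matches leases on that string rather than the hardware 
address, which is what makes the scheme usable on non-broadcast links.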

> IMO, the ease of making VM TCP/IP come up with a different configuration 
> file based on external criteria eliminates the need for a DHCP client. 

In a limited way, but why have to remember to do it or fool around with 
node/system name qualifiers in TCPIP CONFIG if there is a way to not have to 
change anything and have it Just Work AND be consistent with every other host 
and address management process in the whole world? Seems like one big battle we 
don't need to fight. 

It would also let you ship a completely functional TCPIP out of the box, no 
configuration required - just plug in the OSA, attach it to TCPIP at a 
predefined address and you're up and running. Think of the documentation pages 
saved...



Re: Modify ifcfg-encw0.0.nnnn at DR

2020-10-06 Thread David Boyes
>  If you can login to root at the 3270 console, you can issue an ifconfig (or 
> an ip) 
> command to change the address, then a route command to set the default route.

Or make sure you set a unique MAC address for each network adapter in the CP 
directory and use DHCP to assign a static address to each MAC address. 

That setup will allow acquiring all the network details for the DR using a 
simple down/up of the interface or a reipl. If you do the same at your primary 
site, then none of the addresses are hard-coded anywhere in the guest and you 
reliably get the right address for whichever location, plus DNS servers, 
default routes and a couple of pages of other settings fixed automagically. If you also 
enable DDNS on your DNS server and add a DDNS client on your guests, then you 
don't have to change the addresses your DNS entries point to either. Everything 
Just Works (tm) and you don't have to touch a thing. If your DHCP and DNS 
servers live in VM guests, then you speed time to recovery by putting them in 
the fastest restoring system: if you're in a real disaster, you can have the 
basic networking infrastructure running in less than an hour from a one or 
two-pack system while you get everything else fixed around it. Properly 
planned, you can have networks of hundreds of thousands of systems be 
self-configuring -- the Linux DHCP code supports multiple configurations with 
ease and you're not running around messing with address assignments while 
things are on fire.
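A sketch of the server side of that setup, assuming ISC dhcpd; every MAC and 
address below is invented for illustration:

```
# dhcpd.conf fragment -- all values illustrative
host mysystem-dr {
    hardware ethernet 02:00:00:00:00:42;   # locally administered MAC set in the CP directory
    fixed-address 192.0.2.10;              # static address pinned to that MAC
    option routers 192.0.2.1;
    option domain-name-servers 192.0.2.53;
}
```

Duplicate the host block with site-appropriate values at the primary and DR 
sites, and the same guest image picks up the right configuration wherever it 
comes up.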

It's long overdue for the mainframe network stacks to permit this method of 
address management. DHCP/DHCP6 is reliable, highly scalable and handles static 
address assignments with ease. It handles both IPv4 and IPv6 easily and is one 
less unique thing about mainframes -- it's what the distributed folks use for 
their environments at massive scale. Why be pointlessly difficult/different? 
The VM, VSE and TPF stacks are all layer 2 capable, and could easily be 
convinced to act as DHCP clients.

Now, if z/OS would get with the program and handle layer 2...



Re: Spectrum Scale and z/VM

2020-09-28 Thread David Boyes
> Can Spectrum Scale (GPFS) be used as the disk to install z/VM?

Implementing a hardware-based FCP target on Linux on Intel or Windows is 
possible (the code for Linux is in the iscsi-support package for specific 
Qlogic brand FCP adapters on Intel), but I suspect it would take some 
development on the hardware microcode in the FCP adapter on Z that touches some 
stuff that IBM doesn't want to make public about the adapter to make it work 
efficiently enough in the Z world to be viable (FCP cards generally have some 
kind of TCP offload to the hardware in the adapter). If you back the host with 
the special adapter with an existing Linux/Unix or Windows filesystem on the 
Intel hardware, performance becomes a question of the hardware running the FCP 
target software and the underlying filesystem. GPFS is pretty good at handling 
high-performance parallel I/O (the high-performance computing guys do this kind 
of thing all the time) so it's not a totally crazy idea, but you'd probably 
need some majorly beefy Intel hardware to do a reasonable job of it at the 
moment. Work out how the fiber switch connections work for an Intel box with 
the right hardware adapters, connect it to your FCP SAN, configure some FCP 
targets and it's a DIY solution that can support the filesystem of your choice. 
Alan's comparison to SVC is an apt one.

Wishlist item: a version of the FCP adapter for Z that had an iSCSI client as 
well as FCP support; everything seems to be moving in that direction and it's 
not like IBM or others haven't done something like it before for the RISC world 
- kind of an ICC-like thing for disk.
It might need to be beefier to support the adapter iSCSI client IP stack and 
probably wouldn't be production-level performance, but the chips to do it 
already exist and can take advantage of economies of scale effects - look at 
how iSCSI has exploded in the Intel world. It would be a cool thing to not have 
to care who makes your storage hardware or where it's located any more, even if 
it's relatively low performance storage: it's just IP packets at that point. 



Poll: national language support for SWAPGEN?

2020-08-27 Thread David Boyes
With z/VM 7.2, IBM has withdrawn support for the last non-English language for 
help files and messages. Is there any real need for continuing to support the 
German, Japanese and uppercase English help files and messages for SWAPGEN? If 
the general consensus is no, then I’ll put that on the list to remove for the 
next release.

If there are any new features people want, let me know offlist so I can start 
taking a look at how to do them.



Re: Cisco Tetration alternatives

2020-08-13 Thread David Boyes
> I have been asked to look for an alternative to Cisco’s Tetration product that
> will run on s390x, they apparently no longer support the platform.


Tetration is a pretty big package of functions; some of the individual pieces 
can be replaced by open-source tools that build and run on zLinux, but there is 
a fairly large amount of glue involved. We'd probably need to know more about 
what pieces of Tetration you're trying to replace to give you a better idea of 
replacements.

For example, Apache OpenDaylight has some of the SDN functions 
(www.opendaylight.org, code at https://github.com/opendaylight). The AAA parts 
can be built on top of OpenRADIUS, and the VPN parts can be built on OpenVPN, 
both of which build and run fine on zLinux. Snort can be built on zLinux; it 
may already be in your distribution's supported RPM archives. There are lots of 
dark corners that may not have good replacements.

You may be able to run the Tetration management code on another platform 
managing some of the infrastructure pieces above on zLinux, which would give 
you the nice management GUI,  but I haven't had an opportunity to try that. 



Re: VM system name

2020-08-05 Thread David Boyes
> Not so sure about the format, though - the one above is hard to read, and I
> would claim it might be prone to produce errors on all parties involved - e.g.
> whenever we add further fields. Something that correlates the values to a 
> field
> name is preferable - maybe something like JSON could do the job here!

Which was kind of the point of the comment; most of the things that will 
consume the output of the command are not human, so readability isn't a concern 
unless you request the extra icing of human-readable output via --verbose. 
Parsing simplicity is the paramount concern here.

I'm not a huge fan of JSON, but I can see that it would be a useful method; you 
might also consider XML. Both XML and JSON would be expensive to generate 
(lots of wrapping/unwrapping involved). Awk-friendly is good.

> One could always do that by post-processing the output. Also, this would need
> some semantics: We should likely count virtualization layers, not levels. 
> Because
> e.g. z/VM can or can not have a Resource Pool defined. As one does not know in
> advance, it should not be counted. But then again, that makes it a bit more
> complicated. I'd likely push that out as a future item.

Fair enough, was just an idea of some useful function. Since you have a record 
type field available, I had envisioned it as a kind of variant record that you 
parsed depending on what the record type is. If you keep the record type in 
each line of the output, expansion for new values wouldn't be a big deal.

> Something like that is already available in the underlying library, although 
> it
> counts levels/layers, not _virtualization_ levels. Makes sense to keep it 
> that way.

Ok. Since I can't see the capability of the code you've got to play with, that 
seems to be a good place to start.

> There's a field like that available, but not in every layer. However, this is
> really easy to derive from the output, so not sure if I'd want to add it...

Yeah, there's always that argument of what combinations of features merit 
packaging as a command-line option as a shortcut. I would find that number 
useful as a "feature", YMMV. 

> 6. (nit) in the verbose output, provide a way to provide a format string 
option, eg:
> 
> ./qc_test --verbose --format="30:8.3" 
> 
> qc_capability [S ] : 552.000
> qc_secondary_capability [S  ]  : 552.000
> qc_capacity_adjustment_indication [S  ]: 100.000
> qc_capacity_change_reason [S  ]:   0.000

> Uh, yeah, well, another 'future' I would say.

Yeah, ok. There are some libraries in the GNU world that provide this kind of 
function generically, but agree that it doesn't need to be there day 1. The 
idea was to make the output more useful/easier to parse in languages that have 
fixed format input records (eg COBOL or PL/1), but no matter. Works for me.



Re: VM system name

2020-07-29 Thread David Boyes
On 7/22/20, 9:29 AM, "Linux on 390 Port on behalf of Stefan Raspl" 
 wrote:
> Would people find it helpful if a command is introduced to the
> s390-tools package that will return one or more of the following data
> points:
> 1. z/VM or KVM Guest name
> 2. z/VM Host name
> 3. KVM Host name
> 4. LPAR name
> 5. CEC name

Some thoughts on this, in no particular order:

1. The output from qc_test you showed should require a --verbose option. Most 
uses for this data will be programmatic, so something like one line per level, 
space-delimited, would be most useful. If you can parse it easily with classic 
Bourne shell, you got it right. 

Example: 

./qc_test without --verbose produces:

0 guest 5 1 GUEST43 off 0 0 undef undef 1 1 0 0 0 1 0 1 undef undef 
1 pool 4 3 pooltest 0 0 0 0 1 64467763 undef undef undef 
2 hyper 3 2 "MY_ZVM" undef "z/VM 6.3.0" 500 1 0 3 0 3 1 0 1 2 undef undef 
3 lpar 
4 cec .
.
etc on stdout. The --verbose version can show all the human friendly labels and 
formatting.
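To show why that format "parses easily with classic Bourne shell", here is a 
hedged sketch: the qc_test output and field positions are hypothetical, taken 
from the sample above ($1 = level, $2 = layer type, $5 = name).

```shell
# A hypothetical consumer of the one-line-per-level output sketched above.
# Field positions assumed from the sample: $1 = level, $2 = layer type,
# $5 = name. All values here are invented for illustration.
qc_output='0 guest 5 1 GUEST43 off
1 pool 4 3 pooltest 0
2 hyper 3 2 MY_ZVM undef'

# Pull the guest name with nothing fancier than awk -- no JSON parser needed.
guest=$(printf '%s\n' "$qc_output" | awk '$2 == "guest" { print $5 }')
echo "guest name: $guest"
# prints: guest name: GUEST43
```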

2. Most uses of this will be most interested in the layer closest to them, eg 
starting at the VM guest level and going outward (the reverse of how your 
example is formatted).

3. It might be useful to say "I'm not interested in data that is more than X 
levels from me" to reduce processing, ie something like --maxlevel. ./qc_test 
--maxlevel 2 from the example above would return data from guest, pool and 
hypervisor, but drop anything beyond.

4. An option to return only the number of virtualization levels present would 
be helpful to set loops, eg ./qc_test --levels in your example returns 5 as its 
only output.

5. A flag to test for multiple CPU types would be helpful (you can get what 
they are from the whole output; just a 1/0 flag to indicate the presence of a 
mixed LPAR). In an LPAR with both IFLs and standard engines, ./qc_test --mixed 
would return 1, telling you to go look at the full data to find out the whole 
picture; or you could return the counts of processors of each type after the 
1/0 if you felt like it.

6. (nit) In the verbose output, provide a way to specify a format string 
option, eg:

./qc_test --verbose --format="30:8.3" 

.
.
.
qc_capability [S ] : 552.000
qc_secondary_capability [S  ]  : 552.000
qc_capacity_adjustment_indication [S  ]: 100.000
qc_capacity_change_reason [S  ]:   0.000
.
.
.
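That "30:8.3" string maps directly onto printf-style field widths; a sketch of 
what such a formatter would do internally (the option itself and the label are 
hypothetical):

```shell
# Mimic a hypothetical --format="30:8.3" option with printf field widths:
# a 30-column left-justified label, then the value printed as %8.3f.
printf '%-30s: %8.3f\n' "qc_capability" 552
# prints the label padded to 30 columns, then ':  552.000'
```

Fixed-width records like this are exactly what languages such as COBOL or PL/1 
expect on input, which was the motivation for the suggestion.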

I'd find a utility like that very useful indeed.



Re: zLINUX and z/VSE

2020-06-24 Thread David Boyes
>I have a customer that recently installed a new z/14.

>They are a relatively small shop, they currently run z/VM and a couple of 
>z/VSE  guests.

>The z/14 has an IFL.

>The manager asked me 'what can we do with that IFL?'



  *   Run NJE software on Linux and use it to drive printing for VSE to a wide 
range of printers (Postscript, PCL, desktop, print by email attachment, PDF 
generation and authentication/verification of source)
  *   DB2 offload
  *   Utility functions like zipping up downloads w/o using your standard 
engine cycles.
  *   Web based display of VSE output instead of hardcopy printing
  *   Automated insertion of VSE output into document management systems for 
archival and review
  *   Self-service pw reset and retrieval (with a little coding)
  *   3270 emulator delivery in browser
  *   VPN access to VSE and your general network
  *   Pager support for VSE events
  *   Email selected VSE events from log files.
  *   Automated accessibility testing and monitoring for VSE-based services 
(health checks)
  *   ….





Re: Development Environment for s390x

2020-06-01 Thread David Boyes
On 6/1/20, 12:20 PM, "Linux on 390 Port on behalf of Mark Post" 
 wrote:
> this was from David Boyes on February 21, 2000:
> "... I have successfully (albeit slowly) booted NT Server 4.x (Intel)
> under bochs on L/390, and successfully run MS Exchange for Intel
> straight off the BackOffice CDs for a trivial number of users. "
> 
> Then, on September 11, 2002, Adam Thornton posted this link to an image
> of Windows NT running on a Linux guest on z/VM:
> http://www.fsf.net/~adam/NT-on-390-desktop.png
> When the snapshot was taken, they were using Dave Jones' Multiprise. I
> can't recall if it was a Multiprise 2000 or 3000.

MP3000-H70. It wasn't very speedy, but it did function. It took close to 40 
minutes to boot to desktop. 

Things were more fun then. Thanks for the trip down memory lane. 

--db
 



Re: Future-Watch: Big changes to login and /home directory handling

2020-04-30 Thread David Boyes


On 4/30/20, 10:41 AM, "Linux on 390 Port on behalf of Rick Troth" 
 wrote:

somebody please make it stop

+1. 






Re: Nostalgia

2020-02-17 Thread David Boyes
Let's also not forget Eric Thomas's other popular tool: CHAT (so that you 
didn't have to TELL RELAY AT ... to send a message). Of all those bits, I think 
only CHAT survives in the VM Workshop tapes. 
Still works after all these years.


On 2/14/20, 6:35 PM, "Linux on 390 Port on behalf of Rob van der Heij" 
 wrote:

On Fri, 14 Feb 2020 at 23:50, Neale Ferguson  wrote:

> For those who remember BITNET, RELAY, and VMSHARE. Here’s a video showing
> its resurrection.
>
> https://youtu.be/gsY_m8ufcs4


I recognize that web page at the start :-)



Re: SMS server on zlinux

2020-01-16 Thread David Boyes
> Is there anyone who has built SMS server in Linux server running on Z/VM ?

Slight variation on the idea: have you considered using something like XMPP 
(aka Jabber) for this? There are good XMPP clients for most smartphones, as 
well as C and Python clients, and it's much simpler to implement. 

This article 
(https://feeding.cloud.geek.nz/posts/running-your-own-xmpp-server-debian-ubuntu/)
 is very helpful if this approach would work.





Re: SMS server on zlinux

2020-01-16 Thread David Boyes
Ooh, telephony stuff. :)

First: talk this over with your telecom people. It’s possible to do this, but 
you’re gonna need them to do stuff to their gear to make it work.

Second: this approach uses analog serial connections for maximum compatibility. 
It's possible to do this on Z, but the task is a LOT easier on Intel hardware. 
There are IP-based solutions built on SMPP, but there are a lot of messy issues 
with handling that and getting branded as an SMS spammer. 

SMS generally needs some kind of application to manage generating the page, 
some application to manage SMS queuing, and some way to manage the interface to 
the PSTN. All are possible in Z, but see #2 note above.

The tools for submitting the page and queuing the page are included in the 
hylafax package, and the documentation for setting it up works fine, except you 
don’t want to run the auto-setup script when prompted. - you want to add modems 
manually. You can build Asterisk from source or use the packaged one depending 
on how tinkery you want to be.

Talk to the telco people again and get them to enable a SIP trunk port for you 
on their gear  and get the credentials for it. Find a copy of “Trixbox without 
Tears” on the net, and use it to set up Asterisk using your SIP trunk 
credentials and call your telco buddies to prove it works. You can do this 
manually, but the freePBX package makes it a lot more straightforward.

Now you need a package called IAXmodem. This is a software implementation of a 
fax modem, but it also works for SMS. Build and install as documented in the 
tarball. On your Asterisk implementation, create an IAX account for the number 
of outgoing pages you want to support simultaneously and configure IAXmodem 
accordingly.

Last part, the sendpage utility. sendpage is in a subdirectory of the hylafax 
distribution. Read the readme in that dir and configure accordingly. You’ll 
need the phone number of your SMS provider and your organization’s credentials 
for this.

If the stars align, you should be able to use sendpage to SMS message anyone 
you like.

> Is there anyone who has built SMS server in Linux server running on Z/VM ?

Contact me off list if you run into something you can’t solve.



Re: Redhat 8.1 error

2019-09-12 Thread David Boyes
On 9/9/19, 11:49 PM, "Linux on 390 Port on behalf of Jake Anderson" 
 wrote:

ro ramdisk_size=4 cio_ignore=all,!condev"

I should have seen this earlier. Duh. There's your missing quote - at the end 
of the above line. 



Re: Redhat 8.1 error

2019-09-10 Thread David Boyes
> On Sep 11, 2019, at 12:04 AM, Jake Anderson  wrote:
> 
> Ok there was an error with the subnet mask and it is going fine, but find
> some message which am not sure from Linux point of view .
> 
> Warnings : can't find installer main page path in .treeinfo
> 
> AnacondaY1787 : raise Value error("No closing quotation")
> AnacondaY1787 : valueError: No closing quotation
> 
> After the above phase it's not proceeding further.
> 

Sounds like the Z install image you have is corrupted. Try getting a fresh copy 
and start over with the install. When you transfer the image, make sure it’s 
transferred in binary mode (the Windows FTP client defaults to text mode unless 
told to switch, so if you used the Windows ftp client, that’s probably what 
happened). 



Re: Redhat 8.1 error

2019-09-10 Thread David Boyes
On 9/10/19, 11:58 AM, "Linux on 390 Port on behalf of Jake Anderson" 
 wrote:

  
Is there anyone who have attempted to install redhat from windows instead
of mounting in Linux server ?
Just wanted to understand your experience ? If this is doable or not ?

You may have more luck installing from an HTTP source. IIS is pretty awful, but 
it should be able to supply the files if prepared correctly. There is a section 
in the docs on what files have to be where and how you need to handle filename 
cases, etc. 

If you can boot your Intel machine from USB or DVD, there are a number of 
bootable Linux images that work well and don't involve installing on the local 
disk of your Intel box. That may prove to be a lot less grief in the long run.



Re: NSS not possible in SLES 12

2019-09-05 Thread David Boyes
I’m really curious how the embedded systems folks took this latest 
“improvement”. 

By this argument, Intel and ARM systems running from EPROM are no longer 
viable, or at least will require a forklift upgrade - are they expecting to 
always copy the entire kernel into RAM and allow it to modify itself? There’s 
an awful lot of avionics and industrial controls/IoT hardware deployed out 
there that will stop getting updates because it flat out doesn’t have enough 
onboard RAM to support this approach, and that’s the last thing we need: more 
systems we can’t fix when some other dumb error happens. It also opens up an 
entirely new class of exploits possible by interfering with the running kernel 
image or the transfer of the image to RAM. 

This whole approach seems poorly thought out at best, but I guess that is the 
norm for Linux these days. A little Linus vitriol of old seems in order, IMHO. 


 



Re: SLES 15 - no help?

2019-06-09 Thread David Boyes
On Sun, 9 Jun 2019 at 13:48, Michael MacIsaac  wrote:

 > HUH?  A UNIX with no vi?  NEVER seen that before.
 > -bash: man: command not found

Well, you did say "minimal". Neither of those are necessary to get the system 
multi-user.

Somebody probably intended that particular configuration to be the basis for 
building a custom image for appliance use; if you don't explicitly add it, it 
ain't there for a reason. Makes perfect sense for a system intended to run one 
or two applications from EPROM. 

That kind of setup isn't all that uncommon from the pre-Linux days. BSDI, 
XENIX, HP/UX and SunOS (pre-Solaris) all had a minimum install setup that you 
were supposed to use for embedded or disk-constrained systems. AT&T 3B2s came 
that way by default (one of the main reasons why AT&T couldn't give those 
turkeys away).

It'd be interesting to see if the "minimal" configuration is useful for 
creating a system to be run from a DCSS.



Re: 2FA in the Linux Terminal Server

2019-05-28 Thread David Boyes
On 5/27/19, 10:48 PM, "Linux on 390 Port on behalf of Philipp Kern" 
 wrote:
>Technically the acquired ticket is not two-factor, though. Instead it's
>a bearer token that does not require reauth for the validity of the ticket.

True per se, however the process of acquiring the ticket can mandate multiple 
factors. How the factors are acquired is up to the endpoints. If klogin is 
configured to require 2 credentials (something you have + something you know) 
to acquire the service ticket for the login service (access to the machine), it 
meets some definitions of 2 factor by not issuing a valid service ticket until 
both factors are present. It's also possible to issue single-use tickets 
without a lot of bother across a wide range of platforms without inventing 
wheels. A common configuration is acquiring tickets from two realms (one 
permitting normal renewable tickets, and the other issuing only single-use 
tickets requiring the presence of a physical token to acquire the ticket) and 
configuring the login service on the target machine in question to validate 
both tickets before granting access. PAM makes this pretty easy to do. 
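As a rough sketch of that two-realm idea (realm names and module options below 
are placeholders, not a tested configuration -- check the pam_krb5 
documentation shipped with your distribution), a PAM stack that refuses access 
until tickets from both realms are present might look like:

```
# /etc/pam.d/sshd (fragment) -- hypothetical two-realm 2FA setup.
# First factor: normal renewable tickets from the user realm.
auth     required   pam_krb5.so  realm=USERS.EXAMPLE.COM
# Second factor: single-use tickets from a realm that only issues
# them when the physical token is present.
auth     required   pam_krb5.so  realm=OTP.EXAMPLE.COM  try_first_pass
account  required   pam_krb5.so
```

Both lines being "required" is what makes the stack fail closed: if either 
realm declines to issue a ticket, login is denied.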

The infrastructure around Kerberos provides for the methods the OP wanted to 
accomplish. It's worth an architectural look-see as the carrier for the overall 
process. 



Re: 2FA in the Linux Terminal Server

2019-05-27 Thread David Boyes
> 
> From my perspective, check the the PAM configuration for the SSH server and 
> the common-auth* PAM configuration files in /etc/pam.d/.  For example, you 
> might have a look at pam-oath which handles OTP tokens for 2FA (never tried 
> that so far).

Consider also investigating using Kerberos logins, which move a lot of the 
issues with centralized policy outside the realm of the endpoints entirely. 
Kerberos is widely used natively (even can be used on desktops and z/OS) and 
does a fine job of eliminating credentials across the wire entirely. 

It’s a bit of a hassle to set up initially, but once it’s working, it’s slick. 
It’d be nice to have the support in VM as well. I’ve been tinkering a bit with 
getting current Kerberos 5 support running for VM (based on updating the old 
Kerberos 4 server code that used to be part of VM TCPIP to current levels), and 
all the Linux distributions already support it.



Re: iucvconn setup on SLES 12

2019-05-25 Thread David Boyes


On 5/24/19, 9:16 AM, "Linux on 390 Port on behalf of Alan Altmark" 
 wrote:
>While I've always wanted to see it virtualized and the VM telnet server 
>given a way to connect to it (meaning no client/host translations or 
>conversions)

Amen to both. Constructing an analogue to a classic terminal server UI as a VM 
application wouldn't be that hard to do if we set our minds to it. Would be a 
clever use of the RSK toolkit or PIPEs.

> I've never heard of a problem with the HMC ASCII console. 
> What's the issue?

It's not necessarily a problem with the console function per se, but a 
differing set of expectations on how to use it and how it's expected to 
function when presented to a person familiar with the idea of a serial console 
attached to a terminal server as the default behavior. 

It's an unusual setup in that it:

a) has to be set up within every virtual system rather than being the default 
behavior out of the box (the discrete box console/terminal server approach 
requires no modification to how the target system is configured at all, 
allowing moving between physical and virtual environments transparently);

b) has been unevenly supported by distribution releases over time (what you 
have to do differs across RH/SuSE/Ubuntu), which has occasionally been a PITA; 
and

c) at various points in time could only be effectively used with one virtual 
machine at a time.

All are fixable (with c) being an issue with your HMC ucode level), but 
they're not the out-of-the-box default, and it's another gratuitous difference 
that hostile folks use to claim the platform is somehow less appropriate. The 
fact that you can accomplish the same goal isn't the same thing as "it can be 
done the same way you manage all your other systems", and it's a lot harder to 
sell a "this is different, so you need to accommodate it" solution to system 
management. 

The Linux-based terminal server is closer to how the other platforms behave, 
and most of the common management solutions Just Work with how it operates 
(with some minor tweaks to UI text and behavior, it's a drop-in; changing the 
prompts to be compatible with the default Cisco/Livingston terminal server 
dialog is a fairly minor step and can be done once in a central place). 
Integrating this with things like Kafka and other mass log/event analysis tools 
is a lot easier, which reduces the cost of operation by allowing more common 
investments to cover more infrastructure. Authentication issues (like the one 
with 2-factor auth recently discussed here) can be completely consistent across 
platforms, and support common solutions that don't require acquiring additional 
commercial tooling. 






Re: iucvconn setup on SLES 12

2019-05-23 Thread David Boyes
On 5/23/19, 11:18 AM, "Linux on 390 Port on behalf of Will, Chris" 
 wrote:
>Is there any advantage to setting up a terminal server 

Yes. Think of it as analogous to attaching the console ports of your discrete 
servers without built-in management processors to a hardware terminal server so 
you can connect to them before networking is working. The original purpose of 
the terminal server code was to deal with the case where you bork the network 
and thus can't do anything without dealing with CMS's occasionally weird antics 
wrt terminal access. It lets you use the editors and environment you're 
familiar with in the Unix world to fix what you broke, without learning ed in 
TTY mode. IBM tried to introduce an HMC feature to provide a character-mode 
console, but it never worked the way most people wanted it to work, so this is 
the result.

> and how is this accomplished?  

Cookbook at http://public.dhe.ibm.com/software/dw/linux390/docu/l4n0ht01.pdf
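For the curious, the guest-side plumbing in that cookbook boils down to 
roughly the following (guest name, terminal IDs, and the exact service names 
are illustrative; details vary by distribution and s390-tools level):

```
# Guest kernel parameters (e.g. appended to the parameters line in
# /etc/zipl.conf): enable one IUCV-backed HVC terminal in the guest.
hvc_iucv=1

# In the guest: run a login getty on that terminal (systemd example):
#   systemctl enable serial-getty@hvc0.service

# From the terminal-server guest: attach to guest LINUX01's first
# IUCV terminal (default terminal IDs are lnxhvc0..lnxhvc7):
#   iucvconn LINUX01 lnxhvc0
```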



Re: zLinux authentication on windows AD LDAP

2019-03-31 Thread David Boyes
If you’ve been running in NTLM compatibility mode for nigh on 20 years (1999 
was a long time ago), you’ve got much, much bigger headaches to worry about. 
There is a chapter in the document I referenced on what to do with NTLM-based 
authentication sources. Linux is actually a pretty decent AD client and server 
these days now that AD is relatively free of the weird wire protocols - even 
works with some GPO operations, which keeps the Windows folks happy. 

Just out of curiosity, how many pure NetBIOS/LAN Manager systems do you still 
have? They’re about the only thing I can think of that would still care about 
the old way. Anything post-Win9x with service packs should be able to do the 
Kerberos stuff. 

> On Mar 31, 2019, at 6:15 PM, Harder, Pieter  
> wrote:
> 
> Not if you AD is still running in NTLM...



Re: zLinux authentication on windows AD LDAP

2019-03-31 Thread David Boyes
> Is it technically possible to authenticate logon with Active Directory LDAP

AD is just LDAP + Kerberos. 

Cookbook for doing this at 
https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/windows_integration_guide/introduction.
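As a bare illustration of that "LDAP + Kerberos" split (domain and realm names 
here are placeholders, not a tested configuration), sssd consumes the same AD 
domain through both protocols:

```
# /etc/sssd/sssd.conf (fragment) -- hypothetical AD-joined client
[sssd]
domains = example.com
services = nss, pam

[domain/example.com]
id_provider = ad          # LDAP side: users and groups come from AD
auth_provider = ad        # Kerberos side: authentication against AD KDCs
krb5_realm = EXAMPLE.COM
```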




Re: Examples of tape access

2019-01-24 Thread David Boyes
There's also a simple open source CMS-based server in the Bacula source tree 
(www.bacula.org) that handles mount/dismount and tape attaches runnable from 
Linux guests. Might work for your purposes; Adam and I wrote it ages back for a 
purpose like this. I believe it's in a directory called 'extras/VM'.







Re: question on SWAPGEN preferences

2018-12-29 Thread David Boyes
On 12/27/18, 11:16 AM, "Linux on 390 Port on behalf of Rob van der Heij" 
 wrote:

> I don’t see how specifying the size should make a difference, apart from
> the last odd blocks when an arbitrary size does not make a full number of
> cylinders.

Apparently there are still a fair number of people who care about SWAPGEN (who 
knew?) mostly to deal with creating swap on VDISKs, so I'm trying to address 
the stuff I've had queued up on the round tuit list. It's been a few years 
since the last release, so the list is somewhat longer, but I can make educated 
guesses for many of those. 

Reason for the size/units question was to make it friendlier for non-CMS 
literate people who don't (and don’t want to) understand the underlying disk 
geometries and just want to say "give me a 18G swap disk *there*" and have it 
automagically sort out what it needs to do to make that happen -- purely a 
usability issue. I agree that for real 3390/FBA swap disk, setting size in 
SWAPGEN is irrelevant, but creating swap space for that use case is persistent 
across logons, so it's less important to me. 

The BLKSIZE option was requested by someone a while back to make more 
efficient use of space with larger memory page sizes and to better align the # 
of I/Os needed to move a complete frame in and out. Future item 
(IIRC, Alan was talking about it as a fairly distant need), but I've got some 
spare time to think about it at the moment. 

Figured I'd ask and see what people want, so there it is. If nobody cares, then 
I won't expend any more effort on it. 






question on SWAPGEN preferences

2018-12-27 Thread David Boyes
I'm trying to tackle some of the backlogged nits with SWAPGEN, and wanted to 
get opinions on a couple of changes.



  1.  Right now, the size of the swap disk is specified in # of blocks. Would 
it be valuable to be able to specify this in megabytes/gigabytes and let 
SWAPGEN worry about the geometry issues needed to get that amount of space? For 
backward compatibility, the default would remain # of blocks, but adding a new 
option to interpret the value as meg/gig wouldn’t break anything, eg:

SWAPGEN 300 1048576 would be treated as allocate 1048576 blocks. This would be 
the default.

SWAPGEN 300 8G ( UNITS SIZE would look at the geometry of the storage 
requested, figure out how many blocks would be needed (at the desired block 
size) to get that amount of space, and then do the deed. New option values: 
UNITS SIZE indicates by size, UNITS BLKS to get the # of blocks, default UNITS 
BLKS.

  2.  Add a BLKSIZE option to specify size of blocks (512,1024, 4096) for more 
efficient use of space.  Default would remain 512 (as now).
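The size-to-blocks conversion a UNITS SIZE option implies is simple 
arithmetic. SWAPGEN itself is a CMS EXEC, so this shell version is purely 
illustrative (the function name and round-up behavior are mine, not 
SWAPGEN's):

```shell
# Convert a human-friendly size (e.g. 8G, 300M) into a block count at a
# given block size. Rounds up so the caller never gets less space than
# requested.
size_to_blocks() {
    size=$1; blksize=$2
    num=${size%[GgMm]}            # strip the unit suffix, if any
    case $size in
        *[Gg]) bytes=$((num * 1024 * 1024 * 1024)) ;;
        *[Mm]) bytes=$((num * 1024 * 1024)) ;;
        *)     bytes=$num ;;      # no suffix: treat as bytes
    esac
    echo $(( (bytes + blksize - 1) / blksize ))
}

size_to_blocks 8G 512     # 16777216 blocks of 512 bytes
size_to_blocks 8G 4096    # 2097152 blocks of 4K
```

So "SWAPGEN 300 8G ( UNITS SIZE BLKSIZE 4096" would internally reduce to the 
existing block-count path with 2097152 blocks.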



Thoughts?



Re: I checked that it's not April 1

2018-10-29 Thread David Boyes
On 10/29/18, 11:37 AM, "Linux on 390 Port on behalf of Mark Post" 
 wrote:
> I don't think that matches the reality of the market place, however, 
> considering that Red Hat's market share with mainframe customers has been 
> _far_ less than 50%.

If you limit it to Z, true. I was thinking of the larger picture on all 
architectures. Procurement people don't like extra work, and if they can extend 
an existing agreement with a little work vs a whole new vendor, they'll go with 
the existing agreement.  IBM is a good example of that. 




Re: I checked that it's not April 1

2018-10-29 Thread David Boyes
On 10/28/18, 6:42 PM, "Linux on 390 Port on behalf of Harder, Pieter" 
 wrote:
> In the past IBM has been extremely reluctant to outright own a Linux distro. 
> Likely for fear of alienating the Linux people by corporate behaviour. Why 
> now? 

My guess is that Red Hat probably is the most culturally compatible of the 
Linux distributors to IBM, and Red Hat's customer base overlaps the traditional 
IBM customer base to a great degree, so the retraining of salespeople would be 
minimal. It would be a smaller task to adapt IBM sales to the enterprise 
engagement and license model that Red Hat has traditionally used, and most of 
those traditional IBM customers preferred Red Hat because they didn't have to 
think about it too much because it was like all their other software license 
agreements.  It's actually quite clever in that it extends the old, profitable 
model of enterprise hardware support and services that keeps the lights on at 
IBM and still buys them a seat at the cloud fantasy table with the kind of big 
guns IBM is used to having. Gives them a whole bunch of new customers to mine, 
plus a place back at the strategic center of a lot of old ones. HPE's hardware 
division would be another smart acquisition, particularly the Tandem stuff, and 
all of the VMS bits and bobs. There's a lot of possibility with something like 
zVMS. Hmm...

If you think about it, IBM's been trying to get credibility in the cloud market 
for a long time, and has pretty much failed to do so at most turns. If you 
can't beat em, buy them. Presto, cloud success, and all the higher-ups clock 
their bonuses. Wash, rinse, repeat.

This finally relieves the development side of IBM from having to reinvent stuff 
that's already been done in the Linux world for ages, in a way that will 
maintain the legal fiction of control that their whole licensing model is based 
on. A lot of progress can be made quickly in areas where we don't have to try 
to bolt useful stuff onto the CMS and z/OS ways of doing things, and to get 
simple things like multitasking and proper process control.  I'd expect a lot 
of the constraints on Linux-based appliances for z/VM will be lifted, although 
the 7 dwarves case may still restrict them to shipping assemble-it-yourself 
kits so that no one can claim a shutout of the other distributors. IBM can ship 
a minimal environment that works for the purpose, and if you have a full Linux 
distribution, goodie for you - kind of like GCS.

I bet there is great rubbing of hands in Somers this evening - no new wheels 
invented; see the new world, same as the old world.




Re: LDAP on z/Linux: Anyone hosting a LDAP server on z/Linux?

2018-07-24 Thread David Boyes
On 7/24/18, 9:33 AM, "Linux on 390 Port on behalf of Brimacomb, Brent (TPF)" 
 wrote:
> Anyone hosting a LDAP server on z/Linux? Assume you're running OpenLDAP? 

Yes and yes. Same as all our other Linux platforms in order to not confuse the 
mundanes. Everything's in the same places and it just works.

>  What, if any, GUI are you using for admin?

Depends on the use. If you're using it to back up a Samba 4 implementation, the 
ones supplied with Windows domain management services work fine, as do the 
Apple OpenDirectory tools. Applications running their own interfaces work just 
as they do elsewhere. We mostly use the line mode commands, but we're cavemen 
like that. 

> Other gotcha's we should be aware of?

Other than defusing their instinctive whining about no hardware for them to 
touch, it's exactly like any other OpenLDAP implementation. It's the same code 
and you plan and engineer for it in the exact same way. 




Re: dasdfmt assistance

2018-07-09 Thread David Boyes
In passing, I think you said this was a minidisk that was 1 cylinder short. Is 
cyl 0 the missing cyl, and the rest of the disk from 1-end the location of the 
minidisk? If so, there’s no label on the minidisk part to preserve- you need to 
supply one. The minidisk can’t see the real volume label in cyl 0 .

> 



Re: Any success stories using EMC DLm's with z/VM and Linux?

2018-06-28 Thread David Boyes
Haven’t done it myself, so just speculating, but:

Do you have a tape manager, like VM:Tape? Standalone it’s pretty grim, but it 
might work better if you were able to manage it via that route. The CA folks 
are used to weird tape configurations, and since they go through the DFSMSrmm 
interface to get media moved, it would seem that it would work in that you 
could get the device working via RMM as a different library name and then go 
through the process in the VM:Tape guide for mass moves to different type of 
media. 

It’s more of a PITA for unlabeled tapes, but you’d have to do those by hand 
anyway. Any of those VSS thingies really really don’t like unlabeled volumes.



> 



Re: Idea: Using SCRT to report on Linux usage; maybe a way to reduce the entry level cost for Linux on Z?

2018-04-15 Thread David Boyes
On 4/13/18, 3:22 PM, "Linux on 390 Port on behalf of Gibney, Dave" 
 wrote:

> I FIND THIS DISCUSSION TROUBLING. It will not likely ever affect me or my 
> installation, as we haven't (and unfortunately are not likely to) used 
> zLinux and z/VM. 

The idea spawned from a lunchtime discussion about how to reduce the entry cost 
sticker shock for Linux on Z; the PHBs have been conditioned to think that 
Linux = minimal/zero cost, and when they see the price tag for $X thousand for 
a Linux distribution plus the cost of z/VM plus the cost of the hardware, it 
turns them off the platform (Quote: "if that's going to cost us > $50K to try 
to do the same thing that we can already do with a spare PC we're buying anyway 
for other purposes for nothing, why would we want to do that?" Wrong, but it 
passes as rational thinking in PHBworld). If you can spin the discussion as 
"start small, grow quickly without having to do purchase orders every 5 
minutes" and/or "pay only for what you use while still getting the QoS and 
support you expect from the mainframe", they seem to like that framing more.  
From there it was a "how do we do this with the minimum amount of work, 
preferably none, by reusing something that already exists in a creative way" 
idea. 

> But, is the z/OS MIPS/MSU pricing model (IMO, one of the major drags on the 
> platform) really being extended into this arena. 

I agree it's not ideal, but it's one that most people with Z hardware already 
understand and that we don't have to argue about having different tools to look 
at usage for different OSes. It also has the advantage of neatly integrating 
with how IBM already thinks about some kinds of pricing, which makes it easier 
to sell to 3rd party vendors as "use something that already exists instead of 
inventing yet another unique weird gadget to do this". It also has the 
advantage of the whole picture in one place rather than chasing it down all 
over the place. 

My main concern with the tool Tim mentioned is how closely is it tied to the 
whole BigFix tool ecosystem? SCRT doesn't seem to require any external 
dependency stuff to work (other than a working Java interpreter), and a quick 
look at the docs appear to show that the other tool seems to bring in a whole 
bunch of other dependencies, some of which are priced. Is that the case?

Sounds like general consensus is that it's not a great idea. It's worth having 
the discussion, though. 






Idea: Using SCRT to report on Linux usage; maybe a way to reduce the entry level cost for Linux on Z?

2018-04-11 Thread David Boyes
Given that IBM is now allowing 3rd party vendors to use the SCRT processing 
infrastructure to collect usage data, the thought occurred to me: could this be 
used to do usage-based pricing for Linux and Linux-based applications? Some 
mapping of Linux features/functions to SMF type 70 and 89 records would have to 
be done, and the various distributors would need to register application types, 
but all the other infrastructure is there and the usage data reporting piece 
already exists for Linux (it's a Java app). 

The idea here is that if the distributors could get accurate usage data, they 
could offer usage-based pricing, which would lower the entry level for getting 
started with Linux on Z and avoid some of the sticker shock. 

Thoughts?






Linux/390 reference in XKCD -- we've arrived.

2018-02-19 Thread David Boyes
We've cracked the mainstream media.

https://imgs.xkcd.com/comics/2018_cve_list.png



Re: Interrupt affinity cannot be set on Mellanox card

2017-11-16 Thread David Boyes
> Recently we're testing Mellanox 10GbE performance with Ubuntu 17.04 s390x
> on z13.  During the test, we found that interrupt affinity cannot be set
> like other platforms.

I/O-related hacks for other platforms are unlikely to work in the same way on 
this hardware; the underlying I/O subsystem on this hardware is enough 
different that you’re probably not going to get the results you want in the 
same way (a lot of the things are handled by the I/O system and not accessible 
to you). 

Before we go off into the weeds, what is the problem you’re trying to address? 
That may provide us with some clues on different ways to accomplish what you 
want.




Re: OT: Is there a setting that can prevent trash in the LINUX-390 archives?

2017-06-10 Thread David Boyes
> I suspect the solution is not going to be on the LISTSERV side, but on each 
> poster's client.



Yeah, that's what I finally concluded as well.



Apparently, Microsoft has removed the option to set the message format sent on 
a per-contact basis in Outlook 2016, so the solution that used to work (create 
a contact, set mail format for that contact to text) now no longer does. I was 
hoping there would be one of those obscure parameters to SET  DIGEST 
that would prefer the plain text alternative, but I guess the digest code was 
written long enough ago that mail clients have just passed it by.



> Oh how I miss MAILBOOK. I had some great REXX macros for handling MIME.



Well, it's still available... 8-).



FW: Is there a setting that can prevent trash in the LINUX-390 archives?

2017-06-09 Thread David Boyes
Let’s try that again. Plain text version.

On 6/8/17, 11:05 AM, "David Boyes" <dbo...@sinenomine.net> wrote:

This is a tangent, but more and more postings to this and other lists 
appear like this in digest mode:

Date:Wed, 7 Jun 2017 08:53:15 +
From:Tore Agblad <tore.agb...@hcl.com>
Subject: Re: Anyone running IBM BigFix client on z?

SGksIHdlIGFsc28gcnVuIGludG8gdGhhdCBwcm9ibGVtLg0KV2UgaGFkIG9uZSAnSUxNVC1ndXkn
IGhlcmUgdGhhdCBmaXhlZCB0aGUgY29uZmlnIGZvciBzMzkweCBzZXJ2ZXJzLg0KU28gd2UgYXJl
IGRvd24gdG8gMC4yIC0gMC4zICUgKGlmIEkgcmVtZW1iZXIgY29ycmVjdGx5KQ0KDQovVG9yZQ0K
DQpUb3JlIEFnYmxhZA0KSW5mcmFzdHJ1Y3R1cmUgQXJjaGl0ZWN0IOKAkyBNYWluZnJhbWUgek9w
ZW4NCkhDTCBUZWNobm9sb2dpZXMgTHRkLg0KREExUw0KR3VubmFyIEVuZ2VsbGF1cyB2w6RnIDMs
IDQxOCA3OCBHb3RoZW5idXJnLCBTd2VkZW4gDQpEaXJlY3Q6ICs0NiAzMSAzMjMzNTY5DQpNb2I6
[… snip …]

Is there some setting that we can recommend to posters to prevent this and 
make digest mode useful again? 
It’s very difficult to follow a conversation when half of it is obscured. 

(Not picking on you, Tore – this happens with a lot (and increasing number) 
of postings.)






OT: Is there a setting that can prevent trash in the LINUX-390 archives?

2017-06-08 Thread David Boyes
This is a tangent, but more and more postings to this and other lists appear 
like this in digest mode:

Date:Wed, 7 Jun 2017 08:53:15 +
From:Tore Agblad 
Subject: Re: Anyone running IBM BigFix client on z?

SGksIHdlIGFsc28gcnVuIGludG8gdGhhdCBwcm9ibGVtLg0KV2UgaGFkIG9uZSAnSUxNVC1ndXkn
IGhlcmUgdGhhdCBmaXhlZCB0aGUgY29uZmlnIGZvciBzMzkweCBzZXJ2ZXJzLg0KU28gd2UgYXJl
IGRvd24gdG8gMC4yIC0gMC4zICUgKGlmIEkgcmVtZW1iZXIgY29ycmVjdGx5KQ0KDQovVG9yZQ0K
DQpUb3JlIEFnYmxhZA0KSW5mcmFzdHJ1Y3R1cmUgQXJjaGl0ZWN0IOKAkyBNYWluZnJhbWUgek9w
ZW4NCkhDTCBUZWNobm9sb2dpZXMgTHRkLg0KREExUw0KR3VubmFyIEVuZ2VsbGF1cyB2w6RnIDMs
IDQxOCA3OCBHb3RoZW5idXJnLCBTd2VkZW4gDQpEaXJlY3Q6ICs0NiAzMSAzMjMzNTY5DQpNb2I6
[… snip …]

Is there some setting that we can recommend to posters to prevent this and make 
digest mode useful again? 
It’s very difficult to follow a conversation when half of it is obscured. 

(Not picking on you, Tore – this happens with a lot (and increasing number) of 
postings.)



Oracle on VM

2017-02-18 Thread David Boyes
> From:Timothy Sipples 
>To be clear, I'm not asserting that my idea is "useful." I'm just answering
> the question, that's all. The range of new use cases for Oracle Database
>10g R2 for z/OS on z/VM is likely to be extremely limited at best,
>especially given that Oracle Database 12c for Linux on z/VM is available.
>I'm still not sure why "pretend Linux doesn't exist..." is part of the
>need/desire/curiosity. z/OS and Linux both exist, and thrive.

I would tend to agree that a CMS version of Oracle is probably a no-go. I think 
the fundamental issue is that the Linux version is still fairly unaware of the 
VM environment, and the controls supplied for Linux VMs compared to CMS or even 
z/OS in VMs are still pretty primitive. Evolving, but limited as of yet. I 
suspect that Docker will revisit the business case for instrumentation in the 
mainstream world, and we’ll eventually get to VM/SP-level controls by z/VM’s 
60th birthday. 8-)

>Last I checked, zNALC z/OS with a reasonable set of optional z/OS elements
>had/has a U.S. commercial price starting at about $125/month, including
>standard IBM Support (24x7 Severity 1).

Would you buy a toaster that cost you $1500 a year to own? Plus the Oracle 
license? Thought not.

> For prospective OEMs, I don't know,
>but give your friendly IBM representative a call if you'd like to explore
>something.

Given that you and Alan are the closest thing we’ve seen to an IBM rep in the 
last 10 years that wasn’t a printer repairperson, it might be a bit challenging 
to actually find one … but, been there repeatedly, done that repeatedly, have 
all the t-shirts, and made IBM a metric boatload of money in the process. If 
someone wants an appliance, call Neale. He’s in charge of gratuitous miracles 
this month.  (










kimche on s390x

2017-02-16 Thread David Boyes
> I do not know if Kimchi can be deployed onto Ubuntu for s390x

> http://kimchi-project.github.io/kimchi/ it looks kind of cool.

If you build from source, it seems to work (they don't know what s390x is, so 
no packages). I'd probably agree with Rick, though -- you end up ignoring most 
of the good stuff in the hardware if you only run Linux-based virtualization 
management tools. Learning z/VM isn't that hard, even for the millennials, once 
you get past the "why?" stage.



Oracle on VM

2017-02-16 Thread David Boyes
> I gotta say that the option Tim Sipples proposed of running Oracle in a 
> zOS guest under VM is a bit more practical than running Oracle 7, I just 
> find it fascinating that Oracle appears to have abandoned VM, but not MVS.



Oracle had (and I suppose, still have) some large customers on z/OS, and that's 
DB/2's sweet spot. Being able to stick it to their biggest competitor is always 
a plus, and the enterprise agreements that were in place at the time with a lot 
of those customers for all platforms made Oracle much more attractive back 
then. If the OpenSolaris thing had worked out, we'd be in a very different 
place today. DB/2 VM's "poor stepchild" status really made it viable only as a 
VSAM replacement when IBM took CMS VSAM support out behind the barn, at least 
for the CMS compilers that still existed at the time.



Tim's idea would be useful if z/OSe was still actively marketed by IBM -- this 
was really exactly the kind of thing it was meant to do. I don't think IBM ever 
really got that message across to the z/OS customer base, though -- that was 
back in the "LPAR uber Alles" for z/OS virtualization days, and IBM (with some 
help) has bought a clue on VM and running production guest operating systems 
since then.



A full z/OS license at current prices just for creating appliances would be 
difficult to make work in a cost-effective manner, even if you stuck to 
IBM-only software. There's a lot of moving parts, and Oracle prices on z/OS 
reflect the "normal" z/OS marketplace pricing levels. It wouldn't be hard to do 
(would probably take us a couple weeks to do it), but it would be tough to make 
it worth someone's while to create and support it with no contractual backing 
from Oracle or IBM.



Re: Oracle under z/VM without Linux?

2017-02-15 Thread David Boyes
> Am I dreaming to assume that Oracle would actually support 7 on a current
> z/VM?

Probably not. I'm sure if you a) managed to find a copy, and b) threw large 
bales of cash at them, they'd find a way, but Oracle 7 was long, long ago. I 
doubt any modern application would be able to connect to it, and all their VM 
talent has long ago gotten other jobs and moved on. I doubt they even have a 
way to generate a license key for it anymore, but you'd have to ask them.



Oracle under z/VM without Linux?

2017-02-14 Thread David Boyes
> I realize that this may be a genuinely stupid question, but is it possible
> to run Oracle directly under IBM z/VM like you can with DB2?

Not anymore. The last version of Oracle to run as a CMS application was Oracle 
7. Current versions of Oracle on z/VM all require a Linux guest.



Problem tracking system

2016-09-20 Thread David Boyes
Second the recommendation for RT if you want simple and Linux based. It's 
fairly flexible, comes with useful defaults, and is both mail and web friendly 
(for those of us who don't spend our whole day buried in a browser). GNATS is 
more programming-oriented; it works, but it works best when dealing with
programming issues. OTRS is another more complex option; has CMDB support and a 
lot of built-in pieces of ITIL framework, but might be more suitable for larger 
organizations. 

If for some reason you want a CMS-based one, PROBLEM is well documented and 
just works. 



Re: swapgen and rhel 7

2016-05-17 Thread David Boyes
I think we dealt with this already. Check the help files; toward the end of the 
help file there are some new options to deal with the new swap signature.

Otherwise, let's take this off list; don't need to bore everyone with the 
debugging details.



Interesting development: Ubuntu user space on FreeBSD kernel

2016-03-22 Thread David Boyes
From The Register:


True believers mind-meld FreeBSD with Ubuntu to burn systemd
 'UbuntuBSD' promises the best of several possible bootloading
 worlds
 http://go.reg.cx/tdml/5c138/5719697f/0227c57e/2kjP

It will be interesting to see whether a System z port appears.


KMCSL available in VM Workshop 2015 tools tape.

2015-07-20 Thread David Boyes
KMCSL, a CSL library for ciphering data using the KM, KMC, KMF, KMO, and KMCTR
instructions, has been added to the VM Workshop 2015 tools tape.
Example code using REXX and PL/I is included.

http://www.vmworkshop.org/node/472

Thanks to Dave Jones for the submission.





New contribution to 2015 tools tape

2015-07-17 Thread David Boyes
Mike McIsaac's contributions to this year's VM Workshop tools tape are 
available at http://www.vmworkshop.org/node/471.

Several assorted utilities:

*CHPW630.XEDIT - An XEDIT macro to change passwords in a z/VM 6.3 USER 
DIRECT file
*CPFORMAT.EXEC - Wrapper around CPFMTXA to format a series of volumes
*GREP.EXEC - Similar to Linux grep
*RM.EXEC - Wrapper around ERASE that allows wildcards
*SSICMD.EXEC - Run a CP command on all SSI members in the cluster
*WC.EXEC - Similar to Linux 'word count'

If you have any tools you'd like to make available to others, please use the 
form at http://www.vmworkshop.org/tools/submit-a-tool




Re: bacula vs amanda

2015-07-13 Thread David Boyes
  In 2008, I looked at several options to backup the machines on my home
  network.  I settled on bacula, because at the time it had the most
  options and was the easiest to configure(from my point of view).
  Since that time, I've been backing up Linux clients and servers,
  Windows servers and OSX clients with few issues.  I run the director
  on a Linux server that hosts a RAID5 array for storage.

To add to this, Amanda assumes that a backup for a single host will fit on one 
tape. Given the small size of a 359x tape, this assumption usually fails in the 
390 environment. Bacula can span multiple tapes. 



White paper on creating a SFS server

2015-07-10 Thread David Boyes
I was in the middle of something else and had to transfer a couple 3390 disk 
images to another system. By the time I dumped them with DDR and VMARCed them, 
they were just a little bit too large to transfer using FTP - the system has 
only 3390 mod 3s, and the files were too big for one volume. Create a SFS 
server, glue a few 3390 mod 3s together, problem solved... but I started 
thinking that maybe someone else might have to do this and need a cookbook to 
do so.

So, quick and dirty white paper. It's posted on vmworkshop.org in txt and pdf 
versions:

http://www.vmworkshop.org/node/469

Enjoy.



VM Workshop tools tape collection complete (1985-2015)

2015-07-01 Thread David Boyes
I finally got a few minutes to complete the VM Workshop tools tapes collection. 
All the still-readable VM Workshop tapes from 1985 to current are now online 
(note the gap from 1998 to 2012 - no VM Workshops were held between 1998 and
2011, and the 2011 VM Workshop at Ohio State did not produce a tape).

http://www.vmworkshop.org/tools

Browse and enjoy.

== db





semi-privileged mode

2015-06-04 Thread David Boyes
 (recall the initial UNIX model had rings of privileges or was that just
 Dante and the Seven levels of hell?)

No, that was MULTICS. UNIX V6 and earlier only ever had one privilege flag
(superuser/general user) due to hardware I/D protection limitations on early
model PDPs (pre-11), and we're still stuck with it decades later. The CTSS 
(later DEC's) PL/1 compiler also still sucks, lo these many years later -- I 
blame much of Unix on that fact. 

Now, MULTICS -- *that* had granular privileges; record-level access control in
some cases. I have an emulated Honeywell 6180 system with a bootable Multics
cloned from dockmaster.af.mil's boot packs (one of the last two production 
Multics systems -- the other one was at Credit Suisse, I think) years ago if 
you want to try it out. 

Still has the cryptic hodie natus frater est comment in the disk formatter 
code. 8-)



LXFMT 2.3 Available

2015-04-29 Thread David Boyes
Thanks to Perry Ruiter and several testers, version 2.3 of LXFMT is available 
for download. This version is a substantial rework of how disk geometry
handling and volume formatting occur, which should adapt better to new DASD
types and be a bit more resilient to odds-and-ends that used to break things in 
unpredictable ways.

If you haven't encountered LXFMT before, the tool provides a way of preparing 
minidisks in the CMS environment for use by Linux. It does the equivalent of 
dasdfmt, and is callable by CMS tools like DIRMAINT and other directory 
managers as an alternative disk formatting utility.

LXFMT23.VMARC is available at 
http://download.sinenomine.net/lxfmt/LXFMT23.VMARC You will need to do the 
usual PIPE to reblock it after transferring the VMARC to your VM system.

If you have problems, contact me off list.





DR backup support.

2015-04-29 Thread David Boyes
 What is the preferred method for backing up and restoring Linux for Disaster 
 Recovery purposes.



You have to do it in two stages.



Suspend/stop and flashcopy doesn't work reliably/cleanly because Linux caches 
the heck out of stuff, so what's on disk at any given second is NOT the current 
state of the filesystem - you could get lucky, but I wouldn't bet anything 
important on a flashcopy of a running system. All the data just isn't on the 
disks yet.  If you want a guaranteed clean backup, you have to do a backup from 
within the virtual machine, or shut down the guest completely, LOG IT OFF,  and 
do the image backup - or (recommended) a combination of the two.



What we recommend is to set up a Linux backup tool like Amanda or Bacula (or 
your fave commercial tool) in the guests to back up to a dedicated virtual 
machine used only as a backup server. Do regular file-level backups to the 
backup server machine from each guest, then shut down the backup server and do 
image backups of the backup server using your regular VM backup tool. During 
your regular maintenance windows, arrange to shut down the production Linux 
guests, log them off, and do image backups of each Linux server periodically 
using your VM backup tool.



In a recovery, restore the base VM system, then restore the image backups of 
the Linux servers using the VM backup tool, including the backup server. Bring 
up the backup server. You then bring up each guest and restore the most current 
file-level backup from the backup server. That leaves you reliably with guests 
current as of the most recent backup - because you did the file-level backup 
from within the virtual machine, you got the actual filesystem state, not just 
what was on the platters at the time.



Obviously, if the guest is providing some critical service that can't be 
interrupted, you need to cluster it with some kind of HA (either the SLES HA 
thing, or our HAO for RHEL), and do the periodic image backup on each node
individually. The periodic outages can be a pain to schedule, but this method 
works reliably without the risk of filesystem corruption.
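The file-level stage of the scheme above can be sketched in miniature (host and path names are invented; a real deployment would use a proper backup tool such as Amanda or Bacula, as recommended):

```shell
# Hypothetical sketch of a stage-one, file-level backup from inside a
# guest to a backup server, then the matching restore. Names invented.
set -e
SRC=$(mktemp -d)          # stands in for the guest's filesystem
DEST=$(mktemp -d)         # stands in for the backup server's pool
RESTORE=$(mktemp -d)      # stands in for the rebuilt guest
echo "app state" > "$SRC/config.txt"

# In production this would run over the network to the backup server,
# e.g.  tar czf - /etc /var/lib/app | ssh backupsrv "cat > /pool/guest1.tgz"
tar czf "$DEST/guest1.tgz" -C "$SRC" .

# Recovery side: after the image restores, pull the newest file-level
# backup back into the guest to get true filesystem state.
tar xzf "$DEST/guest1.tgz" -C "$RESTORE"
cmp "$SRC/config.txt" "$RESTORE/config.txt" && echo "restore matches"
```

Because the archive was taken from inside the running guest, it reflects the filesystem as the kernel saw it, not whatever happened to be flushed to the platters.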







IBM to share technology with China

2015-03-24 Thread David Boyes
 I'm not sure I like this or not
 http://finance.yahoo.com/news/ibm-share-technology-china-strategy-
 120007774.html

It's unavoidable -- welcome to the post-NSA spying disclosure world.  If IBM 
wants to continue to do business in the world's largest market, they have to do 
it with Chinese workers. They still have huge skill gaps, but at least there's 
a framework to actually get access to that market if there's a local Chinese 
component. 

Nothing new to see here. Same deal with Brazil, same deal with Russia, same 
deal with India. They don't impose import duties or anything that would be 
actionable at an international treaty level, but having to have a reasonable 
local involvement percentage looks ok (on paper) ... and you get free skills 
transfer because whoever does the work has to be a) local or b) looking over 
your shoulder the entire time. 



NTP needed

2015-03-20 Thread David Boyes
 Leap seconds become important when you start reaching back in time. If
 you reach past the most recent leap second insertion point, the wall clock
 or TOD clock conversions start being off by one second per insertion.
 For many things, that's close enough.  For others (e.g. financial
 institutions) the time standards are established by regulatory agencies.

Also important if your authentication protocols include a time nonce (eg, 
Kerberos/AD). In some instances, a time variance of one second is enough to 
cause decryption of tickets. 

YMMV, but you may need to consider such things. 
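As a back-of-envelope illustration of why one second matters (the tolerance value and the check itself are invented for the demo, not Kerberos code), a KDC-style skew check simply rejects anything whose timestamp falls outside the allowed window:

```shell
# Toy illustration of the clock-skew check Kerberos-style protocols
# apply to ticket timestamps. The 1-second tolerance is illustrative;
# real deployments typically allow more, but a drifted clock can still
# push a ticket outside the window.
MAX_SKEW=1
check_ticket() {
    # $1 = timestamp carried in the "ticket" (seconds since epoch)
    now=$(date +%s)
    skew=$((now - $1))
    [ "${skew#-}" -le "$MAX_SKEW" ] && echo ACCEPT || echo REJECT
}

check_ticket "$(date +%s)"              # clocks in sync
check_ticket "$(( $(date +%s) - 5 ))"   # guest clock five seconds off
```

Keeping every guest synced to the same NTP source keeps the computed skew near zero, which is the operational point being made.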



Re: LINUX-390 Digest - 22 Jan 2015 to 23 Jan 2015 (#2015-14)

2015-01-24 Thread David Boyes
 I am in the middle of discussion about how to package and install software
 on Linux for System z. There are people new to Linux involved and things
 like InstallAnywhere are coming up. What is your experience with non-RPM
 installers?

In a phrase: utterly unacceptable for commercial software.

RPM is an ugly hack, but circumventing the platform software management system
-- crummy as it is -- is a dealbreaker, especially for any application you 
expect to charge money for. It makes it difficult to survey the system, 
determine if a system is at risk and do license management in any intelligent 
automated way. You also lose all the signature validation function and 
dependency management that RPM and yum does. 

I also want the applications I deploy to explicitly specify their environmental 
dependencies in a programmatic way so I can install only what is actually 
needed, not schlep everything+dog in an opaque blob. All the alternate 
installers do a miserable job of that compared with properly constructed 
repositories and yum. 
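For illustration only (package and dependency names are invented), the kind of programmatic environmental declaration being asked for is exactly what a spec file carries:

```spec
# Hypothetical fragment of an RPM spec file -- all names invented.
Name:           acme-widget
Version:        1.0
Release:        1%{?dist}
Summary:        Example commercial application packaged as an RPM
License:        Proprietary

# Environmental dependencies stated programmatically, so rpm/yum can
# resolve exactly what is needed instead of shipping an opaque blob:
Requires:       glibc, openssl-libs
Requires(post): /sbin/ldconfig

%description
Illustrative only; shows where dependency metadata lives.
```

With the package built this way, rpm -qpR reports those requirements, and rpm -K checks the signature -- the survey and validation functions the alternate installers give up.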



Re: LINUX-390 Digest - 9 Jan 2015 to 12 Jan 2015 (#2015-5)

2015-01-13 Thread David Boyes
 I now want to install some tools from the VM downloads web pages. Is there
 a standard or commonly used convention for where such tools should be
 stored? I found some sample instructions for installing pipeddr where the
 code was installed on the MAINT 191 for instance. I would not expect that
 to be a suitable location.

It is absolutely *not* suitable. For your own sanity, follow Mother's Third 
Law: Never mix your stuff with IBM stuff. 

If you're running a pure minidisk system, one convention that works well is to 
build a TOOLS id and put the code for the contributed stuff on minidisks 
attached to the TOOLS id, one minidisk per tool. You then modify SYSPROF EXEC 
to automatically access a minidisk -- say TOOLS 19F -- that has EXECs on it 
that VMLINKs the appropriate minidisk, runs the tool, and releases the minidisk 
when it's done.   That way all the files for a tool stay together, they don't 
interfere with IBM-supplied stuff, and when you upgrade, you have only one 
thing to remember to carry forward (the change to SYSPROF EXEC). In a SSI, 
you can put the minidisks on shared DASD, and everybody in the SSI uses the 
same stuff. 

If you're SFS-friendly, the same trick works with a SFS pool -- we create a new 
one called TOOLS:, and the same TOOLS userid convention. Since it's not one of 
the IBM 'magic' ones, it's immediately available on all nodes in a SSI world, 
and you can just ACCESS TOOLS:TOOLS. fm from anywhere and you're set.  AVOID 
USING THE IBM SFS POOLS FOR THIS. You don't want to have to mess with migrating 
it on upgrade, and the VMSYSx: ones are not easily sharable in a SSI world. 

 Also, does IBM provide a list of various MAINT minidisks and their
 functions?

IBM doesn't (AFAIK), but Dave Jones has done all the homework and provided such 
an animal. Look through the archives of recent postings to this list to find 
it. 



Anyone else see these messages when building from source on RHEL?

2014-11-19 Thread David Boyes
When building some packages from source with yum-builddep, we are seeing the 
following more frequently:


 warning: bogus date in %changelog: Sat Aug 10 2014 Patsy Franklin
 pfran...@redhat.com - 2014f-1

 warning: bogus date in %changelog: Thu May 28 2014 Patsy Franklin
 pfran...@redhat.com - 2014d-1

 warning: bogus date in %changelog: Sat Aug 10 2014 Patsy Franklin
 pfran...@redhat.com - 2014f-1

 warning: bogus date in %changelog: Thu May 28 2014 Patsy Franklin
 pfran...@redhat.com - 2014d-1

Seems to be the same person in all the cases I've seen. Just a training issue, 
or ?





Location for SWAPGEN

2014-11-14 Thread David Boyes
 From: Chu, Raymond raymond@pseg.com
 Subject: Looking for a site name that has the following code such as
 swapgen.exec
 I am looking for the site that I can download such as callsm1.exec,
 cpformat.exec and ssicmd.exec to maint and profile.exec, rhel64.exec,
 sample.conf-rh6, sample.parm-rh6, sample.parm-s11, sles11s3.exec and
 swapgen.exec to lnxmaint. Please advise what the site name is.

Can't speak for the other files, but the canonical location for the current 
copy of SWAPGEN is:

http://www.sinenomine.net/products/vm/swapgen



Update to VM Workshop Archive Pages - clickable link to access

2014-11-14 Thread David Boyes
Several folks commented that in these days of WWW-focused access methods, a lot 
of new folks don't know what anonymous FTP is. In the interest of easier 
access, the tools archive pages now have a link added to open an anonymous FTP 
session to the respective file directories from the page for each year.

Have fun.



Final Snapshot of SRU.EDU FTP Site added to vmworkshop.org

2014-11-10 Thread David Boyes
With Fran Hensler's permission, I've added a final snapshot of the zvm.sru.edu 
FTP site that Fran's maintained over the years to vmworkshop.org.
The files are available by anonymous FTP from vmworkshop.org, directory 'sru'.

Best wishes on your retirement, Fran. Somewhere, there will always be a 
Slippery Rock. 8-)






VM Workshop Archive Update (1985-1998)

2014-11-09 Thread David Boyes
With a great deal of help from Dave Elbon, Mike Walter, George Shedlock, Dave 
Jones and a lot of creative procrastination to avoid doing the stuff I'm 
*supposed* to be working on, the VM Workshop tools tape archive now contains 
the VM Workshop, Waterloo, and PC SIG tools tapes from 1985 up to the last 
physical tape produced for the 1998 VM Workshop (the last session before things 
went on hiatus). 1990 and 1993 are still in process, but I hope to have them up 
soon. I have some older tapes still to process, but haven't put a lot of 
priority on them as most of the code probably won't run on a modern z/VM. Tools 
from the 2011 and subsequent VM Workshops are posted directly on the WWW site, 
http://www.vmworkshop.org/tools

The archive consists of VMARC files where I was able to extract the files 
easily, and AWStape tape images of the tapes where I didn't have time to 
extract the individual files. The files are available via anonymous FTP from 
vmworkshop.org (userid anonymous, use your email address as password). 
Transfer them in binary mode and use the usual pipe to reblock them before 
extracting with VMARC.

Most of the files have text read-me files with them; the later files have 
HTML read-me files that describe the contents.

Enjoy.



FYI: Update to HAO pricing

2014-09-04 Thread David Boyes
I've just posted new information for SNA's high-availability option (HAO) for 
RHEL on System z to http://download.sinenomine.net/publications/hao
If interested, please contact me off-list.



Re: LINUX-390 Digest - 12 Jul 2014 to 13 Jul 2014 (#2014-122)

2014-07-14 Thread David Boyes
 From: Alan Altmark alan_altm...@us.ibm.com
 Subject: openssl CA certificate maintenance
 
 I (think I) know that openSSL provides two ways to manage certificates:
 1.  A single PEM file that has all of your CA certificates in it.  I say
 "single" as a matter of practice.
 2.  A single directory that contains all of the certificates stored in 
 separate
 PEM files.  You use the c_rehash utility each time you add or delete a
 certificate to/from the directory.
 I'm curious as to which way most people do it, and why.

Whenever possible, option 2. Some applications that try to be smart about 
certificates don't like this approach, but those seem to be getting rarer 
(yay). 

Option 1 has a high probability of human error, and if you break one, you break 
them all. It's also kind of a pain to determine what certs are installed where. 

Option 2 permits easily distributing and installing certificates using RPMs, 
which makes updating them (or removing them) a snap. It's also a lot easier to 
make sure that any necessary intermediate certificates get pulled in (package 
dependencies + something like yum work a treat) and it's super easy to know 
which systems are affected if a cert is compromised (rpm -qa |grep 
local-cert-x). It also makes it trivial to automate the c_rehash run in a 
post-install script so you don't ever forget to do it. 

It's a little more work to set up certificate distribution that way the first 
time, but it's worth it. 
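As a concrete sketch of option 2 (all paths and the CA name below are invented for the demo), building a hashed certificate directory by hand looks roughly like this; c_rehash just automates the symlinking step for every file in the directory:

```shell
# Hypothetical example: build a hashed CA directory that OpenSSL can
# search with -CApath. Paths and CA name are made up for illustration.
set -e
CADIR=$(mktemp -d)

# Stand-in for "one PEM file per CA certificate" -- a throwaway
# self-signed CA generated just for this demo.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -subj "/CN=Demo Local CA" \
    -keyout "$CADIR/demo-ca.key" -out "$CADIR/demo-ca.pem" 2>/dev/null

# What c_rehash does for each file, shown manually: link the cert to
# <subject-hash>.0 so OpenSSL can find it by hash lookup.
hash=$(openssl x509 -hash -noout -in "$CADIR/demo-ca.pem")
ln -sf demo-ca.pem "$CADIR/$hash.0"

# Anything anchored at a CA in the directory now verifies:
openssl verify -CApath "$CADIR" "$CADIR/demo-ca.pem"
```

An RPM carrying one such PEM file would run c_rehash from its %post scriptlet, which is the automation described above.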



Re: LINUX-390 Digest - 10 Jul 2014 to 11 Jul 2014 (#2014-120)

2014-07-13 Thread David Boyes
 But it's the question itself that disturbs me.  What would software 
 (as opposed to a license agreement) do with explicit information about 
 CPU type?  

First guess would be to permit the software to enforce a license agreement. The 
honor system is no longer reliable  (and never was) in some parts of the world. 
 Unlikely to be a capacity question, but a configuration query.  



Re: LINUX-390 Digest - 11 Jul 2014 to 12 Jul 2014 (#2014-121)

2014-07-13 Thread David Boyes
 From: Mike Shorkend mike.shork...@gmail.com
 Subject: Re: Running on CP or IFL ?
 Is anybody doing that? Running Linux natively in an LPAR?
 If yes, why?

There are some applications (*cough* SAP *cough*) that demand every single 
cycle you can give them and still want more. VM does add a (small) resource 
overhead. That's the usual reasoning.

Another reason is accounts where IBM had spent a lot of time and money 
convincing the customer that VM was "not strategic" or "going away real soon" 
and their IBMers (and/or customer execs) don't want to lose significant face by 
bringing it back. That reason is common in a lot of Asian customers, Japan in 
particular, although there's a lot of that still perking around in the US 
customer base too (inexperienced salescritters selling what they get the 
biggest bonus for, and not understanding what VM is or does, and getting no 
education about that from anyone). 

LPAR mode is also common in "just testing" installs -- most sites with z/OS 
have a play/testing LPAR already defined, and sticking Linux in that 
temporarily can sometimes sneak it in under the radar.

Otherwise, Marcy's stating the obvious -- the improvements in manageability and 
supportability for Linux in a VM environment quickly pay for any extra capacity 
or licenses needed to run VM. In almost every case, production use of LPAR mode 
for Linux is a gigantic PITA, and to be avoided if in any way possible.



Re: dasdfmt slowness

2014-03-04 Thread David Boyes
 I'm still of the opinion that the hardware guys need to step up. The fact that
 you have to go and do multitrack writes of count-key-data with zero filled
 records for the extent you're interested in seems like a huge waste of
 channel bandwidth and controller activity. Unlike the old days we're not
 really formatting anything. The ECKD smarts in the controller/device are
 using the Count/Key information it as metadata.
 
 Imagine rather than having to go through the pantomime of all these writes,
 interrupts, and delays, you simply told the device: start-extent, end-extent,
 block size, and let it do the dirty work itself. All that would be required
 would be that I/O plus another to write records 0-5 on track 0.

Leave the current code in place, but hook the flashcopy code to write a 
preformatted cylinder pattern, even if flashcopy is generally not available on 
the box. 
STK used to do this very elegantly in their copy-on-write code that built a 
disk as needed from a pool of blocks. 



Re: dasdfmt slowness

2014-03-04 Thread David Boyes
 I have no idea, if this is being investigated. For myself, I found a different
 solution. You can dasdfmt one disk, and then do a flashcopy to all other disks
 of the same size that should be formatted.

The Cornell Minidisk Manager code lives again 8-)



Re: [ANNOUNCE] s390 31 bit kernel support removal

2014-02-14 Thread David Boyes
 However which distribution is currently being used on those 31 bit only
 machines? As far as I know there is no plain 31 bit distribution left.
 Even Debian switched to a 64 bit kernel since Debian Squeeze.

I'd have to check, but if I remember correctly, most are running late versions 
of lenny with stuff backported from the later releases. 



Re: zLinux Question

2014-02-14 Thread David Boyes
If the person is interacting ONLY with the Linux portion, you need to learn a 
few things about the mainframe but not a lot -- you could compare the problem 
to learning/understanding a new BIOS. The stuff inside the Linux guest is the 
same as on other platforms.

If the person is responsible for the entire environment (virtualization, 
automation, etc), then mainframe expertise (and specifically, z/VM) is more 
important. 
Most organizations treat that as the logical dividing line -- existing 
mainframe people manage the container (z/VM, hardware, etc), and the Linux 
folks manage what goes into the container. 

Don't forget to incorporate some networking folks into the team, too. 

 -Original Message-
 From: Linux on 390 Port [mailto:LINUX-390@VM.MARIST.EDU] On Behalf Of
 Mark Smith
 If running Linux on an IBM mainframe, do you need to have a mainframe
 expert to administer the system or is a Linux expert sufficient?



Re: [ANNOUNCE] s390 31 bit kernel support removal

2014-02-13 Thread David Boyes
   After the removal of the 31 bit kernel support it is not possible to
   run new Linux kernels on old 31 bit only machines.  The only
   supported 31 bit only machines were the G6 and Multiprise 3000
   introduced in 1999.  However after nearly 15 years it seems
   reasonable to remove support for these old machines.

Mark Post made a good point. I think you (IBM) should consider just 
relinquishing responsibility for the 31-bit s390 kernel if you no longer 
have/want to spend resources to maintain it. Removal is kinda overkill. 

There are a LOT of MP3000 H30s and H50s still out there doing useful stuff, and 
many of those machines are the only remaining IBM system at the customer site. 
Throwing them (and that regular renewal revenue stream) over the side with no 
options seems counterproductive. 



Re: [ANNOUNCE] s390 31 bit kernel support removal

2014-02-13 Thread David Boyes
 OK, dumb question of the day.It's linux right?  Why would you keep one of
 those machines for Linux when you could go down to best buy and get
 something with more horsepower?
 Unless you lost the source code or something...

Short answer: by now, the H30/H50 is almost always completely paid for, doesn't 
require any additional space or power, doesn't imply increases to MLC and 
software charges, and adding Linux applications to it adds value to the 
machine. It's also one more reason to avoid an expensive migration that as 
likely as not won't improve their business or operations (in almost every 
case, the non-IBM replacements for their VM or VSE-based systems are less 
reliable and less functional). Many of these customers have long-term 3rd-party 
hardware support contracts, and any change to modern IBM gear would be 
dramatically more expensive. Many of these customers also still have internal 
DASD in the MP3Ks, and can't afford moving to external disk. IBM also really 
doesn't have much to offer these customers; I was trying to help an IBMer 
with a customer like this in rural Louisiana who wanted to migrate off an H50 
but couldn't -- every option IBM had cost at least 3 times what they were 
paying in MLC charges, even hosting the whole mess on IBM-owned gear in an SO 
center. A zPDT would have been an awesome solution for them -- but they 
couldn't qualify. 

These are SMALL customers (obviously, if they can continue to live on H30/H50 
hardware) -- school districts, little manufacturing companies, small 
cities/towns, that kind of customer. They have zero margins, and zero upgrade 
money. If they can continue to get more out of what they have (and improve 
services -- example case: the z/VM 4.4 SSL server could only serve 200 
connections. Period. A Linux-based SSL server could handle close to 900 on the 
same iron), then they win AND they stay on IBM technology and keep paying those 
MLC bills month after month. 



Re: [ANNOUNCE] s390 31 bit kernel support removal

2014-02-13 Thread David Boyes
 putting a
 modern Linux on 15 year old machines seems weird when you can buy an
 intel or maybe if thats too much maybe recycle a PC that was running
 windows XP?

It's not that weird if it's the most stable system you have and it has enough 
spare capacity to do the job adequately. As you know from experience, in these 
very little customers, that's likely to be the case. They might not have any 
other choice with spending freezes in effect -- even PCs cost money.   

Heiko asked for comments/opinions. I think he may have gotten more than he 
expected. 8-)



Re: ckd device size for DS8300

2014-02-07 Thread David Boyes
The largest ECKD volume I've seen anyone define was 100G. Most people just give 
up on ECKD disk and use FCP disk for chunks larger than a mod 54 -- too much 
hassle to manage LVM devices to get large contiguous chunks.

 Somewhat off topic - my storage admins are looking at defining the largest
 reasonable CKD device for a DS8300. We currently use Mod-3, 9, 27, and 54
 definitions and would like to know what  is a useful very large size in the 
 real
 world for large, less-heavily-used devices.
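
For anyone who does stay on ECKD and needs a chunk bigger than a mod-54, a 
rough sketch of the LVM juggling involved (device numbers, paths, and volume 
names here are made up for illustration; assumes the dasd driver and the 
standard LVM2 tools):

```shell
# Sketch only: combine two mod-54 ECKD volumes into one large logical volume.
dasdfmt -b 4096 -y /dev/disk/by-path/ccw-0.0.0200
dasdfmt -b 4096 -y /dev/disk/by-path/ccw-0.0.0201
fdasd -a /dev/disk/by-path/ccw-0.0.0200      # auto-create one full-size partition
fdasd -a /dev/disk/by-path/ccw-0.0.0201
pvcreate /dev/disk/by-path/ccw-0.0.0200-part1 /dev/disk/by-path/ccw-0.0.0201-part1
vgcreate bigvg /dev/disk/by-path/ccw-0.0.0200-part1 /dev/disk/by-path/ccw-0.0.0201-part1
lvcreate -l 100%FREE -n biglv bigvg          # one large chunk spanning both volumes
```

Every additional mod-54 you fold in is one more physical volume to track, 
which is exactly the management hassle being complained about above.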



Re: EMC request for GTFTRACE on z/VM and zLinux

2014-02-04 Thread David Boyes
Another side effect of System z = z/OS mentality. This request makes no sense 
at all. If it's a direct attach FCP device, you'll have to run the trace in 
Linux, and the GTF utilities don't run there. Look at the CP TRACE command and 
then process the resulting CP monitor data if this is an EDEV. 

 We are currently having issues with a db2 server running under SLES 11 SP2
 using zfcp to access an EMC SAN.  EMC has requested a GTFTRACE trace.  Is
 there even a way to run GTF under z/VM and would it make any sense since
 this seems to be a Linux issue?  If there is a way, what is the process to 
 run a
 GTF trace (I would assume it would have to be run under maint or some
 other z/VM support ID).



Re: Oracle and Virtual CPU's

2014-02-04 Thread David Boyes
 We have a linux guest that got really busy for about 10-15 minutes
 nightly.   Oracle process. Guest has 2 virtual CPU's defined.
 
 1) Our Oracle DBA (consultant and I believe he is coming from an intel
 world) says we need more CPU's.   I say no.   Who's right and why?

This is not a problem unique to VM -- VMWare and Xen suffer the same issue.

You're both partially right. The workload may need more REAL CPUs (in that 
there may not be enough real cycles available to meet the demand at a point in 
time), but defining more virtual CPUs will probably make the problem worse 
(your dispatch timeslice for the whole virtual machine is divided as equally as 
possible between the # of virtual CPUs defined, so defining more virtual CPUs 
actually DECREASES the amount of processing time available to each virtual CPU 
per timeslice). It also depends a lot on what the Oracle instance is being 
asked to do - some activities in Oracle aren't really very MP-friendly, so even 
if you DID add the virtual CPUs, it wouldn't make any difference because the 
code won't care (the task is scheduled on a virtual CPU and just runs until the 
timeslice is exhausted). If you have lots of tasks like that, the number of 
CPUs is irrelevant; the code is only going to use one at a time. 

Monitor data on the VM side will tell you more about how the real CPUs are 
being used in total; the performance data inside the VM will tell you how Linux 
is allocating workload to the virtual CPUs it sees, but that data alone is 
totally unreliable for capacity planning. It can only reliably see the division 
of labor, not the overall available machine usage. 

 2) on another guest on the same LPAR, we have 4 CPU's defined just to run
 Oracle (for PeopleSoft).  I've never seen the CPU's 250% (out of 400%).
 Should we drop it down to 3 (The oracle DBA says no and wants more).

See above. If he's just looking at data from inside the virtual machine, more 
virtual CPUs make the problem worse. 

Ask him what the problem workload is. If it's single long-running queries 
(Peoplesoft does a lot of those, and they're often stupidly constructed), more 
CPUs won't help. He'll likely get more bang for the buck optimizing the queries 
or adding indexes, but that's more work for him. 
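
One hedged way to sanity-check whose view is right before adding anything: 
compare what Linux sees from inside the guest with CP's accounting from 
outside (assumes the vmcp module is loaded; command output formats vary by 
distro and z/VM release):

```shell
# Inside the guest: the "st" (steal) column is time a virtual CPU was ready
# to run but CP dispatched someone else -- high steal means you need real
# CPU, not more virtual CPUs.
vmstat 5 3

# CP's view of the same guest's resource consumption:
sudo vmcp indicate user
```

If steal time is low during the nightly spike, the bottleneck isn't CPU count 
at all, and the query-tuning route above is the better investment.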



Re: Building a usable DVD

2014-01-28 Thread David Boyes
Make sure your DVD creation software finalizes the disc before you take it out 
of the PC. DVDs aren't readable on arbitrary systems until the disc is 
finalized.

It'd be a lot more useful if IBM shipped a DVD image instead of the raw files. 
Most DVD creation software automatically does the Right Thing with a .iso file. 

FWIW, the Joliet extensions are the Windows filesystem extensions that allow 
file names longer than 8.3. Systems running WinXP or later have them enabled 
automatically.
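
If you have to build the image yourself, a sketch of one way to do it on a 
Linux box (assumes genisoimage is installed; mkisofs takes the same flags, 
and the volume label "ZVM63" is just an example):

```shell
# Build a single .iso from the unzipped CPDVD directory with Joliet (-J) and
# Rock Ridge (-R) extensions, then burn *that* -- most burner software will
# finalize an .iso image automatically.
genisoimage -J -R -V ZVM63 -o zvm63.iso CPDVD/
```

Burning the .iso as an image (not as a data disc of files) sidesteps both the 
Joliet and the finalization problems at once.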



 On Jan 28, 2014, at 9:08 PM, JO.Skip Robinson jo.skip.robin...@sce.com 
 wrote:
 
 I'm at my wits end. I've been trying to build a DVD for z/VM 6.3. I cannot
 make one that the HMC recognizes and reads. I started by downloading the
 code to my PC as cd760531.zip . When I unzipped this file, I got directory
 CPDVD containing 1257 files. I then burned this directory and content to a
 DVD. It looks perfectly fine on my PC, but attempt to LOAD it from the HMC
 fail with a message saying that the 'target' (I assume this means the z/VM
 LPAR) cannot access the media.
 
 I've opened an SR with z/VM. They suggested using DVD-R (no better) and
 'Joliet file system extensions'. Whatever Joliet means, I don't know any
 way to invoke it. I can't find the original DVD I used for z/VM 6.1, but
 the file set looks analogous to 6.3. I never had this kind of problem with
 6.1. I've tried this process on three different HMCs.
 
 Any user-experience suggestions?
 
 
 .
 .
 J.O.Skip Robinson
 Southern California Edison Company
 Electric Dragon Team Paddler
 SHARE MVS Program Co-Manager
 626-302-7535 Office
 323-715-0595 Mobile
 jo.skip.robin...@sce.com
 


Re: LINUX-390 Digest - 14 Jan 2014 to 15 Jan 2014 - Unfinished

2014-01-17 Thread David Boyes
 If backleveling the kernel makes the problem go away, then it's clearly a
 kernel logic problem in the s390x port.
 Thanks!  That was my logic too; it's nice to have it validated by someone who
 knows more about the kernel than I do.

When you eliminate the impossible, what remains -- no matter how improbable -- 
must be the truth.

Now the hard part starts -- what is the logic problem and how do you get it 
fixed?  8-)



Re: Swap behavior change between SLES 11SP2 and 11SP3?

2014-01-15 Thread David Boyes
I'm not sure what you would expect VM to be able to do here.  All OSes pretty 
much suspend operations while this kind of "what can I do without" 
decision-making is going on inside the kernel.  If the Linux kernel is taking 
its time sorting out which pages are clean/dirty/pageable, the only thing VM 
can possibly do is execute the page write/read as quickly as possible *once the 
decision is made*, and it's demonstrably doing just that. 

If backleveling the kernel makes the problem go away, then it's clearly a 
kernel logic problem in the s390x port. 


 -Original Message-
 From: Linux on 390 Port [mailto:LINUX-390@VM.MARIST.EDU] On Behalf Of
 Veencamp, Jonathon D.
Which is a shame, ZVM VDISK should
 give us more flexibility than it seems to.



Re: oracle question

2013-12-30 Thread David Boyes
 I have a request for a linux server that will be hosting an oracle data base.
 The requestor has requested 8G of main memory and 8G of swap (Our
 normal server usually has 1.5G of swap - .5 on dasd and 1G on vdisk).
 Does anyone have any thoughts about giving this person 8G of swap ?

First reaction: way, way too much unless they have usage data that shows it in 
use. If they're sizing based on deployments on other platforms, they need to 
look at it again; most times they want huge memory sizes because it was that 
way on another platform where I/O was extremely expensive and they needed lots 
of SGA space to cushion the I/O load. 

But, if they can show use, large swap may be needed to soak up an occasional 
spike for really large queries. 

 Are there any recommendations for an oracle server ?

It's workload dependent. One-size-fits-all really can't work. Get them to 
actually measure what the server is doing, and then start with about half what 
they had on the previous platform and work up. It's really easy to change 
resource allocations, so adding more is a lot easier than trying to take it 
away once they've got it. 



Re: oracle question

2013-12-30 Thread David Boyes
 Thanks for all your answers. I guess I'll give him another vdisk of 4G and if 
 he
 needs more, I can swapgen another vdisk.

Yeah, small incremental increases are usually a good thing. 

 Is one 8G swap device better than 4 2G swap devices ? It's a lot easier adding
 another swap device than having to increase an existing swap.

Since there can be only one outstanding I/O for a device number in the s390x 
architecture (putting PAV aside, since it doesn't apply to VDISK), having more 
than one device _usually_ works better. VDISK is really, really fast, but 
having options for Linux to initiate multiple page I/O requests can help. 

One pattern I've seen used repeatedly is: 

3 swap disks in priority order (see the man page for swapon):

1. VDISK half size of main memory
2. VDISK size of main memory
3. Real MDISK size of main memory. 

The VDISKs don't take up space unless they're actually used. You monitor swap 
usage, and if you ever get more than half way into the 2nd VDISK, time to up 
main memory size. If you get into the real MDISK, things will get really icky 
really fast -- something your automation should be checking. 
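
That priority ordering can be pinned in /etc/fstab so it survives reboots. A 
sketch of the three-tier pattern above (the ccw device paths are examples; 
substitute your own VDISK and MDISK device numbers):

```
# Higher pri= is used first, so the VDISKs fill before the real MDISK.
/dev/disk/by-path/ccw-0.0.0300-part1  swap  swap  pri=30  0 0   # VDISK, half of memory
/dev/disk/by-path/ccw-0.0.0301-part1  swap  swap  pri=20  0 0   # VDISK, memory-sized
/dev/disk/by-path/ccw-0.0.0302-part1  swap  swap  pri=10  0 0   # real MDISK, last resort
```

With distinct priorities Linux drains them strictly in order, which is what 
makes "swap has reached the real MDISK" a usable alarm condition.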

Keep an eye on the amount of VM paging space you have allocated too -- if you 
get a workload spike with lots of VDISK active, that's where it comes from, and 
if you run out, Bad Things happen. A good target is to have VM page space about 
50% full at max; again, multiple smaller devices may perform better than larger 
devices. 



Re: oracle question

2013-12-30 Thread David Boyes
One more thing: check to see if your VM LPAR has some XSTOR defined. VM paging 
implements a main store -> XSTOR -> real disk data migration path that helps a 
lot with high paging levels and VDISK. 

Your VM performance monitor will tell you lots of interesting stuff wrt paging 
performance. 


Re: oracle question

2013-12-30 Thread David Boyes
 Only for z/VM 6.2 and previous..with 6.3, XSTOR is no longer
 recommended, and in fact will be the last release to support it.

Good point. 



Re: oracle question

2013-12-30 Thread David Boyes
See the help file for v1310 of SWAPGEN... 8-)

If you haven't already done so, make sure the VDISK system and per-user limits 
are set to Infinite in SYSTEM CONFIG and that you have enough VM page space to 
back the demand. 

 I tried using swapgen to define a 2G (4194304 blks) space and it says vdisk
 space not available
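
For reference, the SYSTEM CONFIG statements in question look roughly like 
this (check the CP Planning and Administration book for your release's exact 
syntax before copying):

```
/* Allow unlimited VDISK allocation, system-wide and per user */
VDISK SYSLIM INFINITE
VDISK USERLIM INFINITE
```

The defaults are finite, which is the usual cause of "vdisk space not 
available" on an otherwise healthy system.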



Re: oracle question

2013-12-30 Thread David Boyes
 Thanks - I ended up having 4 swap disks (1 500M dasd, one 1G vdisk and 2 2G
 vdisks - total of 5.5G). I just hope that it's enough. Time will tell.

Just make sure that the real DASD one is used last by making sure the VDISK 
ones are prioritized ahead of it.



MEMO XMASGIFT

2013-12-23 Thread David Boyes
In the tradition of releasing something nifty to the community as a holiday 
gift (the subject line is the historical reference to the annual posting of 
such gifts to the venerable VMSHARE conference from long, long ago), SNA is 
pleased to provide two gifts to the community for 2013.

Gift #1: SWAPGEN version 1310

This version of SWAPGEN provides a few new features that people have asked for.


1.   SWAPGEN no longer depends on RXDASD MODULE for I/O to FBA disks. The 
functions of RXDASD MODULE have been replaced with CMS Pipes code. (Credit to 
Dave Jones)


2.   SWAPGEN is now fully converted to the use of CMS message repositories 
for all text I/O. No code changes will be needed to allow local customization 
of error messages and/or text I/O.


3.   SWAPGEN is now fully internationalized. All supported languages for 
z/VM 5.4 and higher have been included (for z/VM 5.4, German, American English, 
Kanji, and uppercase English, for z/VM 6.1 and higher, American English, Kanji 
and uppercase English). Special thanks to Margarete Ziemer at SNA for 
contributing the German translation. The Kanji messages probably have some 
mistakes; if any of you who speak Japanese and run your systems in Kanji would 
look at that and contribute corrections, we'd appreciate it a lot. For those of 
you with systems older than 5.4 that still use other languages in the default 
VMFNLS LANGLIST file (French, Portuguese, Spanish, etc.), the files are there 
to support the languages, but contain the uppercase English version of the 
messages. Contributions are welcome if you'd like your native language to be 
supported.  (Note that IBM no longer ships these languages post-z/VM 5.4).


4.   SWAPGEN now has full message help file support. As a result of #2, all 
SWAPGEN messages now have full message IDs and severity information. HELP MSG 
msgid will provide detailed information on each message and suggestions on what 
to do if something goes wrong. To get this, you need to install the help file 
package shown below.


5.   There are now three packages available for SWAPGEN:

SBIN.VMARC -- The minimum files required to run SWAPGEN (the exec, the
  main help file, and the message repositories). If you just
  want to use SWAPGEN, this is all you need.

SHLP.VMARC -- The extended help files for each SWAPGEN message. If you
  download and install this package, you can type
  HELP MSG SWP (the message id) and get detailed
  explanations of each SWAPGEN message. Not mandatory, but
  we STRONGLY recommend you install these.

SSRC.VMARC -- The full source code to SWAPGEN and all its component
  parts. If you speak Japanese, PLEASE download this and
  translate the message repository! The other two VMARC
  files are contained in this package, so if you want to
  have the whole thing in one burrito, this is it.

The new files for SWAPGEN are available from 
http://download.sinenomine.net/swapgen

Gift #2: New version of smaclient (v 1.1)

Smaclient is a shell script allowing any Linux or Unix system to interact with 
the z/VM SMAPI servers to perform system management actions on a z/VM system 
from a script running on the Unix/Linux system.

This version adds:


1.   Corrections to a number of responses and queries fixed by VM65290, 
specifically:
Virtual_Network_VLAN_Query_Stats
Virtual_Network_Vswitch_Query_Extended
Virtual_Network_Vswitch_Query_Stats


2.   The script is now packaged as a noarch RPM, so that it will show up in 
the rpm software inventory with correct versioning.



The code is available from http://download.sinenomine.net/smaclient


Happy holidays to all of you.

David Boyes
Sine Nomine Associates



Re: Mirroring and recovering LVM volumes

2013-12-17 Thread David Boyes
 You can also use LVM itself to mirror the data.  See man lvcreate for the -
 m/--mirrors option.
 
 Either way, I don't see any reason why you shouldn't use the SVC itself to
 mirror the disk(s).

I'd second this approach. One less thing to deal with in software configuration 
management. 
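
If you end up doing it in LVM anyway, the option mentioned above looks 
roughly like this (volume group and LV names are made up; sizes are examples):

```shell
# Sketch: a 10G logical volume with one mirror copy (two legs total),
# then watch the mirror resynchronization progress.
lvcreate --type raid1 -m 1 -L 10G -n applv appvg
lvs -a -o name,copy_percent appvg
```

Letting the SVC do it instead, as suggested, means none of this appears in 
the guest's configuration at all.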



Re: RPM Dependencies issue when trying to install with yum

2013-12-13 Thread David Boyes
Yum is the functional equivalent on RHEL; zypper is SuSE-specific. 

 I'm on RHEL... so this option may not be available to me.



Re: LVM thin provisioning

2013-12-07 Thread David Boyes
Like I said, working on something better...8-)

ISCSI is also an interesting player here.



 On Dec 6, 2013, at 11:47 PM, Christian Paro christian.p...@gmail.com 
 wrote:
 
 And rather than AoE, I should have said NBD, since this isn't SATA. But
 otherwise I think the idea is an interesting one.
 
 Another option, providing storage virtualization and thin provisioning,
 aside from LVM would be Ceph: http://ceph.com/docs/master/rbd/rbd/
 
 ...which is designed to work as a remote virtual block device (or file
 system, or object store) in the first place.
 
 
 On Fri, Dec 6, 2013 at 10:38 PM, Christian Paro 
 christian.p...@gmail.comwrote:
 
 Crazy thought...
 
 ...you could create a Linux LPAR or VM that manages a large LVM pool with
 thin-provisioned volumes, and export these volumes as filesystems over NFS
 or as block devices with AoE.
 
 Then you could build your thin Linux VM guests with a small boot volume
 (possibly even a read-only shared one) and their / filesystem mounted
 over the NFS or AoE (given that you've configured your kernel/initramfs to
 support the chosen protocol).
 
 The LVM thin snapshot mechanism could even be used on the storage host to
 create fast-copied Linux guests with a shared base image that is only
 amended in a copy-on-write manner for those portions of the volume which
 are changed by that guest as it runs. Given a big memory cache on the
 storage host, this could even help provide the benefit of shared in-memory
 caching of all the common OS/application binaries included in that base
 image.
 
 And the model from Mike MacIsaac's Sharing and Maintaining * papers
 could be adapted over this model to provide on the mainframe a lightweight
 provisioning experience much like what can be had with container systems
 like Docker/CoreOS on distributed - except with the security benefits of
 full virtualization under z/VM.
 
 
 On Fri, Dec 6, 2013 at 11:34 AM, David Boyes dbo...@sinenomine.netwrote:
 
 SFS pretty much does exactly that -- for CMS users. You can provide
 access to files stored in SFS for Linux via the CMS NFS server. Not exactly
 high-performance (dispatching 2 or 3 virtual machines to handle each
 transaction is kinda heavyweight), but it works.
 
 Working on something better. 8-)
 
 I would like z/VM to provide a capability to add up DASD devices into a
 kind
 of large pool and place image files like qcow2 (or something similar)
 in it.
  Whether this image is presented as ECKD or something different to the
  virtual
  machine doesn't really matter to me. I don't know whether this wish is
  realistic, but I like this feature on my Linux/x86 environment -
  although I am a
  System z guy for 20 years by now.
 



Re: LVM thin provisioning

2013-12-06 Thread David Boyes
SFS pretty much does exactly that -- for CMS users. You can provide access to 
files stored in SFS for Linux via the CMS NFS server. Not exactly 
high-performance (dispatching 2 or 3 virtual machines to handle each 
transaction is kinda heavyweight), but it works.  

Working on something better. 8-)

 I would like z/VM to provide a capability to add up DASD devices into a kind
 of large pool and place image files like qcow2 (or something similar) in it.
 Whether this image is presented as ECKD or something different to the virtual
 machine doesn't really matter to me. I don't know whether this wish is
 realistic, but I like this feature on my Linux/x86 environment - although I 
 am a
 System z guy for 20 years by now.



Re: VMCP commands for non-root userids

2013-10-30 Thread David Boyes
Use sudo. 

 I have a non-root UserID that needs to be able to execute VMCP commands.
 I've tried a lot of things, but it has not yield much success.  Any 
 suggestions??
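
A hedged sketch of the sudo route (the user name "oper" and the vmcp path are 
examples; the vmcp kernel module must be loaded, e.g. via modprobe vmcp, and 
the binary may live in /sbin or /usr/sbin depending on distro):

```
# /etc/sudoers fragment -- edit with visudo. Lets "oper" run vmcp and
# nothing else, without a password prompt.
oper ALL=(root) NOPASSWD: /sbin/vmcp
```

Then the non-root user runs e.g. `sudo vmcp query virtual dasd`. This keeps 
the privilege grant narrow instead of handing out a root shell.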



Re: Whats the best way to share z/VM Volumes between two Redhat guest machines

2013-10-23 Thread David Boyes
If you want both of the servers live at the same time, you need the content 
volume attached r/o to both systems, or a cluster file system, or to mount the 
content from a 3rd machine via NFS.  

If you're OK with a single point of failure, you can set up one system as an 
NFS master and then mount the filesystem from the master on the other system. 
Naturally, if the NFS server fails, you're SOL. You should also disable fsck on 
boot on the r/o system. 
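
For the read-only guest, the usual trick is to mount the shared disk read-only 
and set the fsck pass number to 0 so boot never tries to check it, which is 
exactly what's failing in the console output below. A sketch (device path and 
mount point are examples):

```
# /etc/fstab on the R/O guest: "ro" stops writes, the final 0 disables
# boot-time fsck of a filesystem another guest holds R/W.
/dev/disk/by-path/ccw-0.0.0400-part1  /srv/www  ext4  ro,nofail  0 0
```

Note this is still not safe for a filesystem being actively written by the 
R/W guest; the R/O side can see inconsistent data unless writes are quiesced 
or a cluster filesystem is used.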


 -Original Message-
 From: Linux on 390 Port [mailto:LINUX-390@VM.MARIST.EDU] On Behalf Of
 Diep, David (OCTO-Contractor)
 Sent: Wednesday, October 23, 2013 11:18 AM
 To: LINUX-390@VM.MARIST.EDU
 Subject: Whats the best way to share z/VM Volumes between two Redhat
 guest machines
 
 Hi,
 
 First time posting here... I hope I found the right place.
 
 I have two web servers, they are duplicates of each other, serving the same
 webpage. They are 'clustered' by a network load balancer. I want to see if I
 can have them share the same volume in z/VM.
 
 One of the RHEL servers will be the primary, having RW authority, while the
 other will have RO authority. What do I need to do in RHEL to make this
 work? Without doing anything to the RO authorized RHEL server, this is what
 happens when I try to start him up:
 
 [FAILED]
 
 *** An error occurred during the file system check.
 *** Dropping you to a shell; the system will reboot
 *** when you leave the shell.
 Give root password for maintenance
 (or type Control-D to continue):
 
 Any ideas would be most appreciated!
 
 
 
 
 
 Serve DC is proud to present NeighborGood, a new, free tool to help
 residents engage in meaningful service and connect with the causes and
 organizations they care about. Visit NeighborGood at
 http://serve.dc.gov/service/neighborgood
 
 



  1   2   3   4   5   6   7   8   9   10   >