Re: Moving z/vm and zlinux volumes from old dasd to new dasd

2018-08-17 Thread Collinson.Shannon
FYI--we did this recently using FDRPASVM (Innovation Data's z/VM companion to 
FDRPAS for z/OS), and man, it was quick and easy, with no downtime.  Of course, we 
were doing it alongside z/OS data on the same storage controllers, and we'd 
already owned FDR products for years, so it wasn't a big expense or change in 
procedure for us.  Sure made it simple, though, if you've got a lot to move...

Shannon Collinson

-Original Message-
From: Linux on 390 Port [mailto:LINUX-390@VM.MARIST.EDU] On Behalf Of Scott 
Rohling
Sent: Thursday, August 16, 2018 7:39 PM
To: LINUX-390@VM.MARIST.EDU
Subject: Re: Moving z/vm and zlinux volumes from old dasd to new dasd

Ah - I see you meant for this to be run against the new DASD -- assuming the
copy had already been done...  I wasn't sure how/when the DASD would be copied
from the old... so it does depend...  the script should be run before or
after depending on the method ;-)

Scott Rohling

On Thu, Aug 16, 2018 at 4:35 PM Scott Rohling 
wrote:

> Good script -- I would use it 'after' the move though - not before..
> backout is much easier that way...   make changes on the new dasd - not the
> old.
>
> Use SAPL and change PDVOL= to correct address...   'then' run this script
> once everything comes up and looks good.   IMHO.
>
> Scott Rohling
>
> On Thu, Aug 16, 2018 at 4:16 PM Davis, Larry (National VM Capability) <
> larry.dav...@dxc.com> wrote:
>
>> For that process to work, you will have to load a new Stand-Alone Program
>> Loader (SAPL) with the new device address for the PDR volume on the PDVOL= parameter.
>>
>> Here is an example of an exec to load the SALIPL IPL text deck from the MAINT
>> 190 "S" disk:
>>
>> MKSAIPL EXEC:
>> /* rexx exec */
>>
>> true  = (1=1)
>> false = \true
>> debug = TRUE
>>
>> /* -------------------------------------------------------------------- */
>> /* Setup Procedures:                                                     */
>> /*   1 Need access to the z/VM 6.4 SALIPL TEXT deck (MAINT 190 S-disk)   */
>> /*   2 Set ssi to TRUE if this is an SSI environment, otherwise FALSE    */
>> /*   3 Set DEV_ADDR to the virtual address the RESVOL is linked as       */
>> /*   4 Set VOL_ID to the volume serial (volid) of the RESVOL             */
>> /*   5 Set PD_VOL to the real address of the PDR volume                  */
>> /* -------------------------------------------------------------------- */
>>
>> ssi  = FALSE /* Set to TRUE when using SSI*/
>> DEV_ADDR = ''/* RESVOL Virtual Address*/
>> VOL_ID   = 'vv'  /* RESVOL Volume Serial Number   */
>> PD_VOL   = ''/* PDR VOLUME Real Address   */
>>
>>    /* *************************************************** */
>>    /* PDNUM is the offset extent for the CF0 PARM disk    */
>>    /* PDVOL is the real address for the PDR volume        */
>>    /* *************************************************** */
>>
>> if ssi Then do
>>   Say
>>   Say 'Loading SALIPL text deck for SSI on RESVOL' vol_id 'on' dev_addr
>>   Say
>>   IPL_PARM = 'FN=SYSTEM FT=CONFIG PDNUM=1 PDVOL='pd_vol
>> end
>> Else do
>>   Say
>>   Say 'Loading SALIPL text deck on RESVOL' vol_id 'on' dev_addr
>>   Say
>>   IPL_PARM = 'FN=SYSTEM FT=CONFIG'
>> end
>>
>> if debug then do
>>salp_opt = 'EXTENT 1 VOLID' vol_id ,
>>   'IPLPARMS' ipl_parm
>>say ' ADDRESS COMMAND SALIPL' dev_addr
>>say '  (' salp_opt
>> end
>> else do
>>salp_opt = 'EXTENT 1 VOLID' vol_id ,
>>   'COMMENTS ? IPLPARMS' ipl_parm
>>
>>queue 'IPL Parameters Options are:  '
>>queue 'CONS= FN=fn FT=ft CLEARPDR REPAIR NOEXITS NOHCD PROMPT'
>>queue 'PDNUM=n PDOFF=offset PDVOL=addr STORE=M/G/T/P/E  '
>>queue
>>
>>    ADDRESS COMMAND 'SALIPL' dev_addr '(' salp_opt
>> end
>>
>> Exit
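>>
>> For example, with ssi set to TRUE and made-up values plugged in (DEV_ADDR of
>> 123, VOL_ID of 640RES, PD_VOL of A84C), the debug path just echoes the command
>> it would have issued, something like:
>>
>>  ADDRESS COMMAND SALIPL 123
>>   ( EXTENT 1 VOLID 640RES IPLPARMS FN=SYSTEM FT=CONFIG PDNUM=1 PDVOL=A84C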
>>
>>
>>
>> Larry Davis
>>
>>
>> -Original Message-
>> From: Linux on 390 Port [mailto:LINUX-390@VM.MARIST.EDU] On Behalf Of
>> Davis, Jim [PRI-1PP]
>> Sent: Thursday, August 16, 2018 17:54
>> To: LINUX-390@VM.MARIST.EDU
>> Subject: Re: Moving z/vm and zlinux volumes from old dasd to new dasd
>>
>> Thanks for the info. I get the following.
>>
>> Current IPL parameters:
>> FN=SYSTEM FT=CONFIG PDNUM=1  PDVOL=A84C
>> CONS=0908
>>
>> The PDVOL points to the common volume, which contains the SYSTEM CONFIG file.
>>
>> I wonder if I can
>> Change PDVOL to the new address for the common volume.
>> Shutdown
>> Move all six volumes to their new addresses.
>> IPL from new res volume address.
>>
>>
>> -Original Message-
>> From: Linux on 390 Port  On Behalf Of Tom Huegel
>> Sent: Thursday, August 16, 2018 5:36 PM
>> To: LINUX-390@VM.MARIST.EDU
>> Subject: Re: Moving z/vm and zlinux volumes from old dasd to new dasd
>>
>> For the last question:  Q IPLPARMS
>>
>>
>> On Thu, Aug 16, 2018 at 4:14 PM, Davis, Jim [PRI-1PP] <
>> jim.da...@primerica.com> wrote:

Easy/automated way to convert LDL-formatted disks to 1-partition CDL?

2018-08-03 Thread Collinson.Shannon
We're attempting to upgrade all our RHEL6 servers to RHEL7 and have just hit a 
huge snag that didn't come up in testing (due to our test servers being rebuilt 
pretty often).  All our company-used servers still have LDL-formatted minidisks 
on them, and the upgrade process, at least, doesn't like that one bit--the 
RHEL7.5 dracut from the RHEL7 ISO can't seem to sense the partitions on them, 
and thus isn't able to actually use them during the reboot portion of the 
upgrade.  Is there any easy or automated way to convert all the disks of a 
server to CDL-format (while that server is down, I'm assuming), so we could pop 
back up and run the upgrade?
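
My guess is the per-disk conversion itself would boil down to something like this 
(just a sketch -- the device name and block size are placeholders, and since 
dasdfmt wipes the disk we'd have to back the data up and restore it around it):

  # low-level format the DASD with the compatible disk layout (CDL)
  dasdfmt -b 4096 -d cdl -y -f /dev/dasdb
  # auto-create a single partition spanning the whole disk
  fdasd -a /dev/dasdb

It's the looping over every disk of a server, plus the backup and restore, that 
would still need scripting.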

I figure given enough time I could somehow script this, but to keep this project 
from shifting from upgrade-to-RHEL7.5 down to upgrade-to-RHEL6.10, I need to 
come up with something by Monday.  That doesn't feel like enough time.  It's 
definitely not enough time to convince RedHat to update RHEL7 so it continues 
to recognize LDL-format, even if I can't find anything saying it's no longer 
supported.  I did find plenty saying things like "anaconda no longer supports 
LDL in RHEL7.1", which is probably about when we changed our kickstarts to 
start reformatting the z/VM disks before installing RedHat--the reason we 
didn't see this in test.  And it doesn't seem like many folks have been going 
the upgrade-in-place route on s390x, based on other issues we've hit, so that 
might explain why it doesn't seem to be popping up in current searches.

Anyone out there able to grant us a miracle?  Thank you!

Shannon Collinson | AVP | Infrastructure Operations, Mainframe Engineering
SunTrust Bank | Take a step toward financial confidence. Join the movement at 
onUp.com.
Office: 404.827.6070 | Mobile: 404.642.1280 | Mail Code GA-MT-1951 | 285 
Peachtree Center Ave 19th Floor | Atlanta, GA 30303




Re: Anyone running IBM BigFix client on z?

2017-06-02 Thread Collinson.Shannon
We are too.  According to my BigFix guy, he created a "group and site with 
auto-membership based on the OS-name" that throttled the CPU the clients are 
allowed to use (0.5% in our case--looks like Harley's folks let them have a 
little more) and restricted inventory gathering to what we determined were the 
off-hours (I think it turns out to be 5-6 in the morning).  So any new s390x 
server that comes online with the BigFix client installed and checks in to the 
hub would get those settings automatically.
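
If it helps, I believe the throttling itself comes down to the standard BESClient 
resource settings, applied to that group as client settings.  The numbers below 
are made up, not our exact values, but the idea is milliseconds of work versus 
sleep per cycle:

  _BESClient_Resource_WorkIdle  = 5
  _BESClient_Resource_SleepIdle = 995

which works out to roughly 5 / (5 + 995) = 0.5% of a CPU while the client is idle.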

Does that help any?  It's kinda Greek to me, but I'm hoping it'll make sense to 
anyone familiar with BigFix customization.  I do know that since we did it, we 
haven't had any issues.

Shannon Collinson (SunTrust Bank)


-Original Message-
From: Linux on 390 Port [mailto:LINUX-390@VM.MARIST.EDU] On Behalf Of Harley 
Linker
Sent: Friday, June 02, 2017 2:13 PM
To: LINUX-390@VM.MARIST.EDU
Subject: Re: Anyone running IBM BigFix client on z?

I am.  The team that manages BigFix on the x86 servers tweaked a setting so that 
it didn't check the z servers quite as often.  The change they made (they didn't 
tell me what it was) reduced the CPU percentage to 0.7% per server.


Harley

-Original Message-
From: Linux on 390 Port [mailto:LINUX-390@VM.MARIST.EDU] On Behalf Of Marcy 
Cortes
Sent: Friday, June 02, 2017 12:54 PM
To: LINUX-390@VM.MARIST.EDU
Subject: Anyone running IBM BigFix client on z?

aka BESClient

We're seeing 1-2% CPU per guest.
While that's not a big deal on an x86 standalone box, it's a big deal on a box 
with 400 servers on it.

Hoping it's tunable...


Marcy



configuring the s390 dump tools to dump to a shared device?

2013-11-12 Thread Collinson.Shannon
Has anyone tried to set up dumpconf to dump-and-reipl using a shared device?  I was 
thinking that with the vmcmd option, there might be a way to do it, and thus 
only have one disk (or set of disks) tied up for possible dumps.  Something 
like the following commands:

LINK MAINT 999 999 MR
CPU ALL STOP
STORE STATUS
I 999
DET 999
I CMS
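
Concretely, I was picturing something like this in /etc/sysconfig/dumpconf (just a 
sketch of the idea, not a tested config, and I'd have to check how many VMCMD_n 
lines dumpconf actually allows):

  ON_PANIC=vmcmd
  VMCMD_1="LINK MAINT 999 999 MR"
  VMCMD_2="CPU ALL STOP"
  VMCMD_3="STORE STATUS"
  VMCMD_4="IPL 999"

with the detach and re-IPL of CMS presumably happening after the dump completes, 
since IPLing the dump device takes over the guest.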

However, that'd be predicated on being able to run zipl -d against the 999 disk on a 
single zlinux server that may or may not have the same RAM configuration as the 
server that eventually dumps.  Does anyone know if this can be done?  I'm not 
sure what's included in the dump ipl record...  Thanks for any 
hints/tips/exclamations that the idea is completely out to lunch!

Shannon Collinson, AVP, Mainframe Engineering
SunTrust Bank
Office: 404.827.6070 | Mobile: 404.642.1280 | Mail Code GA-MT-1750 | 245 
Peachtree Center Ave Suite 1700 | Atlanta, GA 30303

How Can We Help You Shine Today?



Re: configuring the s390 dump tools to dump to a shared device?

2013-11-12 Thread Collinson.Shannon
That'd be wonderful!  And I see you've already sent it--thank you!

-Original Message-
From: Linux on 390 Port [mailto:LINUX-390@VM.MARIST.EDU] On Behalf Of Marcy 
Cortes
Sent: Tuesday, November 12, 2013 5:34 PM
To: LINUX-390@VM.MARIST.EDU
Subject: Re: configuring the s390 dump tools to dump to a shared device?

Yes, we set aside one volume, labeled VXLDMP.  It's a mod 27 (technically 32760 
cyls) here.
It's generally attached to no one.
When we need it we attach it to the guest from a class B userid.
There's no reason why it can't be a full pack mdisk like you suggest.  
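
From the class B id it's just an attach, something like this (rdev, userid, and 
vdev made up):

  ATTACH 7A00 TO LINUX001 AS 0999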

I can send you a word doc if you'd like.

Marcy

-Original Message-
From: Linux on 390 Port [mailto:LINUX-390@VM.MARIST.EDU] On Behalf Of 
Collinson.Shannon
Sent: Tuesday, November 12, 2013 2:03 PM
To: LINUX-390@VM.MARIST.EDU
Subject: [LINUX-390] configuring the s390 dump tools to dump to a shared device?

Has anyone tried to set up dumpconf to dump-and-reipl using a shared device?  I was 
thinking that with the vmcmd option, there might be a way to do it, and thus 
only have one disk (or set of disks) tied up for possible dumps.  Something 
like the following commands:

LINK MAINT 999 999 MR
CPU ALL STOP
STORE STATUS
I 999
DET 999
I CMS

However, that'd be predicated on being able to run zipl -d against the 999 disk on a 
single zlinux server that may or may not have the same RAM configuration as the 
server that eventually dumps.  Does anyone know if this can be done?  I'm not 
sure what's included in the dump ipl record...  Thanks for any 
hints/tips/exclamations that the idea is completely out to lunch!

Shannon Collinson, AVP, Mainframe Engineering SunTrust Bank
Office: 404.827.6070 | Mobile: 404.642.1280 | Mail Code GA-MT-1750 | 245 
Peachtree Center Ave Suite 1700 | Atlanta, GA 30303

How Can We Help You Shine Today?



any good recommendations for an HA tool on zSeries?

2013-10-04 Thread Collinson.Shannon
We're running RHEL6.2 and RHEL6.4 on our zlinux servers under z/VM 6.2 and so 
far have been handling HA by just clustering servers--yeah, any transaction in 
process to a server that's gone down would be whacked in mid-air, but anything 
new would route to the cluster-buddy that was still up.  That's not true 
high-availability, though, and won't work for all our potential applications.  
Of course, we're looking at SSI which will help out for planned outages, but 
we'd also like to be able to do something in the case of a server-crash (i.e. 
have some sort of heartbeat-monitor that could pop up/activate an application 
on a different server if it noticed something was down).  We'll be 
investigating Tivoli Systems Automation for Multiplatform, and the Sine Nomine 
HAO (High Availability Option), plus I intend to see if the RHEL HA add-on is 
compatible with zSeries, but I'm wondering if there are any other good products 
to explore (as well as about anyone's experiences with the above products).  We took a 
cursory look at LinuxHA, but unfortunately our management is not keen on using 
freeware, even though price will definitely be a consideration in whatever we 
decide on.

Any comments from those in the field actually exploiting HA for zSeries at 
their shops?

Thanks!

Shannon Collinson, SunTrust Bank, Atlanta, GA


Re: any good recommendations for an HA tool on zSeries?

2013-10-04 Thread Collinson.Shannon
We do in fact have GDPS, but only GDPS/XRC for our Disaster Recovery--and that, 
of course, doesn't cut it for HA.  GDPS Hyperswap is a little out of the budget 
for now (man, I do not even want to contemplate our DS8700's going down on the 
storage side).  And to Marcy's point, where possible we would want to use the 
same tools across the enterprise (so we are indeed looking at Oracle RAC for 
the oracle databases we're planning to migrate), but management wanted us to 
offer some sort of integrated generic linux solution for applications that 
didn't have any specific (or supported-on-z) tools.  zLinux is a 
small-but-growing segment of our relatively small number of Linux 
servers--we're predominantly running the bigger non-mainframe applications on 
AIX servers with HACMP, so that's what we're competing with in trying to entice 
other applications to join our middleware MQ/Broker servers on zlinux.  If we 
pick up Sine Nomine HAO or Tivoli Systems Automation for Multiplatform, we'd be 
looking at using them on our intel redhat linux servers as well as the zlinux 
ones to try to cut down on the tool proliferation.

But you're right, Alan--if our storage controllers go dead, we'd be looking at 
activating a DR right now, for some subset of both zlinux and our mainframe 
applications.  I'm going to go knock on something wooden...  For now, we're 
concerning ourselves with the server-side of HA--network redundancy is already 
built  (with multiple OSAs as well), we have multiple lpars and mainframes for 
each environment, and we're trusting to IBM's never-gonna-fail on the storage 
side.

-Original Message-
From: Linux on 390 Port [mailto:LINUX-390@VM.MARIST.EDU] On Behalf Of Alan 
Altmark
Sent: Friday, October 04, 2013 1:45 PM
To: LINUX-390@VM.MARIST.EDU
Subject: Re: any good recommendations for an HA tool on zSeries?

On Friday, 10/04/2013 at 11:04 EDT, Collinson.Shannon
shannon.collin...@suntrust.com wrote:
 We're running RHEL6.2 and RHEL6.4 on our zlinux servers under z/VM 6.2 and so
 far have been handling HA by just clustering servers--yeah, any transaction in
 process to a server that's gone down would be whacked in mid-air, but anything
 new would route to the cluster-buddy that was still up.  That's not true
 high-availability, though, and won't work for all our potential applications.
 Of course, we're looking at SSI which will help out for planned outages, but
 we'd also like to be able to do something in the case of a server-crash (i.e.
 have some sort of heartbeat-monitor that could pop up/activate an application
 on a different server if it noticed something was down).  We'll be
 investigating Tivoli Systems Automation for Multiplatform, and the Sine Nomine
 HAO (High Availability Option), plus I intend to see if the RHEL HA add-on is
 compatible with zSeries, but I'm wondering if there are any other good products
 to explore (as well as about anyone's experiences with the above products).  We
 took a cursory look at LinuxHA, but unfortunately our management is not keen on
 using freeware, even though price will definitely be a consideration in
 whatever we decide on.

 Any comments from those in the field actually exploiting HA for zSeries at
 their shops?

You are asking the right questions, but recognize that there is no single HA 
management solution.  Real HA is more than just workload distribution.
You have to protect yourself from outages of networks, servers, storage, and 
the components that connect them together and make them go (adapters,
cables, power supplies, etc.) within a single site/campus.   DR is a twist
on HA that drives it to the next level, achieving the same purpose, but across 
longer distances and with a higher tolerance for a service outage.
As others have noted, a good HA solution can be leveraged for planned outages, 
too.

Networks and servers are fairly straightforward and well understood (bonding 
solutions, app clusters, IP moves, another LPAR, another CPC).  I find 
clients who get all that done and then I discover that they have a single 
storage controller.  They might be replicating to the DR site, but that doesn't 
help them if they lose the local storage frame.  If you have z/OS, then you 
need to look at GDPS, even if only for I/O hyperswap capability.

Alan Altmark

Senior Managing z/VM and Linux Consultant
IBM System Lab Services and Training
ibm.com/systems/services/labservices
office: 607.429.3323
mobile: 607.321.7556
alan_altm...@us.ibm.com
IBM Endicott


Re: any good recommendations for an HA tool on zSeries?

2013-10-04 Thread Collinson.Shannon
We do have some F5 appliances, but I think that's just for network routing--if 
there's any kind of F5 that could help manage an HA solution (making a passive 
server active or something like that), we don't have them.  And we are looking 
at Oracle RAC as a possibility for the oracle databases we're hoping to migrate 
to zlinux, but the oracle platform owner wanted to explore something cheaper 
(keeping the critical Oracle stuff that requires RAC on the midrange servers it 
currently uses, but looking for some poor-man's HA to at least provide 
active-passive support on zlinux).  And we tried out the MQ Multi-Instance 
setup for our websphere MQ and Broker servers but could never get it working as 
advertised, so that'll be another application we'll need to support.  Right 
now, as I alluded to below, we're just using an active-active cluster (with 
routing through an F5 load-balancer) for MQ with each server running off its 
own storage--not really what the application owner wants in the long term.  
We're also playing with scripted HA for that which would use shared disks 
across the servers that would be managed by testing to see if the logical 
volume was in use--a really homegrown solution which I think would be more 
problematic than LinuxHA.  

When you say the setup for LinuxHA is complicated, how bad is it?  Did you have 
to resort to bugging SuSE for configuration help, or were you able to work it 
all through with the documentation on the org site/maybe polling the 
interested-users list for it?  Not that I think we're anywhere near as 
knowledgeable as you and your team with zlinux, but if you guys had to go to 
the vendor for assistance, we shouldn't even contemplate it!  And I just got 
word that okay, yeah, we can add LinuxHA to the running for the generic HA 
solution we're looking to find.  (I guess reorgs are good on rare occasions, 
such as moving along folks obstinate about what seem like good ideas...)

Whatever we come up with would be something we hope could be exploited on all 
Linux servers (on any platform) at SunTrust--chances are, it'd only be 
cost-effective and training-effective if it was common, and right now, we 
actually don't have any standard HA product on our intel Linux side either, so 
this'd be a good time to find one.  Of course we'd want to support any 
application-specific HA solution that the applications wanted to pay for (if 
it could run on zseries), but we'd like to have some kind of generic option for 
those other applications/products that still wanted some way to stay up while 
we were IPLing their z/VM lpars.

Thanks for your consideration/responses!
Shannon 

-Original Message-
From: Linux on 390 Port [mailto:LINUX-390@VM.MARIST.EDU] On Behalf Of Marcy 
Cortes
Sent: Friday, October 04, 2013 12:28 PM
To: LINUX-390@VM.MARIST.EDU
Subject: Re: any good recommendations for an HA tool on zSeries?

Hi Shannon,

We have all kinds of HA going on.   

What are your distributed folks doing?   Sometimes it is easiest just to use 
what they use.   Appliances work great (one example is F5's GTM and LTM).
The sw products often have their own solution (i.e. DB2 HADR, Websphere ND, 
Oracle RAC) and those are good choices for those products.
We do use some LinuxHA (on SLES for z it is included and supported) for 
clustered file systems.  It's complicated, but it can do things like provide 
r/w file systems to multiple servers and move IP addresses around for you.
When you think about just activating an app on another server, does it have 
access to the same files?
From a systems management standpoint, we prefer that our applications run 
active-active (that is, send traffic to multiple app servers if possible) to 
use capacity on more than one CEC.  IHS in Websphere ND has plugins too, where 
you can adjust the percentage of traffic to the various app servers.  This 
allows us to take whole lpars or CECs out of service for planned maintenance 
as well without much human intervention.

Hope that is helpful.

Marcy

-Original Message-
From: Linux on 390 Port [mailto:LINUX-390@VM.MARIST.EDU] On Behalf Of 
Collinson.Shannon
Sent: Friday, October 04, 2013 8:03 AM
To: LINUX-390@VM.MARIST.EDU
Subject: [LINUX-390] any good recommendations for an HA tool on zSeries?

We're running RHEL6.2 and RHEL6.4 on our zlinux servers under z/VM 6.2 and so 
far have been handling HA by just clustering servers--yeah, any transaction in 
process to a server that's gone down would be whacked in mid-air, but anything 
new would route to the cluster-buddy that was still up.  That's not true 
high-availability, though, and won't work for all our potential applications.  
Of course, we're looking at SSI which will help out for planned outages, but 
we'd also like to be able to do something in the case of a server-crash (i.e. 
have some sort of heartbeat-monitor that could pop up/activate an application 
on a different server if it noticed something was down).  We'll be 
investigating

Re: Any idea how to get into RedHat PDB mode on a kickstart through a TN3270 console in z/VM?

2013-07-17 Thread Collinson.Shannon
Yep, CPFMTXA works great, but our workaround (using DIRMAINT to remove and then 
re-add the disks) is just a little quicker for us, and we can do multiple disks at 
the same time easily.  We also tried a %pre script that just whacked the first 1024 
and then 4096 bytes (used dd to zero them out), but that apparently didn't kill 
enough data to wipe out the memory of datavg--dasdfmt works, but it is pretty 
slow since it does the full minidisk, one disk at a time.  I saw several similar 
solutions for RHEL4 and RHEL5, so it seems like they oddly fixed it in early 
RHEL6 and then backed off the fix for RHEL6.4!
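
For anyone curious, the %pre dasdfmt workaround is basically just this -- a rough 
sketch, with the device list hard-coded here for illustration instead of the 
discovery logic we actually use:

%pre
# low-level format the "extra" minidisks so anaconda sees them as blank;
# slow, since dasdfmt does one full minidisk at a time
for dev in /dev/dasdf /dev/dasdg; do
    dasdfmt -b 4096 -y -f $dev
done
%end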

-Original Message-
From: Linux on 390 Port [mailto:LINUX-390@VM.MARIST.EDU] On Behalf Of 
carr...@nationwide.com
Sent: Tuesday, July 16, 2013 5:10 PM
To: LINUX-390@VM.MARIST.EDU
Subject: Re: Any idea how to get into RedHat PDB mode on a kickstart through 
a TN3270 console in z/VM?

FWIW, and if I've understood the original problem as being that Kickstart was not 
formatting disks even though you specify clearpart ...

I had a similar problem to this in the past.
The issue was that the volumes being used were not wiped, so they still contained 
formatted data, which kickstart interpreted as not needing to be formatted.
Our solution was to reformat only the first few cylinders of the volume before 
the server was kickstarted, which forced kickstart to reformat the volume 
correctly.
This was a pain, and we had opened a ticket with Redhat on it back in the RHEL 
4 days.

I am wondering if maybe that change has somehow gotten lost in 6.4?

Have you tried CPFMTXA or even a CMS FORMAT on the volume to wipe it out and then 
tried to kickstart it?

David
I would think you would have to modify the inittab so that a signal sent would 
emulate the key sequence RH is asking for; I recall doing that some time back 
for something else I was working on.

Sandra



From:   Collinson.Shannon shannon.collin...@suntrust.com
To: LINUX-390@VM.MARIST.EDU
Date:   07/16/2013 04:19 PM
Subject:Re: Any idea how to get into RedHat PDB mode on a 
kickstart through a TN3270 console in z/VM?
Sent by:Linux on 390 Port LINUX-390@VM.MARIST.EDU



I'm actually using both of those (RUNKS=1 and cmdline) already.  And I think we 
know where in the code there's a problem--the clearpart --initlabel 
--all/zerombr is not getting run against all disks, just disks mentioned later 
in the kickstart.  In 6.2, we could use it to wipe out the header info from 
every disk the linux OS could see, but in RHEL 6.4, it misses what I call 
extra disks--minidisks we use to have a single kickstart file for 
multiple-sized linux servers that still have the same base requirements.  We 
build out a default of two minidisks in volume group datavg for every server, 
but since that only adds up to 20G, we wanted to be able to have a random 
number of additional disks that'd get added into the datavg in a kickstart 
%post script for servers whose application code will need more than 20G.  Works 
great on a brand-new carved-straight-from-z/VM server, but when we're 
rebuilding a server that's already been built once, since the extra disks 
aren't wiped clean, linux senses that datavg already exists when the 
kickstart tries to create it.  Under RHEL 6.2, since everything was wiped, a 
rebuild was just as easy as a new build.  We can get around it by recreating 
(deleting and
re-adding) the extra minidisks, or by running a %pre dasdfmt against any 
extra disk sensed (which, since it single-threads how I coded it, can take way 
longer than the z/VM minidisk recreate), but that won't help RedHat figure out 
the bug in their code.

That's just a little background, but I figured it might shed some light on what 
we're doing.  I've tried the enter/enter thing mentioned below and in the 
RHEL note, but it doesn't seem to take, which makes me wonder if the real 
problem isn't that the connection to the linux bootstrap-OS is gone when it 
spits out the error.  I seem to get a prompt that expects only a certain 
handful of responses, but I haven't figured out how to enter them either:

Storage Activation Failed
An error was encountered while activating your storage configuration.
vgcreate failed for datavg: 15:46:23,603 ERROR   :   A volume 
group called datavg already exists.
('custom', ['_File Bug', '_Exit installer'])

If I enter custom or _File Bug or _Exit installer, nothing happens. 
Really, when the error pops up, I don't seem to have any recourse from the z/VM 
guest console except logging off or entering #CP I CMS to restart the 
process...

FYI--I've attached the below email to the redhat ticket.  I'm hoping some of 
the stuff I didn't understand in here might spark ideas in the RedHat support 
staff.  Thanks to both you (Steffen) and David, as well as Rick Troth (who sent 
a straight-up) email for your responses so far.  And if you think of anything 
else, please let me know!
Shannon



Any idea how to get into RedHat PDB mode on a kickstart through a TN3270 console in z/VM?

2013-07-16 Thread Collinson.Shannon
We're trying to research an issue with our kickstarting that cropped up in 
RHEL6.4 (worked perfectly in RHEL6.2).  RedHat support has asked us to wait 
till the kickstart fails and then issue CTRL+ALT+F1 to get into pdb mode so 
they can debug the anaconda stuff, but since we boot in z/VM, the terminal 
doesn't appear to translate those keys correctly--at least, I issue them and 
nothing happens.  Does anyone know how to send those to the host operating 
system through a tn3270e console?  Or know of a different way that we could get 
an ascii-based terminal for the automated kickstart if needed?  When I ssh into 
the guest at the failure time, I can hop on as root but I'm not breaking into 
the install process--I can't enter anything that'd change the flow of the 
install.

Really hoping someone on this forum might have an idea, since IBM won't respond 
(not a zVM issue, apparently, that their console doesn't work) and RedHat 
doesn't seem to know...

Thanks!

Shannon Collinson, AVP, Mainframe Engineering
SunTrust Bank
Office: 404.827.6070 | Mobile: 404.642.1280 | Mail Code GA-MT-1750 | 245 
Peachtree Center Ave Suite 1700 | Atlanta, GA 30303

How Can We Help You Shine Today?



Re: Any idea how to get into RedHat PDB mode on a kickstart through a TN3270 console in z/VM?

2013-07-16 Thread Collinson.Shannon
I'm actually using both of those (RUNKS=1 and cmdline) already.  And I think we 
know where in the code there's a problem--the clearpart --initlabel 
--all/zerombr is not getting run against all disks, just disks mentioned later 
in the kickstart.  In 6.2, we could use it to wipe out the header info from 
every disk the linux OS could see, but in RHEL 6.4, it misses what I call 
extra disks--minidisks we use to have a single kickstart file for 
multiple-sized linux servers that still have the same base requirements.  We 
build out a default of two minidisks in volume group datavg for every server, 
but since that only adds up to 20G, we wanted to be able to have a random 
number of additional disks that'd get added into the datavg in a kickstart 
%post script for servers whose application code will need more than 20G.  Works 
great on a brand-new carved-straight-from-z/VM server, but when we're 
rebuilding a server that's already been built once, since the extra disks 
aren't wiped clean, linux senses that datavg already exists when the 
kickstart tries to create it.  Under RHEL 6.2, since everything was wiped, a 
rebuild was just as easy as a new build.  We can get around it by recreating 
(deleting and re-adding) the extra minidisks, or by running a %pre dasdfmt 
against any extra disk sensed (which, since it single-threads how I coded it, 
can take way longer than the z/VM minidisk recreate), but that won't help 
RedHat figure out the bug in their code.
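
For reference, the kickstart-related bits of the parm file we boot with boil down 
to just these (the ks= URL below is a placeholder, and the rest of the parm line 
is left out):

ks=http://installserver.example.com/ks/zlinux.cfg RUNKS=1 cmdline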

That's just a little background, but I figured it might shed some light on what 
we're doing.  I've tried the enter/enter thing mentioned below and in the 
RHEL note, but it doesn't seem to take, which makes me wonder if the real 
problem isn't that the connection to the linux bootstrap-OS is gone when it 
spits out the error.  I seem to get a prompt that expects only a certain 
handful of responses, but I haven't figured out how to enter them either:

Storage Activation Failed
An error was encountered while activating your storage configuration.
vgcreate failed for datavg: 15:46:23,603 ERROR   :   A volume group 
called datavg already exists.
('custom', ['_File Bug', '_Exit installer'])

If I enter custom or _File Bug or _Exit installer, nothing happens.  
Really, when the error pops up, I don't seem to have any recourse from the z/VM 
guest console except logging off or entering #CP I CMS to restart the 
process...

FYI--I've attached the below email to the redhat ticket.  I'm hoping some of 
the stuff I didn't understand in here might spark ideas in the RedHat support 
staff.  Thanks to both you (Steffen) and David, as well as Rick Troth (who sent 
a straight-up) email for your responses so far.  And if you think of anything 
else, please let me know!
Shannon


-Original Message-
From: Linux on 390 Port [mailto:LINUX-390@VM.MARIST.EDU] On Behalf Of Steffen 
Maier
Sent: Tuesday, July 16, 2013 1:23 PM
To: LINUX-390@VM.MARIST.EDU
Subject: Re: Any idea how to get into RedHat PDB mode on a kickstart through 
a TN3270 console in z/VM?

On 07/16/2013 05:25 PM, David Boyes wrote:

 We're trying to research an issue with our kickstarting that cropped
 up in
 RHEL6.4 (worked perfectly in RHEL6.2).  RedHat support has asked us
 to wait till the kickstart fails then issue CNTL+ALT+F1 to get into
 pdb mode so they can debug the anaconda stuff, but since we boot in
 z/VM, the terminal doesn't appear to translate those keys
 correctly--at least, I issue them and nothing happens.

 Call RH back and insist on talking to someone who can understand This
 is not an Intel system. Those keys don't exist on System z.

 Does anyone know how to send those to the host operating system
 through a tn3270e console?  Or know of a different way that we could
 get an ascii-based terminal for the automated kickstart if needed?
 When I ssh into the guest at the failure time, I can hop on as root
 but I'm not breaking into the install process--I can't enter
 anything that'd change the flow of the install.

 Really hoping someone on this forum might have an idea, since IBM
 won't respond (not a zVM issue, apparently, that their console
 doesn't work) and RedHat doesn't seem to know...

 The only way you could get an ASCII console that early in the process
 would be from the HMC, and it's still not going to help you.

 Try adding the cmdline option to the kickstart parm line. That will
 take you through the install line by line, which should help you
 find the problem.

 If this is 6.4, there are known bugs in kickstart. Have them search
 for kickstart RHEL6.4 in their issue tracker.

cmdline will only allow you to roughly determine the point in the installation 
process but might make debugging the anaconda python code (pdb is the 
integrated Python DeBugger) harder or even impossible.
Also, you typically need RUNKS=1 along with cmdline (which is mutually 
exclusive with interactive TUI or GUI installation).

Re: RHEL 6.4 kickstart no longer seems to clear all the partitions with clearpart --all

2013-06-28 Thread Collinson.Shannon
 --driveorder=dasdb,dasdc,dasdd,dasde --append=crashkernel=auto
# The following is the partition information you requested
# Note that any partitions you deleted are not expressed
# here so unless you clear all partitions first, this is
# not guaranteed to work
zerombr
clearpart --all --initlabel --drives=dasdb,dasdc,dasdd,dasde
part / --fstype=ext4 --size=1024
part pv.094006 --grow --size=200
part pv.094009 --grow --size=200
part swap --grow --size=200
part swap --grow --size=200
volgroup system_vg --pesize=4096 pv.094006 pv.094009
logvol /opt --fstype=ext4 --name=opt_lv --vgname=system_vg --size=512
logvol /tmp --fstype=ext4 --name=tmp_lv --vgname=system_vg --size=512
logvol /usr --fstype=ext4 --name=usr_lv --vgname=system_vg --size=2048
logvol /var --fstype=ext4 --name=var_lv --vgname=system_vg --size=512

%packages
@base
  
%end
Do you think you can send us the kickstart file you used, just for reference?

Thank you,


Kind Regards,
Filipe Miranda


On Jun 27, 2013, at 2:15 PM, Collinson.Shannon 
shannon.collin...@suntrust.com wrote:

 Has anyone else run into this?  We were running RHEL 6.2 on s390x (z/VM 6.2, 
 actually) with no problems, but the same kickstart modified to point to the 
 RHEL 6.4 ISO seems to ignore the clearpart --initlabel --all/zerombr lines. 
  We have a standard (small) zlinux build that was defined in the kickstart 
 (minidisks specified with the by-path names), but would also have a variable 
 number of extra minidisks added to the application volume group depending on 
 the needs of the app that requested the server.  Some could live with the 20G 
 default datavg, but most need extra minidisks added to that volume group to 
 stand up their software.
 
 So, we had the 2 minidisks that made up the default 20G specified by name in 
 the command section of the kickstart, but added a %post section to discover 
 any additional disks and add them in as well.  All's perfect in RHEL 6.4 for 
 a brand-new server, but when we are asked to rebuild an existing server, the 
 kickstart fails, saying the datavg volume group was already found.  If I 
 manually zero-out the extra disks, or use DIRMAINT to recreate them, the 
 rebuild works fine, but clearpart or zerombr no longer seems to work against 
 every disk accessible to the kickstart (just every disk specified by name).
 
 I tried adding in a %pre section to whack those extra disks, but from what I 
 can see, that's happening after partitioning as well, and it's during the 
 partitioning (when it hits the creation of datavg for the 2 minidisks every 
 server has) that it fails.
 
 Oddly, in RHEL 6.2, this worked perfectly.  I could rebuild servers at the 
 click of a button.  RedHat is saying they can't recreate my problem, but my 
 guy (probably low-level) also can't find a mainframe to play with--he's doing 
 everything with x86, and I'm wondering if this isn't an s390-specific issue.  
 Has anyone else encountered this?  Am I expecting too much from the 
 kickstart, and was just exploiting some closed loophole with RHEL 6.2?  It's 
 not a show-stopper (since we don't rebuild that often and the DIRMAINT 
 re-create works fine), but I just find it weird.
 
 Thanks for any help with this!
 
 Shannon Collinson, AVP, Mainframe Engineering SunTrust Bank
 Office: 404.827.6070 | Mobile: 404.642.1280 | Mail Code GA-MT-1750 |
 245 Peachtree Center Ave Suite 1700 | Atlanta, GA 30303
 
 How Can We Help You Shine Today?
 

RHEL 6.4 kickstart no longer seems to clear all the partitions with clearpart --all

2013-06-27 Thread Collinson.Shannon
Has anyone else run into this?  We were running RHEL 6.2 on s390x (z/VM 6.2, 
actually) with no problems, but the same kickstart modified to point to the 
RHEL 6.4 ISO seems to ignore the clearpart --initlabel --all/zerombr lines.  
We have a standard (small) zlinux build that was defined in the kickstart 
(minidisks specified with the by-path names), but would also have a variable 
number of extra minidisks added to the application volume group depending on 
the needs of the app that requested the server.  Some could live with the 20G 
default datavg, but most need extra minidisks added to that volume group to 
stand up their software.

So, we had the 2 minidisks that made up the default 20G specified by name in 
the command section of the kickstart, but added a %post section to discover any 
additional disks and add them in as well.  All's perfect in RHEL 6.4 for a 
brand-new server, but when we are asked to rebuild an existing server, the 
kickstart fails, saying the datavg volume group was already found.  If I 
manually zero-out the extra disks, or use DIRMAINT to recreate them, the 
rebuild works fine, but clearpart or zerombr no longer seems to work against 
every disk accessible to the kickstart (just every disk specified by name).

I tried adding in a %pre section to whack those extra disks, but from what I 
can see, that's happening after partitioning as well, and it's during the 
partitioning (when it hits the creation of datavg for the 2 minidisks every 
server has) that it fails.

Oddly, in RHEL 6.2, this worked perfectly.  I could rebuild servers at the 
click of a button.  RedHat is saying they can't recreate my problem, but my guy 
(probably low-level) also can't find a mainframe to play with--he's doing 
everything with x86, and I'm wondering if this isn't an s390-specific issue.  
Has anyone else encountered this?  Am I expecting too much from the kickstart, 
and was just exploiting some closed loophole with RHEL 6.2?  It's not a 
show-stopper (since we don't rebuild that often and the DIRMAINT re-create 
works fine), but I just find it weird.

Thanks for any help with this!

Shannon Collinson, AVP, Mainframe Engineering
SunTrust Bank
Office: 404.827.6070 | Mobile: 404.642.1280 | Mail Code GA-MT-1750 | 245 
Peachtree Center Ave Suite 1700 | Atlanta, GA 30303

How Can We Help You Shine Today?



Any tips on running 32-bit Java on a RHEL6 zlinux server?

2012-11-29 Thread Collinson.Shannon
Sorry if this is a dumb question, but I'm a z/OS sysprog trying to keep z/VM 
and zlinux running at our shop so most of my questions are probably dumb.  But 
we're trying to get a vendor tool (CA's Workload Automation agent) running on 
our new RHEL6.2 linux-on-z servers, and we can't find the right Java package 
for it.  Part of it was not understanding the Java packaging, but now that we 
know that the agent requirement of "JRE 1.6 SR8, or higher (31-bit)" means we 
need to find a Java 2 1.6.0 Runtime Environment package for s390 (31-bit, 
right?) at service release 8 or better, we can't seem to find one.  The closest 
we found was on the RedHat site for RHEL4-- 
java-1.6.0-ibm-1.6.0.9.2-1jpp.2.el4.s390.rpm.  However, when we try to install 
that, it attempts to replace the 64-bit Java 1.6 already on the server.  Can 
they not coexist like on z/OS?  If we whack the s390x version of Java 1.6 and 
replace it with this package, will it even work, or does the fact that it's 
compiled with the old linux kernel (at least, I think RHEL4 was Linux 2.4) make 
it incompatible with RHEL6?  We could of course ram it through and play with 
it, but I'd be afraid that we'd be setting ourselves up for core dumps in the 
future even if it looked okay at first startup...
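
For what it's worth, the way I've been checking what's already installed is just 
an rpm query like the one below, which is how I can tell the existing Java 1.6 on 
these servers is the 64-bit s390x build:

  rpm -qa --qf '%{NAME}-%{VERSION}-%{RELEASE}.%{ARCH}\n' | grep -i java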

If the answer is RTFM, please point me to the manual--I'd love to have been able 
to figure this out on my own, but Google was giving me bad advice.

Thanks!

Shannon Collinson - Mainframe Engineer (OS) - SunTrust Banks, Inc - 404 
827-6070 (office) 404 642-1280 (cell) 
shannon.collin...@suntrust.com

Please consider the cost of paper and the environment before printing this 
email!



Re: console-kit-daemon errors on SLES 11

2012-08-31 Thread Collinson.Shannon
on my RHEL boxes, it uses about a quarter of the available memory too (and we 
have ours stripped down to only 2G RAM, so that doesn't leave much for the 
rest).  And we noticed the messages only come out if you log in through the 
console--doesn't care about an ssh-login.  I looked at all the dependencies for 
it and decided we could live without them (mostly desktop-related services, and 
we don't boot in rl5 on the zseries) to finally kill it, because the redhat 
guys said suppressing the messages and living with the pointless overhead was 
the only other option.
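
Roughly what that amounted to on our RHEL6 boxes was checking what depends on it 
and then removing it -- package names from memory here, so check your own 
dependency list first:

  rpm -q --whatrequires ConsoleKit ConsoleKit-libs
  yum remove ConsoleKit ConsoleKit-libs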

Shannon Collinson (shannon.collin...@suntrust.com)

-Original Message-
From: Linux on 390 Port [mailto:LINUX-390@VM.MARIST.EDU] On Behalf Of Gregg 
Levine
Sent: Friday, August 31, 2012 10:26 AM
To: LINUX-390@VM.MARIST.EDU
Subject: Re: console-kit-daemon errors on SLES 11

On Fri, Aug 31, 2012 at 6:19 AM, Shane G ibm-m...@tpg.com.au wrote:
 As per the document one can modify the the syslog-ng.conf file to supress
 the messages.

 React to the symptom (and hide the evidence) rather than fix the actual 
 problem.
 Has happened before, will do again.

 Shane ...


Hello!
I agree. I've got a reasonable idea of what that program (and
application) does on my system. But what does it do besides complain
on a System Z?

-
Gregg C Levine gregg.drw...@gmail.com
This signature fought the Time Wars, time and again.



best way to set up alternate ipl packs on z/VM 6.2?

2012-08-02 Thread Collinson.Shannon
In z/VM 5.4, we had two sets of packs (a set consisting of a res pack with the 
code on it + a spool + a page volume) that we'd alternate between as we brought 
up different levels of maintenance.  We'd IPL off of the res pack for a set 
and it'd pop in the single spool and page volumes associated with it--since we 
only need a single spool volume and we don't really care what's on it from 
before the IPL, this seemed to work pretty easily to switch between levels of 
code.  But either we need to add another volume to our sets now, or I'm doing 
something funky with z/VM 6.2.  I looked at this layout and put in my best 
guesses:

RELVOL/620RL1 - looked like code, so I used my res pack here
RES/M01RES    - actually appeared to contain lpar-specific data, so I thought it 
wouldn't change and put an lpar volume name here

When I first IPLed, though, it wanted that lpar-specific (RES) volume, which 
I'm sure you guys all knew from the start.

Should I use a 4-volume pack set--one release, one res, one spool and one 
page--now that we're going to z/VM 6.2?  Or can we do one release volume for 
the release, period--maybe that never changes while we're on 6.2--and just use 
one per release + the original 3-pack set?  Does anyone else on z/VM 6.2 use the 
volume-switching-for-code method we're using who could perhaps recommend 
something?

Thanks!

Shannon Collinson - Mainframe Engineer (OS) - SunTrust Banks, Inc - 404 
827-6070 (office) 404 642-1280 (cell) 
shannon.collin...@suntrust.com

Please consider the cost of paper and the environment before printing this 
email!



any experiences hooking up OSA-ICC z/VM consoles through Visara MCC?

2012-01-05 Thread Collinson.Shannon
We're slowly converting our consoles over to OSA-ICC from
direct-attached consoles, and are hitting some snags with the z/VM
consoles.  The z/OS consoles, and the z/OS SNA terminals, popped over
with no trouble but we can't seem to get the z/VM consoles working with
all the features.  We've followed the OSA-ICC Redbook to the T on the
z/VM setup (not hard--just add the buggers to the Operator_Consoles and
Emergency_Message_Consoles lists) and have tried the OSA-ICC
configuration listed there, but wound up finding that the only way to
get them to work is to disable both DHD/host-disconnect (as recommended)
and RSP/response mode (totally against the book's rec!).  If we have
response mode enabled, it doesn't matter what read-timeout we give it (1
second, 10 seconds, 45, 60, 600)--all that means is the console will
sorta look available and connected for that amount of time, but if you
press enter or try to type anything in it, you'll hang (with x-wait)
till the timeout is up.  We tried experimenting with the session type,
DHD and RSP values, but have had our best success with type TN3270 and both
DHD and RSP disabled.  Since RSP seems to provide some benefit, we'd like to
get it working if possible and if not, understand why it shouldn't be
set in our circumstances.
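
(For reference, the z/VM side really is just the two SYSTEM CONFIG lists; a
minimal sketch, with illustrative device numbers, looks like this:)

   /* device numbers below are illustrative */
   Operator_Consoles           0020 0021 System_3270 System_Console
   Emergency_Message_Consoles  0020 0021 System_Console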

 

We're using Visara MCC accessed through a Bluezone X interface to secure
our consoles, so I'm wondering if there isn't some sort of special
configuration we need with that.  We were using the same MCC+BluezoneX
interface on the old direct-attached consoles with no issues, and there
really aren't many options to tweak on either product, but we're
exploring that with the vendors as well.  We're looking into directly
accessing the console but that'll apparently mean disabling all the
currently-working OSA-ICC Visara MCC consoles, so that might take a
while.  Just on the off-chance you could save us some time, does anyone
have experience using the same setup in their own shops?  If so, do you
have any suggestions?

 

We're at z/VM 5.4 RSU1003, if that matters any.  

 

Thanks!

 

Shannon Collinson 

Systems Programmer, Mainframe Operating Systems

 

SunTrust Banks, Inc. 

Mail Code GA-MT-1750

245 Peachtree Center Avenue, 17th Floor

Atlanta, GA 30303

Tel: 404.827.6070  Mobile: 404.642.1280

shannon.collin...@suntrust.com 

 

Live Solid. Bank Solid. 
  
  


Re: building wireshark on SLES10SP2?

2010-04-08 Thread Collinson.Shannon
I hadn't put two-and-two together after I noticed that wireshark came
out of ethereal, but it doesn't seem to solve my problem anyhow.  I
pulled down the ethereal package on a separate zlinux guest (so I didn't
have to worry about interference with whatever I'd screwed up on the
wireshark install/deinstall) and started it up, only to find it seems to
have the same limitation.  This leads me to believe I just can't use the
filter on ip address (net/mask) when running a trace of the eth0
interface, or maybe that it expects the filter in a different format.
Do you happen to know how to translate the capture-filter (from gui,
because if I can't figure it out there, it's even more unlikely I could
use the command interface!):

net 10.x.y.0 mask 255.255.255.0

To something that would work with the eth0 interface?  That works great
filtering the "any" interface data, but nets me nothing with eth0...
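
For reference, the rough tcpdump equivalent of that capture filter against the
real interface would be something like this (a sketch; 10.x.y.0 is the same
placeholder network as above, and -s 0 asks for full packets):

   # 10.x.y.0/255.255.255.0 is the placeholder network from the filter above
   tcpdump -i eth0 -s 0 -w eth0trace.pcap net 10.x.y.0 mask 255.255.255.0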

Shannon Collinson 
shannon.collin...@suntrust.com
Tel: 404.827.6070  Mobile: 404.642.1280
Fax: 404.581.1688


-Original Message-
From: Linux on 390 Port [mailto:linux-...@vm.marist.edu] On Behalf Of
Mark Post
Sent: Wednesday, April 07, 2010 6:17 PM
To: LINUX-390@VM.MARIST.EDU
Subject: Re: building wireshark on SLES10SP2?

 On 4/7/2010 at 06:08 PM, Collinson.Shannon
shannon.collin...@suntrust.com
wrote: 
 Has anyone had any experience building WireShark on SLES10 SP2?  I
 noticed that there's actually a Novell-delivered rpm for SLES11, but I
 can't see anything like that for my release of zlinux, so I tried to
 build it myself from source.

The package ethereal on SLES10 is the equivalent of wireshark.  It is
available for SLES10 SP2 on System z.  But, you don't really need the
GUI that comes with it.  You could just use the tcpdump or tcpdump-qeth
command to capture the raw data.


Mark Post



building wireshark on SLES10SP2?

2010-04-07 Thread Collinson.Shannon
Has anyone had any experience building WireShark on SLES10 SP2?  I
noticed that there's actually a Novell-delivered rpm for SLES11, but I
can't see anything like that for my release of zlinux, so I tried to
build it myself from source.  It seemed to go pretty easily once I'd
pulled down a few extra packages (bison, gcc and flex, at least) since
we've got very basic Linux servers by default, but I'm not actually sure
it's fully functional.  When I check the build, the number of withouts
scare me:

 

wireshark 1.2.6

Copyright 1998-2010 Gerald Combs ger...@wireshark.org and contributors.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

Compiled with GTK+ 2.8.11, with GLib 2.8.6, with libpcap 0.9.4, with libz 1.2.3,
without POSIX capabilities, without libpcre, without SMI, without c-ares,
without ADNS, without Lua, without GnuTLS, without Gcrypt, without Kerberos,
without GeoIP, without PortAudio, without AirPcap.
NOTE: this build doesn't support the "matches" operator for Wireshark filter
syntax.

Running on Linux 2.6.16.60-0.21-default, with libpcap version 0.9.4.

Built using gcc 4.1.2 20070115 (SUSE Linux).
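
If the missing optional libraries turn out to matter, one low-risk way to chase
them down is to install the matching -devel packages and redo the build,
checking the configure summary before compiling (a rough sketch; the exact
package and feature names vary by release, see ./configure --help):

   # re-run configure and keep a copy of its feature summary
   ./configure 2>&1 | tee configure.log
   grep -iE 'gnutls|pcre|c-ares|geoip' configure.log
   make && make install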

 

And while we seemed to get good information for packet tracing via the
"any" interface, I can't seem to do any filtering of straight eth0
data: the same IP address filter that limited the "any" trace gives me
absolutely nothing with the actual network card.  The problem is that my
network analysts want to combine data from wireshark with their OpNet
AIX server data to get an end-to-end picture of network traffic, and the
Linux cooked-mode capture data from the "any" interface is killing them.


 

Is this a problem with my build, or a limitation with linux itself and
how it generates packet data?  I'm leaning towards the former since I'm
basically a trained zlinux monkey (I don't really get what all is
happening under the covers when I issue the ./configure and make
commands; that's what happens when you throw a z/OS programmer into the
linux deep end!).  The only thing I could figure is to find someone
running SLES11, get them to install the wireshark rpm, do the same -v
display on it, and try to work my way backwards till my build matches
theirs (except for kernel and probably versions, of course).  If someone
has some insight to share or wants to send me their build info on
sles11, I'd love to hear from you!

 

Thanks!

 

Shannon Collinson 

Systems Programmer, Mainframe Operating Systems

 

SunTrust Banks, Inc. 

Mail Code GA-ATL-4030

250 Piedmont Ave. NE, Suite 1600

Atlanta, GA 30308

Tel: 404.827.6070  Mobile: 404.642.1280

Fax: 404.581.1688

shannon.collin...@suntrust.com 

 

Live Solid. Bank Solid. 
  
  
  


weird problem with pam_tally in SLES10SP2

2010-01-29 Thread Collinson.Shannon
I'm new to supporting linux, being a mainframe z/OS sysprog, so this may
just be a user error and I sincerely hope someone can say Duh! once I
explain this...

 

We're trying to build Linux-on-zSeries SLES10SP2 guests as close as
possible to the same level of Linux guests on Intel servers.  As part of
this, I'm including the following line in our /etc/pam.d/common_auth
file:

 

auth     required     pam_tally.so onerr=fail deny=10

 

That's the only change we make to the pam modules.  As I understand it,
that should block a user's access once they reach 10 unsuccessful login
attempts.  Well, the problem is that every login attempt is marked
unsuccessful even if the user had no trouble logging in, if they do so
via ssh (actually with a putty client).  That same user gets a
successful login when they try logging in directly from the (VM)
console.  So what I've done is created a linux server that's only really
good for 10 accesses; after that, the user can no longer get in until
someone hops on at the console as root and resets their failed-login
count!
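
(For the record, the console reset itself is just faillog, run as root;
lxinst is the test user shown in the displays below:)

   faillog -u lxinst       # show the current failure count
   faillog -u lxinst -r    # reset it to zero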

 

I added debug to pam_env.so and pam_unix2.so modules to get a little
more info, but it all looks good to me.  Here's the faillog display
after I've reset the user:

 

Login       Failures  Maximum  Latest                    On

lxinst             0        0  01/29/10 13:34:39 -0500   cnu83757xg.

 

Then I try to log in and get the following messages in
/var/log/messages:

 

Jan 29 13:38:26 lxd1100 sshd[2335]: pam_unix2(sshd:auth): pam_sm_authenticate() called
Jan 29 13:38:26 lxd1100 sshd[2335]: pam_unix2(sshd:auth): username=[lxinst]
Jan 29 13:38:27 lxd1100 sshd[2335]: pam_unix2(sshd:auth): pam_sm_authenticate: PAM_SUCCESS
Jan 29 13:38:27 lxd1100 sshd[2333]: Accepted keyboard-interactive/pam for lxinst from 10.48.100.90 port 2458 ssh2
Jan 29 13:38:27 lxd1100 sshd[2336]: pam_unix2(sshd:setcred): pam_sm_setcred() called
Jan 29 13:38:27 lxd1100 sshd[2336]: pam_unix2(sshd:setcred): username=[lxinst]
Jan 29 13:38:27 lxd1100 sshd[2336]: pam_unix2(sshd:setcred): pam_sm_setcred: PAM_SUCCESS

 

And here's the faillog display:

 

Login       Failures  Maximum  Latest                    On

lxinst             1        0  01/29/10 13:38:26 -0500   cnu83757xg.

 

Any idea where I've screwed up, or where/how I can look for the real
failure?

 

 

Thanks!

 

Shannon Collinson 

Systems Programmer, Mainframe Operating Systems

 

SunTrust Banks, Inc. 

Mail Code GA-ATL-4030

250 Piedmont Ave. NE, Suite 1600

Atlanta, GA 30308

Tel: 404.827.6070  Mobile: 404.642.1280

Fax: 404.581.1688

shannon.collin...@suntrust.com 

 

Live Solid. Bank Solid. 
  
  
  


Re: weird problem with pam_tally in SLES10SP2

2010-01-29 Thread Collinson.Shannon
Ooh!  That's it, sorta!  I think the syntax has changed slightly, but I
needed just pam_tally.so in the common-account file, and now it resets
the bugger after a successful login as it's supposed to.  It also tracks
up to the 10 bad passwords before it locks the user, and if they enter a
correct password before then, it resets the count to 0.  Just like it's
supposed to work!  Thank you so much, Marcy!
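
For anyone who finds this thread later, the working combination boils down to
these two lines (a sketch of what we ended up with; option names may differ
slightly on other service levels):

   /etc/pam.d/common-auth:     auth     required  pam_tally.so onerr=fail deny=10
   /etc/pam.d/common-account:  account  required  pam_tally.so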

Shannon Collinson 
shannon.collin...@suntrust.com
Tel: 404.827.6070  Mobile: 404.642.1280
Fax: 404.581.1688 
  
  
  


Re: SWAPGEN version 0803

2008-05-02 Thread Collinson.Shannon
I had this same problem and noticed that an invalid character was being
appended to the uploaded swpg0803 file (on the last line--3589).  I just
deleted that line and everything ran fine...
   Shannon Collinson--SunTrust Bank
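
If anyone else hits it, trimming the stray record in XEDIT only takes a moment;
the sequence below (line number taken from the error in the quoted note)
positions to the bad last record, deletes it, and files the result:

   XEDIT SWPG0803 EXEC
   :3589
   DELETE
   FILE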

-Original Message-
From: Linux on 390 Port [mailto:[EMAIL PROTECTED] On Behalf Of
Gregg Levine
Sent: Friday, May 02, 2008 11:31 AM
To: LINUX-390@VM.MARIST.EDU
Subject: Re: SWAPGEN version 0803

On Fri, May 2, 2008 at 11:26 AM, Bernard Wu [EMAIL PROTECTED] wrote:
 Hi Dave,
  Same error.

  SWPG0803

   3589 +++   ?
   3589 +++?
601 +++   temp = sourceline(l)
579 +++ sourcel = GetSourceLine(badline)
  DMSREX460E Error 13 running SWPG0803 EXEC, line 3589: Invalid
character in
  program

  The only difference when not specifying "quote site fix 80" is that
  the file becomes a variable-format file, with 60 blocks when it gets to VM:

  Cmd   Filename Filetype Fm Format Lrecl Records Blocks Date    Time
        SWPG0803 EXEC     D1 V         75    3589     60 5/02/08 11:10:27

  whereas specifying "quote site fix 80" makes it a fixed-block file, with
  71 blocks:

  Cmd   Filename Filetype Fm Format Lrecl Records Blocks Date    Time
        SWPG0803 EXEC     D1 F         80    3589     71 5/02/08 9:57:48

  Also, being a newbie to VM, I'm not familiar with the MAILABLE extension.
  Is it totally self-contained, or does it need other utilities for it to
  work?
  The VM guest that I ftp'd to is a brand new guest, with nothing in it
  except for PROFILE.EXEC.

  Bernie





Hello!
I seem to remember a discussion regarding files of that type, the ones
that end in MAILABLE; they are the VM equivalent of self-extracting
files, or that's what I inferred. Try running the file and see what it
says, and do indeed remember to set it at the FIX 80 setting on its
way to your guest.


--
Gregg C Levine [EMAIL PROTECTED]
This signature was once found posting rude
 messages in English in the Moscow subway.



Re: how can I mount /tmp as a tmpfs in SLES9?

2008-03-18 Thread Collinson.Shannon
Thanks for all the explanation, folks--after reading through them, I'm
definitely sticking with physical data for /tmp, as it seems a lot
safer.  And while we're not crunched for memory yet, I'm sure that day
will come...

Also, Mark, thanks for the link to the funtoo site for tmpfs info--if
I'd decided to try memory-backed /tmp, that would definitely have
cleared up my questions.  Even if that had popped up early in my search
(I was doing tmpfs vm tmp, I think), I'm not sure I would have thought
to check a site called funtoo--good to know for future searches!

Thanks again, everyone!
Shannon 
  
  
  


how can I mount /tmp as a tmpfs in SLES9?

2008-03-17 Thread Collinson.Shannon
And is this a bad idea?  In the USS world at our shop, we've had our
/tmp directory mounted as a temporary file system (backed in memory) for
a decade with no problems, but we don't run all that much in USS.  I
know that it's possible to mount /tmp in memory (it's mentioned in a few
read-only-root redbooks) but I can't seem to find the mechanics of it.
So either it's such a no-brainer that it doesn't bear mentioning for the
most part, or it's a bad idea (explained in some doc I haven't been able
to google).  Which is it?  Thanks!
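
(For completeness, in case anyone does want the mechanics: a memory-backed /tmp
is just an /etc/fstab entry along these lines; the size cap is illustrative.)

   # cap the size so a runaway /tmp can't eat all of memory (256m is illustrative)
   tmpfs   /tmp   tmpfs   size=256m,mode=1777   0 0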

 

Shannon Collinson l Mainframe Operating Systems l ETI l SunTrust Banks l
404.827.6070 (office) l 404.642.1280 (mobile)

Seeing beyond money (sm) 
  
  
  


Re: Is it possible to do an initial SLES9 install on a partially-LVM root?

2008-03-14 Thread Collinson.Shannon
We don't have mod54's yet--this is just a mod9, and we'll have static
PAV aliases defined for the mod54s--but I assume the same logic applies.
Does this mean that I can't use LVM for the four mountpoints (/usr,
/var, /opt and /home) at all until we have PAV set up?  I could define
minidisks on different volumes, but at the moment we're
storage-crunched--I've only got 3 mod9s to work with.  If I stick with
the current plan (/ on a separate disk, the other 4 on LVM), I'd have at
least two filesystems overlapping--3 volumes but 5 mountpoints.  I don't
have a problem with the I/O requests waiting in that case, but it
appears that the SLES9 install is trying to mount the LVM filesystems at
once, since otherwise I would have thought I'd just have a slow
install, not get the "File descriptor 3 left open" error.  Everything
worked fine when I was installing all of root on a single minidisk...

-Original Message-
From: Linux on 390 Port [mailto:[EMAIL PROTECTED] On Behalf Of
Bruce Hayden
Sent: Friday, March 14, 2008 9:01 AM
To: LINUX-390@VM.MARIST.EDU
Subject: Re: Is it possible to do an initial SLES9 install on a
partially-LVM root?

You need to remember that you can't start more than 1 I/O to an
address (real or virtual!) at a time.  This is a hardware restriction.
 This is why PAV was invented - so that the PAV alias addresses can be
used to start parallel I/Os to the same base volume.  So - if you
don't have PAV, then it will not help to split up your mod 54 into
minidisks.  Linux may try to start more than 1 I/O at a time (since it
can start one to each virtual address) but  z/VM will only execute
them one at a time and the other I/O requests will wait.

If you had HyperPAV, then splitting up the mod 54 into minidisks
works, because then z/VM will use the PAV pool for that device to do
I/Os to that device in parallel.  If you have static PAV, the same
thing also works, but the alias addresses have to be set up in
advance.  If you had SLES 10, then instead, I'd recommend that you
define virtual PAV aliases to the guest with 1 minidisk and use
multipath on Linux for overlapped I/O.  But - this is not supported on
SLES 9. 
  
  
  


Re: Is it possible to do an initial SLES9 install on a partially-LVM root?

2008-03-14 Thread Collinson.Shannon
Well, it might just boil down to trying to do 5 stripes when I had only
a single volume--I just tried again today with no striping and the setup
worked.  I could swear I tried that yesterday, but perhaps I just redid
definitions on the IPLed install-linux system (hit abort after I got the
LVM error, redefined the LVM filesystems, and retried the install)
instead of starting from scratch like I did now.

The reason we want to do striping is so that linux can take advantage of
the static PAV aliases when we do get our mod-54s.  I'm trying to build
something now that we could clone to the mod54s later.  However, I was
already intending to set up the application LVM group with at least 5
minidisks (even if they were all from the same physical mod54).  The
root filesystem isn't really big enough to justify creating a lot of
minidisks for it.  I think I was striping the LVM just as a
matter-of-course (stripe one LVM, stripe them all), not really thinking
about the mechanics of it.  
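
For the archives: striping only buys anything once the volume group has several
physical volumes behind it, so with, say, three minidisks in the group the
create would look roughly like this (a sketch; names, stripe count and sizes
are illustrative):

   # 'system' VG as in the messages above; three PVs assumed, 64 KB stripe size
   lvcreate -i 3 -I 64 -L 2G -n usr system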

Think I'm sorted now--thanks for all the help!

-Original Message-
From: Linux on 390 Port [mailto:[EMAIL PROTECTED] On Behalf Of
RPN01
Sent: Friday, March 14, 2008 9:35 AM
To: LINUX-390@VM.MARIST.EDU
Subject: Re: Is it possible to do an initial SLES9 install on a
partially-LVM root?

One disk or twenty, LVM for all or just a few filesystems, it should
still
work. So you still have some problem in your definitions that hasn't
been
revealed here yet.

Since you're running into the problem during the install, there aren't a
lot
of files you could post here for review. But could you walk through, in
detail, what you are doing during the install, from the point of
formatting
the DASD through to the error? What options are you selecting on which
panels, and what steps you go through to define each physical volume,
volume
group and logical volume? Perhaps, given this information, others here
can
spot the problem, or test your config on their own systems and find the
mis-step.

If you can, take pictures of the major windows involved in the
process.
I'm not sure that the group here will allow posting them, but you could,
at
the very least, create a Flickr account and post them there, and then
supply
the URL to get to them here.

--
Robert P. Nix  Mayo Foundation.~.
RO-OE-5-55 200 First Street SW/V\
507-284-0844   Rochester, MN 55905   /( )\
-^^-^^
In theory, theory and practice are the same, but
 in practice, theory and practice are different.





Is it possible to do an initial SLES9 install on a partially-LVM root?

2008-03-13 Thread Collinson.Shannon
I was trying to do this with a new guest by putting the following
directories in LVM (group system with one volume-204):

 

/usr

/var

/opt

/home

 

The rest of the root filesystem (including /boot) is on a regular
physical volume (200), and the swap disks are on two v-disks (201 and
202) and one physical disk (203).  Everything looks fine in the
partitioner, but when I actually start the install and get to the
"preparing your hard disk" panel, I get:

 

  LVM Error

  lvcreate -A n -i 5 -l 64 -n usr -l 359 system

File descriptor 3 left open

 

I thought it might be the striping, but I get the same error (with
slightly different lvcreate parms) when I don't stripe.  I can't figure
out what the problem is.  I'm hoping it's obvious to you guys...

 

Also, I tried to set up the v-disks as FBA (using SWAPGEN) since I
thought that was required for SLES9-64bit, but wound up having to use
DIAG instead to get through Partitioner.  If anyone knows how to make
FBA disks work for the initial install, let me in on the secret.  

 

I was able to effectively install SLES9-sp4 before I tried the funky
stuff (vdisks and LVM), but we're trying to come up with a
best-practices installation model, leading to new and exciting errors
for the z/OS sysprog (me) to blindly stumble into.  If the actual answer
to both of these problems is you can't do that from here-get a good
bare-bones install and then modify the server accordingly, I'll
understand...  Thanks!

 

Shannon Collinson l Mainframe Operating Systems l ETI l SunTrust Banks l
404.827.6070 (office) l 404.642.1280 (mobile)

Seeing beyond money (sm) 
  
  
  


Re: Is it possible to do an initial SLES9 install on a partially-LVM root?

2008-03-13 Thread Collinson.Shannon
Yes, there is a partition on the volume.  Sorry I wasn't clear about
that! 

So we need more than one minidisk even to set up the filesystem as
striped?  I'd say that was our problem, except that I did try it without
striping (so it shows up as 1 for stripes in YaST2).

We're not necessarily planning on a lot of data for single guests--it's
just that we'll be getting dasd on the VM lpar as mod54s (except for the
CP-OWNED volumes which will be mod9s).  So single physical mod54s will
be carved up into multiple minidisks for several different guests.

We might have PAV support in VM 5.3, but apparently (according to the
IBM storage guy we talked to last week), we either need to buy HyperPAV
(not gonna happen any time soon) or statically assign aliases in the
storage box to use it.  The static alias assignment hasn't happened yet,
so I figure we don't have PAV...


-Original Message-
From: Linux on 390 Port [mailto:[EMAIL PROTECTED] On Behalf Of
Mark Post
Sent: Thursday, March 13, 2008 6:02 PM
To: LINUX-390@VM.MARIST.EDU
Subject: Re: Is it possible to do an initial SLES9 install on a
partially-LVM root?

 On Thu, Mar 13, 2008 at  5:50 PM, in message
[EMAIL PROTECTED],
Collinson.Shannon [EMAIL PROTECTED] wrote: 
 Yeah, but we only have one disk in the LVM.  And while we want
striping

So, does that mean you did put a partition on the volume?  It's not
really clear from your answer.

 in place (because we'll be using mod54s with PAV in the near future),
at

To be able to use striping, you need more than one virtual
disk/minidisk.  Mod-54s sounds like you're planning on a lot of data.
SCSI over FCP should be considered at that point.

 the moment we don't have PAV either.  Maybe the real problem is that

If you're running on z/VM (and you are), you have PAV support in your
software.

 I've got 4 different filesystems (/var, /usr, /opt and /home) in a
 single minidisk with only one reader at the moment?  

No.

 If so, could I fake out linux by chopping the minidisk into 4 (or
more)
 minidisks for the LVM, even though they'll still be on one physical VM
 volume?  Or since there's still only a base address for that VM
volume,
 will that not matter?  

I wouldn't do that.  You have a problem somewhere else, but doing that
isn't going to help (I believe).


Mark Post



qeth setup cause code documentation

2008-02-04 Thread Collinson.Shannon
Can someone point me to where the qeth setup return codes are
documented?  I've found a few mentioned in scattered postings online,
but none that represent our error (0xf6).  I figure knowing where TFM is
will help me R it...  Thanks!

 

For those interested, we're getting the following when attempting to set
up the first Linux guest on a new VM lpar.  Oddly, this only popped up
after we had to reconfigure the ethernet settings (we had the DNS ip
address wrong) during the Linux installation, but now we can't get rid
of it.  (Even an IPL of the lpar didn't help)

 

...

Enter MAC address (leave empty for VSwitch) ():
Device 0.0.7000 configured
qeth: received an IDX TERMINATE with cause code 0xf6
qeth: sense data available on channel 0.0.7000.
qeth:  cstat 0x0   dstat 0xE
qeth: irb: 00 c2 60 17  7b 4b d0 38  0e 00 10 00  00 80 00 00
qeth: irb: 01 02 00 00  00 00 00 00  00 00 00 00  00 00 00 00
qeth: sense data: 02 00 00 00  00 00 00 00  00 00 00 00  00 00 00 00
qeth: sense data: 00 00 00 00  00 00 00 00  00 00 00 00  00 00 00 00
qeth: Initialization in hardsetup failed! rc=-5
eth0: error fetching interface information: Device not found

eth0 not available, check device addresses/cards.

Do you want to retry the qeth-setup (Yes/No) ?

 

 

Shannon Collinson l Mainframe Operating Systems l ETI l SunTrust Banks l
404.827.6070 (office) l 404.642.1280 (mobile)

Seeing beyond money (sm) 
  
  
  


Re: qeth setup cause code documentation

2008-02-04 Thread Collinson.Shannon
We get the following when I issue that command:

cp q nic details  
Adapter 7000  Type: QDIO  Name: UNASSIGNED  Devices: 3
  MAC: 02-00-00-00-00-02 VSWITCH: SYSTEM VSTRX202 
  RX Packets: 0  Discarded: 0  Errors: 0  
  TX Packets: 6  Discarded: 0  Errors: 0  
  RX Bytes: 0TX Bytes: 460
  Unassigned Devices: 
  Device: 7000  Unit: 000   Role: Unassigned  
  Device: 7001  Unit: 001   Role: Unassigned  
  Device: 7002  Unit: 002   Role: Unassigned

Think that means we've got our 7000, but it just isn't in use yet (due
to the failure to config it), right?  Would we get this info if we lost
access to the vswitch?

-Original Message-
From: Linux on 390 Port [mailto:[EMAIL PROTECTED] On Behalf Of
Marcy Cortes
Sent: Monday, February 04, 2008 4:08 PM
To: LINUX-390@VM.MARIST.EDU
Subject: Re: qeth setup cause code documentation

 
Sure looks like you don't have a 7000.
What do you get with 
#CP Q NIC DETAILS

If on a vswitch with no ESM, you may have lost your access.
Set vswitch n grant vmid
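
(Spelled out, using the VSWITCH name from the Q NIC output above and an
illustrative guest user ID, that would be something like:)

CP SET VSWITCH VSTRX202 GRANT LNXGUEST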



Marcy Cortes 



  
  
  

3590 tape drive support in zLinux/zVM?

2008-01-15 Thread Collinson.Shannon
This is cross-posted on the linux-390 and ibmvm listservs...

 

 

Is there a manual or webpage somewhere that references all the types of
hardware that will work under various versions of zLinux and zVM?  Or
could someone tell me where I could confirm that a Magstar 3590 model
E11 standalone tape drive would work under zVM 5.3 and/or SLES9 SP3?
We're new to the zVM/zLinux world (just finished with our POC and now
trying to ready an LPAR for production use) and didn't realize till just
recently that we'd need a tape drive to do a standalone dump of the zVM
lpar.  We have some no-longer-in-use 3590s available, so we thought we'd put
them on the lpar as a temporary solution, but don't know how to ensure
they're supported.  Thanks!  

 

Shannon Collinson l Mainframe Operating Systems l ETI l SunTrust Banks l
404.827.6070 (office) l 404.642.1280 (mobile)

Seeing beyond money (sm) 
  
  
  


Re: Betr.: 3590 tape drive support in zLinux/zVM?

2008-01-15 Thread Collinson.Shannon
It'll probably be ficon-attached.  At the moment, the 3590s are just
standing idle, no longer hooked up, so I guess it's possible we could
scsi-attach them, but I think that's pretty far-fetched.  I'm not really
sure that we'd use these for Linux--they'd certainly suck for TSM, being
manual loaders--but I just wanted to know if it was possible.  I think
really we're looking at them for VM, in case we have to do a standalone
dump...

Thanks for the info!

-Original Message-
From: Linux on 390 Port [mailto:[EMAIL PROTECTED] On Behalf Of
Pieter Harder
Sent: Tuesday, January 15, 2008 3:43 PM
To: LINUX-390@VM.MARIST.EDU
Subject: Betr.: 3590 tape drive support in zLinux/zVM?

3590 attached how? By controller with Ficon/Escon, or SCSI through some
FC converter?
 
If Ficon, then this works and has been for years. I myself have 4x
3590-E11 attached to 3590-A60 salvaged from a disused 3494 library. With
the 3590 drivers in Suse or Redhat I assume it will work too, but I
never had a need to try it as I have FC attached LTO drives as well.
 
If SCSI it will work with the IBMtape driver from any reasonable Linux
distro, but will not work for z/VM itself, because z/VM has no tools to
take advantage of FC-attached drives, e.g. DDR does not know anything but
Ficon and Escon.
 
 
 
Best regards,
Pieter Harder
 
[EMAIL PROTECTED] 
tel  +31-73-6837133 / +31-6-47272537

  
  
  

Brabant Water N.V.
Postbus 1068
5200 BC  's-Hertogenbosch
http://www.brabantwater.nl
Handelsregister: 16005077



Is anyone connecting to a Hitachi SAN-box with FCP NPiV?

2007-10-23 Thread Collinson.Shannon
We wanted to connect to our SAN-box using FCP NPiV for either
open-systems server storage (using TSM) or to implement the new GDPS
function of DR-mirroring the open-systems storage.  However, the IBM
representative we talked to said that they couldn't support us if we ran
into any problems (either with connectivity or possibly data corruption)
unless we were connecting to an IBM box.  It kinda scared us off the
idea.  Is anyone successfully using non-IBM storage (especially Hitachi)
with FCP NPiV?

 

Shannon Collinson l Mainframe Operating Systems l ETI l SunTrust Banks l
404.827.6070 (office) l 404.642.1280 (mobile)

Seeing beyond money (sm) 
  
  
  


Re: Tom Shepherd - in IBM Meeting in Dallas April 24-26

2006-04-25 Thread Collinson.Shannon
If you can tell me how to avoid it, I'd love to do so.  My company
recently opened up our "out of office" messages to go to external email
recipients (like this list) and I don't believe I can stop it from
within Microsoft Outlook.  As I'm required, by management, to set up an
out-of-office message when I'm out, the only thing I can think to do
would be to change my listserv settings whenever I'm out of town, and
(hopefully) remember to change them back when I get back (manual process
= lots of room for failure!).  

-Original Message-
From: Linux on 390 Port [mailto:[EMAIL PROTECTED] On Behalf Of
McKown, John
Sent: Tuesday, April 25, 2006 4:48 PM
To: LINUX-390@VM.MARIST.EDU
Subject: Re: Tom Shepherd - in IBM Meeting in Dallas April 24-26

 -Original Message-
 From: Linux on 390 Port [mailto:[EMAIL PROTECTED] On 
 Behalf Of Per Jessen
 Sent: Tuesday, April 25, 2006 3:33 PM
 To: LINUX-390@VM.MARIST.EDU
 Subject: Re: Tom Shepherd - in IBM Meeting in Dallas April 24-26
 
 
 McKown, John wrote:
 
  Then he should set up his list subscriptions as NOMAIL so 
 that we don't
  get splattered with out of office emails.
 
 Did anyone get more than just the one?  Most automatic 
 reply-systems are
 set up to only respond once.  Did you get splattered, John?
 
 
 /Per Jessen

In most cases, it is impossible for me to tell. I very aggressively
filter my email. I just checked, and it appears that this list (unlike
many others) does indeed only send a single "out of office" per person.
I wonder how they do that? I'm subscribed to a number of lists and do
tend to get splattered quite a bit. Or would if it weren't for the
previously mentioned filters. The original email made it through my
filters because it is not in a standard form (i.e. it doesn't say "out
of office").

However, I do maintain that it is impolite. Even if there is only a
single email of this type per person, it is distributed to a LARGE
audience. This costs the Marist university bandwidth, even if nothing
else.

If I came across as too hard on Tom, I will apologize. But I will
stick with feeling that it is impolite to do this. Especially when it can
be avoided (with some work).

--
John McKown
Senior Systems Programmer
HealthMarkets
Keeping the Promise of Affordable Coverage
Administrative Services Group
Information Technology
