If you can tell me how to avoid it, I'd love to do so. My company
recently opened up our out-of-office messages to go to external email
recipients (like this list), and I don't believe I can stop it from
within Microsoft Outlook. As I'm required, by management, to set up an
out-of-office message
We wanted to connect to our SAN box using FCP NPIV for either
open-systems server storage (using TSM) or to implement the new GDPS
function of DR-mirroring the open-systems storage. However, the IBM
representative we talked to said that they couldn't support us if we ran
into any problems (either
This is cross-posted on the linux-390 and ibmvm listservs...
Is there a manual or webpage somewhere that references all the types of
hardware that will work under various versions of zLinux and z/VM? Or
could someone tell me where I could confirm that a Magstar 3590 model
E11 standalone
to
take advantage of FC attached drives. E.g., DDR does not know anything but
FICON and ESCON.
Best regards,
Pieter Harder
[EMAIL PROTECTED]
tel +31-73-6837133 / +31-6-47272537
Collinson.Shannon [EMAIL PROTECTED] 01/15/08 9:23
This is cross-posted on the linux-390 and ibmvm listservs
Can someone point me to where the qeth setup return codes are
documented? I've found a few mentioned in scattered postings online,
but none that represent our error (0xf6). I figure knowing where TFM is
will help me R it... Thanks!
For those interested, we're getting the following when
-Original Message-
From: Linux on 390 Port [mailto:[EMAIL PROTECTED] On Behalf Of
Collinson.Shannon
Sent: Monday, February 04
I was trying to do this with a new guest by putting the following
directories in LVM (group system with one volume-204):
/usr
/var
/opt
/home
The rest of the root filesystem (including /boot) is on a regular
physical volume (200), and the
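For reference, a layout like the one described above is built with the standard LVM commands; the sketch below is illustrative only, with made-up DASD device names and sizes rather than the poster's actual 200/204 configuration:

```
# Sketch only: device names and sizes are placeholders, not the real config.
# Initialize the second volume as a physical volume and build the "system" group.
pvcreate /dev/dasdc1
vgcreate system /dev/dasdc1

# One logical volume per mountpoint.
lvcreate -L 1G -n usr  system
lvcreate -L 1G -n var  system
lvcreate -L 1G -n opt  system
lvcreate -L 1G -n home system

# Make filesystems and mount (ext3 was typical on SLES10-era systems).
mkfs.ext3 /dev/system/usr
mount /dev/system/usr /usr
```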
Collinson.Shannon [EMAIL PROTECTED] wrote:
Yeah, but we only have one disk in the LVM. And while we want
striping
So, does that mean you did put a partition on the volume? It's not
really clear from your answer.
in place (because we'll be using mod54s with PAV in the near future),
at
To be able
We don't have mod54's yet--this is just a mod9, and we'll have static
PAV aliases defined for the mod54s--but I assume the same logic applies.
Does this mean that I can't use LVM for the four mountpoints (/usr,
/var, /opt and /home) at all until we have PAV set up? I could define
minidisks on
In theory, theory and practice are the same, but
in practice, theory and practice are different.
On 3/14/08 8:15 AM, Collinson.Shannon [EMAIL PROTECTED]
wrote:
We don't have mod54's yet--this is just a mod9, and we'll have static
PAV aliases
And is this a bad idea? In the USS world at our shop, we've had our
/tmp directory mounted as a temporary file system (backed in memory) for
a decade with no problems, but we don't run all that much in USS. I
know that it's possible to mount /tmp in memory; it's mentioned in a few
read-only-root
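For anyone curious, mounting /tmp as a memory-backed tmpfs is a one-line /etc/fstab entry; the size cap below is an arbitrary illustrative value, not a recommendation from this thread:

```
# /etc/fstab fragment -- sketch; the 512m cap is an illustrative value
tmpfs  /tmp  tmpfs  size=512m,mode=1777  0  0
```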
Thanks for all the explanation, folks--after reading through them, I'm
definitely sticking with physical data for /tmp, as it seems a lot
safer. And while we're not crunched for memory yet, I'm sure that day
will come...
Also, Mark, thanks for the link to the funtoo site for tmpfs info--if
I'd
I had this same problem and noticed that an invalid character was being
appended to the uploaded swpg0803 file (on the last line--3589). I just
deleted that line and everything ran fine...
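Stripping a bad final line like that can also be done in place with sed; the sample file below is a stand-in for the uploaded swpg0803 file:

```shell
# Create a sample file whose last line is the bad one (stand-in for swpg0803).
printf 'good line 1\ngood line 2\nbad line\n' > sample.txt

# '$d' deletes the last line; -i edits the file in place (GNU sed).
sed -i '$d' sample.txt

cat sample.txt
```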
Shannon Collinson--SunTrust Bank
-Original Message-
From: Linux on 390 Port [mailto:[EMAIL
I'm new to supporting linux, being a mainframe z/OS sysprog, so this may
just be a user error, and I sincerely hope someone can say "Duh!" once I
explain this...
We're trying to build Linux-on-zSeries SLES10SP2 guests as close as
possible to the same level of Linux guests on Intel servers. As
Ooh! That's it, sorta! I think the syntax has changed slightly, but I
needed just pam_tally.so in the common-account file, and now it resets
the bugger after a successful login as it's supposed to. It also tracks
up to 10 bad passwords before it locks the user, and if they enter a
correct
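For anyone following along, a SLES10-era pam_tally setup along those lines would look roughly like this; the deny threshold and options shown are illustrative, not the exact lines from the poster's config:

```
# /etc/pam.d/common-auth -- count failed logins, lock after 10 (illustrative)
auth     required  pam_tally.so onerr=fail deny=10

# /etc/pam.d/common-account -- pam_tally here resets the counter on success
account  required  pam_tally.so
```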
Has anyone had any experience building Wireshark on SLES10 SP2? I
noticed that there's actually a Novell-delivered rpm for SLES11, but I
can't see anything like that for my release of zlinux, so I tried to
build it myself from source. It seemed to go pretty easily once I'd
pulled down a few
Subject: Re: building wireshark on SLES10SP2?
On 4/7/2010 at 06:08 PM, Collinson.Shannon
shannon.collin...@suntrust.com
wrote:
Has anyone had any experience building Wireshark on SLES10 SP2? I
noticed that there's actually a Novell-delivered rpm for SLES11, but I
can't see anything like
We're slowly converting our consoles over to OSA-ICC from
direct-attached consoles, and are hitting some snags with the z/VM
consoles. The z/OS consoles, and the z/OS SNA terminals, popped over
with no trouble but we can't seem to get the z/VM consoles working with
all the features. We've
In z/VM 5.4, we had two sets of packs (a set consisting of a res pack with the
code on it + a spool + a page volume) that we'd alternate between as we brought
up different levels of maintenance. We'd IPL off of the res pack for a set
and it'd pop in the single spool and page volumes associated
on my RHEL boxes, it uses about a quarter of the available memory too (and we
have ours stripped down to only 2G RAM, so that doesn't leave much for the
rest). And we noticed the messages only come out if you log in through the
console--doesn't care about an ssh-login. I looked at all the
Sorry if this is a dumb question, but I'm a z/OS sysprog trying to keep z/VM
and zlinux running at our shop so most of my questions are probably dumb. But
we're trying to get a vendor tool (CA's Workload Automation agent) running on
our new RHEL6.2 linux-on-z servers, and we can't find the
Has anyone else run into this? We were running RHEL 6.2 on s390x (z/VM 6.2,
actually) with no problems, but the same kickstart modified to point to the
RHEL 6.4 ISO seems to ignore the clearpart --initlabel --all/zerombr lines.
We have a standard (small) zlinux build that was defined in the
, Collinson.Shannon
shannon.collin...@suntrust.com wrote:
Has anyone else run into this? We were running RHEL 6.2 on s390x (z/VM 6.2,
actually) with no problems, but the same kickstart modified to point to the
RHEL 6.4 ISO seems to ignore the clearpart --initlabel --all/zerombr lines.
We have
We're trying to research an issue with our kickstarting that cropped up in
RHEL6.4 (it worked perfectly in RHEL6.2). RedHat support has asked us to wait
till the kickstart fails, then issue CTRL+ALT+F1 to get into pdb mode so
they can debug the anaconda stuff, but since we boot in z/VM, the
I'm actually using both of those (RUNKS=1 and cmdline) already. And I think we
know where in the code there's a problem--the clearpart --initlabel
--all/zerombr is not getting run against all disks, just disks mentioned later
in the kickstart. In 6.2, we could use it to wipe out the header
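For context, the disk-clearing lines in question take this generic RHEL6 kickstart form; this is the standard syntax, not the poster's exact file:

```
# Kickstart fragment (RHEL6 syntax) -- generic form, not the exact file
zerombr
clearpart --all --initlabel
```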
back
for something else I was working on.
Sandra
From: Collinson.Shannon shannon.collin...@suntrust.com
To: LINUX-390@VM.MARIST.EDU
Date: 07/16/2013 04:19 PM
Subject: Re: Any idea how to get into RedHat PDB mode on a
kickstart through a TN3270 console in z/VM?
Sent
We're running RHEL6.2 and RHEL6.4 on our zlinux servers under z/VM 6.2 and so
far have been handling HA by just clustering servers--yeah, any transaction in
process to a server that's gone down would be whacked in mid-air, but anything
new would route to the cluster-buddy that was still up.
Subject: Re: any good recommendations for an HA tool on zSeries?
On Friday, 10/04/2013 at 11:04 EDT, Collinson.Shannon
shannon.collin...@suntrust.com wrote:
We're running RHEL6.2 and RHEL6.4 on our zlinux servers under z/VM 6.2
and so
far have been handling HA by just clustering servers--yeah, any
whole lpars or CECs out of service for planned maintenance
as well without much human intervention.
Hope that is helpful.
Marcy
-Original Message-
From: Linux on 390 Port [mailto:LINUX-390@VM.MARIST.EDU] On Behalf Of
Collinson.Shannon
Sent: Friday, October 04, 2013 8:03 AM
To: LINUX-390
Has anyone tried to set up dumpconf to dumpreipl using a shared device? I was
thinking that with the vmcmd option, there might be a way to do it, and thus
only have one disk (or set of disks) tied up for possible dumps. Something
like the following commands:
LINK MAINT 999 999 MR
CPU ALL STOP
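A dumpconf configuration along those lines might look like the following; the device number and CP commands are placeholders following the sketch above, and whether a shared device can safely be linked and dumped to this way is exactly the open question:

```
# /etc/sysconfig/dumpconf fragment -- sketch only; device numbers are placeholders
ON_PANIC=vmcmd
VMCMD_1="LINK MAINT 999 999 MR"
VMCMD_2="VMDUMP"
VMCMD_3="IPL 190"
```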
send you a word doc if you'd like.
Marcy
-Original Message-
From: Linux on 390 Port [mailto:LINUX-390@VM.MARIST.EDU] On Behalf Of
Collinson.Shannon
Sent: Tuesday, November 12, 2013 2:03 PM
To: LINUX-390@VM.MARIST.EDU
Subject: [LINUX-390] configuring the s390 dump tools to dump to a shared
We are too. According to my BigFix guy, he created a "group and site with
auto-membership based on the OS-name" that throttled the cpu the clients are
allowed to use (.5% in our case--looks like Harley's folk let them have a
little more) and restricted gathering the inventory to what we
We're attempting to upgrade all our RHEL6 servers to RHEL7 and have just hit a
huge snag that didn't come up in testing (due to our test servers being rebuilt
pretty often). All our company-used servers still have LDL formatted minidisks
on them, and the upgrade process, at least, doesn't like
FYI--we did this recently using FDRPASVM (Innovation Data's z/VM companion to
FDRPAS for z/OS), and man, it was quick, easy, and no downtime. Of course, we
were doing it alongside z/OS data on the same storage controllers, and we'd
already owned FDR products for years, so it wasn't a big