Greetings; (Posted to VMESA-L and VSE-L and LINUX-390)
- - Now in its sixth year! - - Includes VSE and linux/390!
I have set up a public service web page at
http://www.eskimo.com/~wix/vm/
for posting positions available and wanted for VM, VSE and linux/390.
Please visit the web
I've found a workaround that isolates the problem...
After configuring an LVM via EVMS, I reboot the system,
and the mount after IPL is not possible...
It is possible only if I launch evms_activate manually after
the IPL, even though the boot process launches it during
initialisation!
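A common fix for this symptom (EVMS-managed volumes not activated early enough for the fstab mounts) is to run evms_activate from an early boot script, before local filesystems are mounted. A minimal sketch, assuming a SuSE-style init layout; the script name, volume names, and paths are illustrative:

```
# /etc/init.d/boot.local (or any boot script that runs before "mount -a")
# Activate EVMS-managed volumes so the LVM device nodes exist at mount time.
/sbin/evms_activate

# The fstab entry for the EVMS volume can then mount normally, e.g.:
#   /dev/evms/lvm/vg01/lv_data  /data  ext3  defaults  0 2
```

If evms_activate already runs during initialisation but the mount still fails, check that it runs *before* the localfs mount step in the init ordering.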
--- Patrick B. O'Brien [EMAIL
I also am reluctant to give QUICKDSP to virtual machines that might
consume a fair amount of resources. You'd let someone get in front of
the line only when you know he only has something small to do, i.e. it
will not increase your wait time too much. We know that SFS servers
sometimes can eat a
Richard,
that was exactly the problem, thank you very much indeed!
John.
Several months ago I worked with Gentoo for S390 and it used the 2.6.x
kernel. I found out I had to issue this type of command to get the
network interface active. This was for a qeth type device with device
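On 2.6 kernels the qeth device is brought online through sysfs rather than the 2.4 chandev interface. A hedged sketch of the kind of command meant here; the device numbers 0.0.0600-0.0.0602 are illustrative:

```shell
# Group the three qeth subchannels (read, write, data) into one ccwgroup device
echo 0.0.0600,0.0.0601,0.0.0602 > /sys/bus/ccwgroup/drivers/qeth/group

# Bring the grouped device online; the interface (eth0, hsi0, ...) then appears
echo 1 > /sys/bus/ccwgroup/drivers/qeth/0.0.0600/online
```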
Where is some good documentation on setting this up? Currently I have real
devices defined for each guest for hipersockets, so I am losing 4 addresses per
guest. Is there another way of doing this? What other approaches are there?
I have read documentation that seems to indicate that you
On Wed, Dec 01, 2004 at 03:38:28PM -0600, Little, Chris wrote:
We have the following guests set to QUICKDSP
REXECD (probably don't need it; we don't use rexec on the VM side)
If you don't use it, it DEFINITELY doesn't need QUICKDSP.
FTPSERVE (ditto? do, but rarely)
VMSERVR
VMSERVS
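For reference, QUICKDSP can be set either permanently in a guest's CP directory entry or dynamically from a privileged userid. A sketch; the userids are the ones named above:

```
* In the guest's CP directory entry:
OPTION QUICKDSP

* Or dynamically, from a class A userid:
CP SET QUICKDSP VMSERVS ON
CP SET QUICKDSP REXECD OFF
```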
Operating systems reach out to IODFs, which contain I/O definitions. One IODF
can define up to 65,536 devices. z/VM can have *WAY* more than 4 CHPIDs - not
sure where this restriction in your environment is coming from. It could be that
the z/VM LPAR's IODF at your shop only has 4 CHPIDs defined.
Yeah, somewhere our IODF is messed up, I think, and we need to take a look at it.
It does not make sense that we can only have 127 devices. This would limit
how many of our guests could have hipersockets, and we want all of them to
have hipersockets, and we want to go well over 50, and 127
On Thursday, 12/02/2004 at 10:02 CET, Rob van der Heij [EMAIL PROTECTED]
wrote:
I also am reluctant to give QUICKDSP to virtual machines that might
consume a fair amount of resources. You'd let someone get in front of
the line only when you know he only has something small to do, i.e. it
will
Sorry for any confusion - but there are limits on the am
From: Linux on 390 Port on behalf of Seader, Cameron
Sent: Thu 12/2/2004 10:36 AM
To: [EMAIL PROTECTED]
Subject: Re: 127 device limitation for hipersockets
Yeah, somewhere our IODF is messed up, I think,
Unless the limit has changed: Hipersockets: device limits: 3072 yielding 1024
usable hipersockets.
Must define CHPIDs as type IQD.
David
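The IQD CHPID definition David refers to is made in the IOCP source behind the IODF. A hedged sketch of the general shape; the CHPID number, partition names, control unit number, and device range are all illustrative, and exact keywords vary by IOCP level:

```
CHPID    PATH=(FE),TYPE=IQD,SHARED,PART=((LPAR1,LPAR2),(=))
CNTLUNIT CUNUMBR=FE00,PATH=(FE),UNIT=IQD
IODEVICE ADDRESS=(7000,032),CUNUMBR=FE00,UNIT=IQD
```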
From: Linux on 390 Port on behalf of Seader, Cameron
Sent: Thu 12/2/2004 10:36 AM
To: [EMAIL PROTECTED]
Subject: Re: 127
OK - I see where you could have gone awry. Unless the limit has changed: you
may have 4 CHPIDs type IQD for hipersockets. You can still have up to 1024
usable hipersockets.
If CHPID FE is IQD and CHPID FD is IQD and both have usable hipersockets you
CANNOT directly connect devices on
FE to FD.
A few comments on QUICKDSP; some opinion, some fact.
I tend to look at QUICKDSP in three lights:
1. As Barton mentioned any guest/server that another
guest depends on. He once used the phrase 'anything that
is an extension of the operating system (VM)' should have
quickdsp. Network,
On Thursday, 12/02/2004 at 07:19 MST, Seader, Cameron
[EMAIL PROTECTED] wrote:
Where is some good documentation on setting this up? Currently I have
real
devices defined for each guest for hipersockets, so I am losing 4
addresses per
guest. Is there another way of doing this? Which ways are
Cameron,
I just set up real hipersockets myself recently. Initially, 256 devices on
chpid xx cuadd 0. I should be able to add another 256 on cuadd 1
eventually, and another 256 on cuadd 2, etc.
I dedicate 3 addresses per Linux guest, allocated sequentially (potential
for 85 servers over 255
Here is where the limitation is coming from on the 127 devices. If you look at
IOS577I, which gives reference to
IOS577I IQD INITIALIZATION FAILED, COMPLETION TABLE FULL | SET IQD
PARAMETERS FAILED | FEATURE NOT INSTALLED
We are having the problem with the Completion
You might want to also consider using VSWITCH or hipersocket guest lans
that do not have these limits. We have nearly 600 Linux guests running
on 2 guest lans with no problems. We have several hundred VM and Linux
guests running on another and are pushing large amounts of data through them,
Ouch. Something is whacked out somewhere. What version of z/OS is this? Can you
consider placing the z/VM guests in a
guest lan, where one of the guest lan members uses hipersocket to get over the
partition line to LDAP? And route 'em all?
Until you redesign or this problem is fixed consider
On Thursday, 12/02/2004 at 09:09 MST, Seader, Cameron
[EMAIL PROTECTED] wrote:
IBM has told us that this limitation also applies to z/VM. This is per
LPAR, so I am limited to 127 devices online at one time. How have people
overcome this limit, if I want to have hipersockets on all of my
I'd use a VSWITCH for the small packet traffic. The OSAs would internally
switch things between partitions. I'd use real hipersockets for the big packet
stuff to directly connect partitions/machines. That way you're not relying on a
single machine being up and acting as a router as well as
To cross the partition border from VM LPAR to z/OS LPAR you can use
hipersockets, which you are doing, or
OSA devices (they can be shared), or real CTCAs - different types of
chpids can be configured as CTCAs - and
you can get a bunch of CTCAs from one channel. If you are running into
I attended an IBM OS/390 Suse Linux Class. In this class I installed Suse9 by
writing 3 files to the O/S 390 Card Punch, IPL'ing it and then installing
Suse9.
Is this procedure the best and easiest way to go to 9.0? Would it be easier to
upgrade from 8 to 9?
TIA!
-Original Message-
From: Linux on 390 Port [mailto:[EMAIL PROTECTED] On
Behalf Of Alan Altmark
Sent: Thursday, December 02, 2004 10:21 AM
To: [EMAIL PROTECTED]
Subject: Re: 127 device limitation for hipersockets
On Thursday, 12/02/2004 at 09:09 MST, Seader, Cameron
[EMAIL
-Original Message-
From: Linux on 390 Port [mailto:[EMAIL PROTECTED] On
Behalf Of Seader, Cameron
Sent: Thursday, December 02, 2004 10:09 AM
To: [EMAIL PROTECTED]
Subject: Re: 127 device limitation for hipersockets
Here is where the limitation is coming from on the 127
Martha,
What kind of architecture are you using? Are you on a z900 or z990? Could you
give some more details on your setup and architecture? What software do you use
for your monitoring? What authentication scheme do you use? etc.
Thanks,
Cameron Seader
[EMAIL PROTECTED]
-Original
On Thursday, 12/02/2004 at 10:41 CST, McKown, John
[EMAIL PROTECTED] wrote:
FWIW - I don't have a Hipersocket defined (just missed it), but I do
have an OSA-Express defined. The OSA has 220 addresses defined on it. I
created the IOCDS on z/OS. The z/OS systems only have 4 addresses
defined as
Running: SLES9 for S/390 (31-bit) in two LPARs (no VM)
Output from uname:
-s, --kernel-name     Linux
-r, --kernel-release  2.6.5-7.97-s390
-v, --kernel-version  #1 SMP Fri Jul 2 14:21:59 UTC 2004
-m, --machine         s390
-p, --processor       s390
-i,
I am experiencing the same thing. It's like the guests are not being dispatched.
Very strange.
-Cameron
-Original Message-
From: Linux on 390 Port [mailto:[EMAIL PROTECTED] Behalf Of
Brad Johnson
Sent: Wednesday, December 01, 2004 08:42
To: [EMAIL PROTECTED]
Subject: Linux Slowdown
Has
Hi Doug.
The device configuration of Linux 2.6.x is all-new; don't try to use any
2.4.x-related methods from documentation
or such ;).
Linux will automagically detect your tapes when you attach them (at
runtime or at boot - doesn't matter). The device
driver will automagically be loaded. You can
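For the 2.6 setup being described, checking and (if needed) forcing a tape device online is done with the s390-tools utilities. A hedged sketch; the device number 0.0.0580 is illustrative:

```shell
# List the subchannels and devices CP has attached to this guest
lscss

# Set a tape device online by hand if it did not come up automatically
chccwdev -e 0.0.0580

# The channel tape then appears via the tape390 driver,
# typically as /dev/ntibm0 (non-rewinding) and /dev/rtibm0
```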
I see that others have already given you similar suggestions. Hopefully, my
real life example can help. In our case, we have both a z/900 and a z/990.
The z/900 is running the production systems including nearly 600 Linux guests,
3 production z/OS systems and numerous VM based service machines.
Is there some way to define virtual hipersockets without real addresses?
That is exactly what a TYPE HIPER guest LAN is.
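For reference, a HiperSockets-type guest LAN consumes no real device addresses at all. A hedged sketch of the CP side; the LAN name and virtual device number are illustrative, and the DEFINE LAN is often done once in SYSTEM CONFIG:

```
* Define the simulated HiperSockets LAN:
CP DEFINE LAN HSLAN OWNERID SYSTEM TYPE HIPERS

* In each guest: create a simulated NIC and couple it to the LAN
CP DEFINE NIC 0500 TYPE HIPERS
CP COUPLE 0500 TO SYSTEM HSLAN
```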
What can we do? I can't set up a guest LAN, because I need all of my guests
to talk to z/OS,
since we have an LDAP server we authenticate to over on that side.
Not true
I attended an IBM OS/390 Suse Linux Class. In this class I installed Suse9
by writing 3 files
to the O/S 390 Card Punch, IPL'ing it and then installing Suse9.
If you're running VM on your S/390 or zSeries, that's as easy as it gets.
Since there's no SuSE for OS/390, I'm guessing that's what you
Yeah you're right, it is zSeries under VM.
So you're saying...
Create another VM Lpar and install SLES9 from there?
-Original Message-
From: Linux on 390 Port [mailto:[EMAIL PROTECTED] Behalf Of David Boyes
Sent: Thursday, December 02, 2004 11:07 AM
To: [EMAIL PROTECTED]
Subject:
On Dec 2, 2004, at 2:44 PM, Patrick B. O'Brien wrote:
Yeah you're right, it is zSeries under VM.
So you're saying...
Create another VM Lpar and install SLES9 from there?
Um, if you're running different versions of VM, or one in 64-bit mode
and one in 31-bit mode, yeah.
If not, then, no, just
I really DO hate to bring this up, but
It really does solve a lot of mysteries to have a performance
monitor that collects your linux and VM data.
Right now, i'm looking at some SAP data, Linux on z/VM,
a big linux server logs off and the master processor utilization
sky rockets, then the i/o
On Thu, Dec 02, 2004 at 12:44:35PM -0800, Patrick B. O'Brien wrote:
Yeah you're right, it is zSeries under VM.
Good. That makes this really easy.
Create another VM Lpar and install SLES9 from there?
Nope. Create a new userid on your existing VM system, and install the
SLES9 system in the new
According to FCON:
MDISK cache read hit ratio      98%
sounds good, doesn't it? But wait . . .
MDISK cache read hit rate       10/s
and then even worse
Act size in XSTORE         310,240kB
Act size in main stor.     553,176kB
even worse.
Is MDISK cache just wasted on my system?
The default settings make MDC pretty greedy on a large system.
Assuming you talk about a 64-bit z/VM system with more than 2G memory,
I would vote for setting MDC in XSTORE to 0M 0M, and trim the amount
in real memory with either a maximum setting or a bias (e.g. 0.1).
--
Rob van der Heij
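Rob's suggestion translates into CP commands roughly like the following. The sizes are illustrative, the syntax is from memory (check the CP Commands reference for your z/VM level), and the settings can also be placed in SYSTEM CONFIG:

```
* Disable minidisk cache in expanded storage entirely:
CP SET MDCACHE XSTORE 0M 0M

* Cap MDC in real storage, e.g. at 512M:
CP SET MDCACHE STORAGE 0M 512M

* Or instead apply a bias so MDC competes less aggressively for pages:
CP SET MDCACHE STORAGE BIAS 0.1
```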
Barton,
You're absolutely right. Without data, all anyone can do is guess. I
suspect the problem a lot of people are facing is that they don't (yet) have
any sort of budget for a lot of the things they're trying out. I agree that
anyone that has enough money to bring z/VM in house, but doesn't