VM VSE linux/390 Employment Web Page

2004-04-01 Thread Dennis G. Wicks
Greetings; (Posted to VMESA-L and VSE-L and LINUX-390) - - Now in its sixth year! - - Includes VSE and linux/390! I have set up a public service web page at http://www.eskimo.com/~wix/vm/ for posting positions available and wanted for VM, VSE and linux/390. Please visit the web

Re: SuSE SP3 Kernel panic: VFS: Unable to mount root fs on 5e: 01 ( was Re: Myth of the 1K blocksize

2004-04-01 Thread Gene Walters
You hit it right on the head. Somehow tar was gone from this instance of Linux. It's funny, this is the one I use to clone, and I just went to another instance, found it and copied it back. Once tar was there, it worked great. I really appreciate your help. Thanks, Gene [EMAIL PROTECTED]

Re: Adding QDIO guest lan to server

2004-04-01 Thread Kern, Thomas
The Q NIC shows E04/E05/E06 while the original CHANDEV.CONF reads E03/E04/E05, because I changed from 3/4/5 to 4/5/6 to see if that would fix it, and then I did the Q NIC to respond to Mark. Last night I took out the chandev= string and rebooted. The LCS driver failed to come up but the QETH driver

Xwindows for Windows

2004-04-01 Thread Gene Walters
I discovered something last week, and I don't know if most of you know about it or not, so I thought I would share. I found a program called Cygwin. The feature I like is that it lets you do X Windows stuff from Linux back to your Windows desktop (i.e. yast2, etc.). Up until now I had been looking at
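A minimal sketch of that workflow, assuming Cygwin's X server (XWin) and OpenSSH are installed on the Windows desktop, sshd on the guest allows X11 forwarding, and 'linuxguest' is a placeholder host name (the X startup script name varies by Cygwin release):

    # On the Windows desktop, start the Cygwin X server
    startxwin
    # From a Cygwin shell, connect to the Linux/390 guest with X11 forwarding
    ssh -X root@linuxguest
    # On the guest, launch an X client; its window opens on the Windows desktop
    yast2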

Re: Adding QDIO guest lan to server

2004-04-01 Thread Aria Bamdad
On Thu, 1 Apr 2004 09:01:06 -0500 Kern, Thomas said: Last night I took out the chandev= string and rebooted. The LCS driver failed to come up but the QETH driver worked fine. There does seem to be some incompatibility between LCS and QETH. I doubt IBM ever tried both in the same linux server. I

Re: SuSE SP3 Kernel panic: VFS: Unable to mount root fs on 5e: 01 ( was Re: Myth of the 1K blocksize

2004-04-01 Thread Daniel Jarboe
It's funny, this is the one I use to clone, and I just went to another instance, found it and copied it back. Once tar was there, it worked great. I really appreciate your help Glad you are back in business now, though the missing binary IS disconcerting. You should probably take inventory

Re: Adding QDIO guest lan to server

2004-04-01 Thread Post, Mark K
Did you remove the complete chandev= statement, or just remove the chandev= part of it? You need the rest of the statement, just not the chandev= prefix on it. Mark Post -Original Message- From: Linux on 390 Port [mailto:[EMAIL PROTECTED] On Behalf Of Kern, Thomas Sent: Thursday, April

Re: Adding QDIO guest lan to server

2004-04-01 Thread Kern, Thomas
I removed JUST the chandev= string preserving the rest of the line. The LCS code attempted to initialize but failed saying it could not find any LCS capable cards. /Thomas Kern /301-903-2211 -Original Message- From: Post, Mark K [mailto:[EMAIL PROTECTED] Sent: Thursday, April 01, 2004

Re: Adding QDIO guest lan to server

2004-04-01 Thread Geiger, Mike
Thomas, Been there, done that... just recently. Change the comma after the noauto to a semicolon(;). Michael A. Geiger Sr. Operating Systems Programmer CommerceQuest, Inc. 5481 W. Waters Ave. Tampa, FL 33634 Tel. 813.639.6516 -Original Message- From: Kern, Thomas [mailto:[EMAIL
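In chandev syntax, semicolons separate directives while commas separate fields within a directive, so with illustrative device addresses the fix looks like this:

    # broken: the comma leaves lcs0,... hanging off the noauto keyword
    chandev=noauto,lcs0,0x39e0,0x39e1,0,0,1,1
    # fixed: the semicolon ends the noauto directive and starts the lcs0 one
    chandev=noauto;lcs0,0x39e0,0x39e1,0,0,1,1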

Re: Adding QDIO guest lan to server

2004-04-01 Thread Kern, Thomas
After changing 'CHANDEV=noauto,' to 'CHANDEV=noauto;' this is what I get.
cat /etc/chandev.conf
chandev=noauto;lcs0,0x39eo,0x39e1,0,0,1,1
add_parms,0x10,0xe00,0xe02,portname:VMLAN00
qeth1,0xe00,0xe01,0xe02,0,0
add_parms,0x10,0xe04,0xe06,portname:VMLAN02
qeth2,0xe04,0xe05,0xe06,0,0
Console log on

Re: Myth of the 1K blocksize on eckd - revisited

2004-04-01 Thread Fargusson.Alan
I don't think that you get this advantage on eckd. My understanding is that Reiser uses the fact that SCSI and IDE disks use 512-byte sectors to get the small-file space savings. In other words: it does not read/merge/write, it just writes 512 bytes at a time for small files, and 4K at a time

Re: Adding QDIO guest lan to server

2004-04-01 Thread Geiger, Mike
Thomas, Remove the 'chandev='. I think you want noauto;lcs0,0x39eo,0x39e1,0,0,1,1 Michael A. Geiger Sr. Operating Systems Programmer CommerceQuest, Inc. 5481 W. Waters Ave. Tampa, FL 33634 Tel. 813.639.6516 -Original Message- From: Kern, Thomas [mailto:[EMAIL PROTECTED] Sent:

s390 storage key inconsistency? [was Re: msync() behaviour broken for MS_ASYNC, revert patch?]

2004-04-01 Thread Stephen C. Tweedie
Hi, On Thu, 2004-04-01 at 17:19, Jamie Lokier wrote: Some documentation I'm looking at says MS_INVALIDATE updates the mapped page to contain the current contents of the file. 2.6.4 seems to do the reverse: update the file to contain the current content of the mapped page. man msync agrees
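For context, the interface in question is msync(2); below is a minimal C sketch (file name and length are invented, and the file needs to be at least a page long) of dirtying a shared mapping and calling msync with MS_ASYNC and MS_INVALIDATE, the flags whose semantics the thread is debating:

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("/tmp/example.dat", O_RDWR);      /* hypothetical file */
        if (fd < 0) { perror("open"); return 1; }

        size_t len = 4096;
        char *map = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (map == MAP_FAILED) { perror("mmap"); close(fd); return 1; }

        memcpy(map, "hello", 5);                        /* dirty the mapped page */

        /* Schedule writeback of the dirty page without waiting (MS_ASYNC) and
           invalidate other mappings of the file (MS_INVALIDATE). */
        if (msync(map, len, MS_ASYNC | MS_INVALIDATE) < 0)
            perror("msync");

        munmap(map, len);
        close(fd);
        return 0;
    }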

Re: Adding QDIO guest lan to server

2004-04-01 Thread Post, Mark K
Try this:
noauto
lcs0,0x39eo,0x39e1,0,0,1,1
qeth1,0xe00,0xe01,0xe02,0,0
add_parms,0x10,0xe00,0xe02,portname:VMLAN00
qeth2,0xe04,0xe05,0xe06,0,0
add_parms,0x10,0xe04,0xe06,portname:VMLAN02
Mark Post -Original Message- From: Linux on 390 Port [mailto:[EMAIL PROTECTED] On Behalf Of Kern,

Re: Adding QDIO guest lan to server

2004-04-01 Thread Kern, Thomas
All drivers fail on that one, Mark. This level of Linux/LCS really wants the 'chandev='. This is the latest that allows at least the LCS driver to come up: chandev=noauto lcs0,0x39eo,0x39e1,0,0,1,1 add_parms,0x10,0xe00,0xe02,portname:VMLAN00 qeth1,0xe00,0xe01,0xe02,0,0

Sharing root mdisk with SLES 8.0

2004-04-01 Thread Carlos Romero-Martin
Hello all, I am trying to share my root mdisk read-only with other servers, but when I load Linux (IPL) I receive input/output errors because the root DASD is not in read-write mode. Do you have any method or procedure for sharing it between several servers? Thanks A+ -- Carlos ROMERO-MARTIN

Re: Sharing root mdisk with SLES 8.0

2004-04-01 Thread Post, Mark K
Carlos, I don't know of anyone who's really spent any time trying to do this. The benefit you would gain from it (saving ~15MB per instance) really isn't that big. Especially compared to the pain involved. Mark Post -Original Message- From: Linux on 390 Port [mailto:[EMAIL PROTECTED]

Re: Sharing root mdisk with SLES 8.0

2004-04-01 Thread Phil Payne
I don't know of anyone who's really spent any time trying to do this. The benefit you would gain from it (saving ~15MB per instance) really isn't that big. Especially compared to the pain involved. Wouldn't one set of cache entries versus dozens or hundreds make a difference in a large

Recommended memory for a VM LPAR

2004-04-01 Thread James Melin
Because of the timing of a meeting with IBM and SuSE and our own maintenance schedule, we re-carved the memory on our z/800 to create a 2 GB Linux LPAR, a 1 GB Linux LPAR and my 768 MB twiddling LPAR. Then things happened and a schedule for IBM to come in and do a bunch of stuff (loan us an IFL,

new disk for linux

2004-04-01 Thread Scorch Burnet
I am trying to add another disk to one of my SLES8 Linux images on z/VM. I added the MDISK statement to the user direct, executed DIRECTXA, and logged onto the guest. After CMS-formatting the new mdisk, I added the new disk address to /boot/zipl/parmfile and ran zipl. After rebooting, I

Re: new disk for linux

2004-04-01 Thread Hall, Ken (IDS ECCS)
I think you added the device to the wrong file. I usually add it to /etc/zipl.conf, then run zipl. I believe zipl copies zipl.conf to the /boot files (among other things). -Original Message- From: Linux on 390 Port [mailto:[EMAIL PROTECTED] Behalf Of Scorch Burnet Sent: Thursday,
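For illustration only (device numbers and root device are invented): on SLES8 the DASD list that ends up in the boot parameter data normally sits on the parameters line of a boot section in /etc/zipl.conf, so adding a minidisk at 0204 means appending it there before re-running zipl:

    # before
    parameters = "dasd=0201,0202,0203 root=/dev/dasda1"
    # after adding the new 0204 minidisk
    parameters = "dasd=0201,0202,0203,0204 root=/dev/dasda1"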

Re: Recommended memory for a VM LPAR

2004-04-01 Thread Wolfe, Gordon W
We're running z/VM 4.3 and 37 penguins happily on 2GB right now. We will upgrade to 6GB this weekend, though. Nature and nature's laws lay hid in night: God said, 'Let Newton Be!' and all was light. - Alexander Pope It did not last; the Devil howling 'Ho! Let Einstein Be!' restored the status

Re: Recommended memory for a VM LPAR

2004-04-01 Thread Rich Smrcina
As long as you're stingy on virtual machine memory allocations, gobs of penguins can be squeezed into 2GB. 'Gobs', of course being one of those scientific numeric expressions... :) On Thu, 2004-04-01 at 14:42, James Melin wrote: Because of the timing of a meeting with IBM and SuSE and our own

Re: new disk for linux

2004-04-01 Thread Post, Mark K
Umm, no, it does not. With SLES8, SUSE went to using an initrd. The HOWTO I referenced in another reply talks about how to update that. Mark Post -Original Message- From: Linux on 390 Port [mailto:[EMAIL PROTECTED] On Behalf Of Hall, Ken (IDS ECCS) Sent: Thursday, April 01, 2004 3:47

Re: new disk for linux

2004-04-01 Thread Post, Mark K
Take a look at this: http://linuxvm.org/Info/HOWTOs/mkinitrd-notes.html Mark Post -Original Message- From: Linux on 390 Port [mailto:[EMAIL PROTECTED] On Behalf Of Scorch Burnet Sent: Thursday, April 01, 2004 3:44 PM To: [EMAIL PROTECTED] Subject: new disk for linux I am trying to add

Re: Recommended memory for a VM LPAR

2004-04-01 Thread Marcy Cortes
Due to an addition error (not mine), I'm running about 25 penguins happily in 1.5G + 160Meg of xstor. It was supposed to have been .5G of xstor. That's the problem with sharing a box with z/OS - can't ever take it down! We'll probably have a lot more once these penguins fly north (and east) to a

Re: Recommended memory for a VM LPAR

2004-04-01 Thread Brian France
From what I've seen, and then some. Currently I have a 2 GB LPAR running z/VM 4.3, and 5 images. The biggest one is a WebSphere Commerce Business Edition V5.5 instance. At 03:42 PM 4/1/2004, you wrote: Because of the timing of a meeting with IBM and SuSE and our own maintenance schedule, we

Re: Recommended memory for a VM LPAR

2004-04-01 Thread James Melin
Yeah. All of it is defined to be central at the moment. What I will probably recommend is that we combine the 3 GB and 2 GB LPARs into one and make the 768 MB one extended storage and move level 1 swap into that. Marcy Cortes [EMAIL PROTECTED]

z/VM IFL special price for Systems/ASM

2004-04-01 Thread Thomas David Rivers
Just a quick note that our Systems/ASM assembler (DASM) is now available under z/VM with IFL engine prices. For z/VM RACF management, you need an assembler to do RACF updates... which is a special bid from IBM. We can help reduce your costs by substituting Systems/ASM. We have had some

Re: new disk for linux

2004-04-01 Thread Hall, Ken (IDS ECCS)
Uh, yes it does. Whether or not an initrd is used, the DASD parm comes from the boot parameter file created by zipl. That's on the disk you boot from, which may or may not also contain the root filesystem. (In my case it does.) I just changed /etc/zipl.conf on my test system, ran zipl, and

Enabling 3270 Consoles on Linux/390 in an LPAR

2004-04-01 Thread Post, Mark K
In response to an off-list inquiry as to how to enable TN3270 connections to Linux/390 running in an LPAR, I prepared this sequence of steps. The setup at the site used 2074 equipment to allow this sort of access for their other mainframe operating systems, and they were tired of having to use the

Re: Recommended memory for a VM LPAR

2004-04-01 Thread McKown, John
-Original Message- From: Rich Smrcina [mailto:[EMAIL PROTECTED] Sent: Thursday, April 01, 2004 2:52 PM To: [EMAIL PROTECTED] Subject: Re: Recommended memory for a VM LPAR As long as you're stingy on virtual machine memory allocations, gobs of penguins can be squeezed into 2GB.

Re: Recommended memory for a VM LPAR

2004-04-01 Thread Fargusson.Alan
I always thought that an oodle was a gob of gobs. -Original Message- From: McKown, John [mailto:[EMAIL PROTECTED] Sent: Thursday, April 01, 2004 1:38 PM To: [EMAIL PROTECTED] Subject: Re: Recommended memory for a VM LPAR -Original Message- From: Rich Smrcina [mailto:[EMAIL

Re: Recommended memory for a VM LPAR

2004-04-01 Thread Dennis Wicks
It depends on which side of the pond you are on!

Re: Recommended memory for a VM LPAR

2004-04-01 Thread Rich Smrcina
Sort of like the relationship between a googol and a googolplex. On Thu, 2004-04-01 at 15:58, Fargusson.Alan wrote: I always thought that an oodle was a gob of gobs. -Original Message- From: McKown, John [mailto:[EMAIL PROTECTED] Sent: Thursday, April 01, 2004 1:38 PM To: [EMAIL

Re: new disk for linux

2004-04-01 Thread Post, Mark K
On SLES8 systems (default, out of the box), the kernel modules and the script that invokes them are all in the initrd. At the time the DASD driver gets loaded by that script, there is no way for it to be able to access the contents of anything on disk. If you're not using an initrd, then of
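A sketch of the overall SLES8 sequence, going by the HOWTO Mark references above (exact mkinitrd behaviour, and where it picks the DASD list up from, can vary by service level):

    # 1. Append the new device to the dasd= list in /etc/zipl.conf
    vi /etc/zipl.conf
    # 2. Rebuild the initrd so the dasd driver sees the new device at boot
    mkinitrd
    # 3. Rewrite the boot loader/parameter data from zipl.conf
    zipl
    # 4. Reboot and check that the new disk shows up (e.g. as /dev/dasdb)
    shutdown -r now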

Re: new disk for linux

2004-04-01 Thread Scorch Burnet
Thank you all for your responses. It appears that changing /etc/zipl.conf instead of /boot/zipl/parmfile was the correct way. Since I am out here by myself, with old redbooks, it is good to know that there are people who are knowledgeable and willing to share their experiences. Thanks again to you

Re: Recommended memory for a VM LPAR

2004-04-01 Thread Marcy Cortes
The recommendation I was given when I asked a while back was at least 25% xstor, perhaps up to 50%. This is because of the hierarchical nature of the VM paging system. Marcy Cortes Wells Fargo Services Company -Original Message- From: Linux on 390 Port [mailto:[EMAIL PROTECTED] On
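Taking that percentage as a share of the LPAR's total storage, the rule of thumb applied to the 2 GB LPAR discussed above works out to roughly:

    25% xstor:  ~512 MB expanded storage + ~1.5 GB central storage
    50% xstor:  ~1 GB expanded storage   + ~1 GB central storage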

Re: Sharing root mdisk with SLES 8.0

2004-04-01 Thread Vic Cross
On Thu, 1 Apr 2004, Phil Payne wrote: Wouldn't one set of cache entries versus dozens or hundreds make a difference in a large environment? I suppose, but for the root filesystem there's generally too much system-unique stuff in there. Keeping that stuff unique while making the filesystem

Re: Sharing root mdisk with SLES 8.0

2004-04-01 Thread Post, Mark K
Vic, 150MB? I said about 15MB. I think I can fit a whole system into 150MB. :) Mark Post -Original Message- From: Linux on 390 Port [mailto:[EMAIL PROTECTED] On Behalf Of Vic Cross Sent: Thursday, April 01, 2004 7:05 PM To: [EMAIL PROTECTED] Subject: Re: Sharing root mdisk with SLES

Re: Sharing root mdisk with SLES 8.0

2004-04-01 Thread Vic Cross
On Thu, 1 Apr 2004, Post, Mark K wrote: 150MB? I said about 15MB. I think I can fit a whole system into 150MB. :) Yep, I saw that in the ohnoseconds after sending my reply. My point got an order of magnitude stronger, though! ;) Cheers, Vic