As Mark said, there are products that will do this on certain
flavours of *nix. However they have their own, erm,
peculiarities. The one that we used at my last job insisted
on using its own version of the PAM modules to keep
everything under control. I wasn't very happy about this
but the
On Thursday 22 April 2004 20:29, Jim Sibley wrote:
Neale, I guess I don't yet understand sysfs and all
that. I have a 3390 device, say 5700, at /dev/dasdb.
I would rather use the subchannel address. What would
I have to set up to issue the command:
mount -t reiserfs
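On 2.6 kernels, sysfs can answer exactly this kind of question: it maps a CCW bus ID (the subchannel-style address) to the kernel's dasd name. A minimal sketch of the lookup, run here against a throwaway mock tree since the real /sys/bus/ccw layout only exists on a zSeries guest; the bus ID 0.0.5700 comes from the question above, and the directory layout is as on current kernels, worth double-checking on 2.6.5:

```shell
# Look up the block-device name for a CCW bus ID via sysfs.
# On a real 2.6 s390 guest you would point SYS at /sys; here a mock
# tree stands in so the lookup logic itself can be demonstrated.
SYS=$(mktemp -d)
mkdir -p "$SYS/bus/ccw/devices/0.0.5700/block/dasdb"   # mock of the sysfs layout

busid=0.0.5700
dev=$(ls "$SYS/bus/ccw/devices/$busid/block")
echo "bus ID $busid maps to /dev/$dev"
# ...so the mount would be something like: mount -t reiserfs /dev/${dev}1 /mnt
```
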
Page size is 4K. Without also seeing what Linux thinks is happening, it
would be hard to say if your storage allocation is appropriate. Based on
the fact that you're doing no paging at all (during the reporting period), I
would say you're likely grossly over-allocated. Very few things in Linux
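A quick way to see whether the guest itself agrees that no paging is happening is to read the cumulative swap counters; a sketch using /proc/vmstat as found on 2.6 kernels (on 2.4 the counters live elsewhere under /proc):

```shell
# Cumulative pages swapped in/out since boot. Both staying near zero over
# the reporting period means the guest has essentially never paged, which
# is one sign its virtual storage allocation is larger than it needs.
grep -E '^pswp(in|out) ' /proc/vmstat
```
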
Hi,
Thanks for all of the responses yesterday in regards to the 26 disks
question. We were working on this today, but unfortunately I updated the
zipl.conf incorrectly. On the dasd= parameter, I added a comma after
the last device range. This should have been a blank. The ZIPL was run,
and
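For anyone hitting the same thing: device ranges in the dasd= parameter are comma-separated between ranges, and the list must simply end after the last range, with a blank before the next boot parameter. A trailing comma makes the parser expect another range. A sketch (the device numbers here are made up, not the poster's):

```
# zipl.conf "parameters" line -- wrong: trailing comma after the last range
parameters="dasd=5700-5703,5710, root=/dev/dasda1 ro"
# right: the range list ends cleanly, then a blank before the next option
parameters="dasd=5700-5703,5710 root=/dev/dasda1 ro"
```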
The question I have is whether the read-only root file system you have is an
initrd, or real DASD. I would think it's an initrd. In which case, you
should be able to mount your real root file system, chroot to it, and fix
zipl.conf.
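Assuming it is an initrd, the recovery described above might look like the following; printed as a checklist rather than executed, since the real commands need root on the broken guest, and the device name /dev/dasda1 is a guess for illustration:

```shell
# Recovery outline: mount the real root, chroot into it, fix zipl.conf,
# and rerun zipl so the boot area matches the corrected config.
# Emitted as a checklist to run by hand on the affected system.
plan=$(cat <<'EOF'
mount /dev/dasda1 /mnt        # mount the real root file system (device is a guess)
mount --bind /proc /mnt/proc  # zipl reads /proc inside the chroot
chroot /mnt /bin/sh
vi /etc/zipl.conf             # fix the dasd= line
zipl                          # rewrite the boot area from the fixed config
exit
EOF
)
echo "$plan"
```
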
Mark Post
-----Original Message-----
From: Linux on 390 Port
Two new patches for execute-in-place (xip2fs) for Linux 2.6.5 have been
placed on the web site. http://linuxvm.org/Patches/
These would go on top of the April 2004 stream patches for Linux 2.6.5:
http://www10.software.ibm.com/developerworks/opensource/linux390/april2004_recommended.shtml
Mark
Ken Vance wrote:
Is there a way of overriding the command line and inserting the correct
line? Even if we update the original file, we would still be stuck since
the zipl command writes it to the boot area. Do we need to bring up the
RAM image and rebuild the boot area?
You need to run the
For all those people that have asked about possible virtual storage tuning
knobs for Linux, a couple of Red Hat employees have put together a short
whitepaper on the subject that is (perhaps) specific to RHEL3.
http://people.redhat.com/nhorman/papers/rhel3_vm.pdf
Not being a performance person,
This looks interesting - it will be worth seeing whether they are more
successful than CrossOver Office and Wine.
http://www.specopslabs.com
Lionel B. Dyck, Systems Software Lead
Kaiser Permanente Information Technology
25 N.
I am going to be installing 3 different z/VM systems on the same z990 for
3 different engineering departments.
Shared DASD.
What are the implications of using the same VOLIDs for 440RES, W01, W02, etc.?
Does z/VM detect duplicate VOLSERs (like z/OS) during IPL?
I'd prefer to change these VOLIDs and
Note that I'm cross-posting my response to [EMAIL PROTECTED];
that's a much better list for this subject than LINUX-390.
Dave MYERS wrote:
I am going to be installing 3 different z/VM systems on the same z990 for
3 different engineering departments.
Shared DASD.
What are the implications of using
I know that Linux is buffer happy. Exceptionally so. Just got the whole VM
thing going, and that went smoothly enough. Now I'm trying to tune memory
so that linux paging is minimized but VM paging does not increase as a
result.
I have one guest with 1.3 GB real (for WebSphere ND) and it tends to
Mark, do they have the same kind of thing for the 2.6
kernel? I know bdflush and its parameters have been
reworked. And some of the other parameters have
changed or disappeared as well.
Jim Sibley
RHCT, Implementor of Linux on zSeries
Computers are useless. They can only give answers. Pablo
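For reference, the single /proc/sys/vm/bdflush vector from 2.4 was indeed split on 2.6 into individually named vm.* sysctls; the rough correspondence below is a sketch based on the vm.dirty_* names in the kernel's Documentation/sysctl/vm.txt, so treat it as a starting point rather than gospel:

```shell
# 2.4: one opaque vector, e.g.
#   echo "30 500 0 0 500 3000 60 20 0" > /proc/sys/vm/bdflush   # 2.4 only
# 2.6: the same writeback behaviour is tuned through named sysctls:
cat /proc/sys/vm/dirty_background_ratio  # % of memory dirty before background writeback starts
cat /proc/sys/vm/dirty_ratio             # % dirty before writers are forced to flush
cat /proc/sys/vm/dirty_expire_centisecs  # age (1/100 s) after which dirty data must be written
```
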
Cross-posted on NEUVM-L, VMESA-L and LINUX-390 - feel free to forward
as appropriate.
Greetings!
NEUVM will be holding its Spring Meeting on May 18, 2004 at Lombardo's
in Randolph, Massachusetts.
For meeting details and online registration click here:
http://www.neuvm.org/meetings.php
Once
I have no idea. The only reason I knew this existed was because someone
mentioned it on the Taroon mailing list. I would suspect they don't have
anything ready to publish on 2.6 just yet, but you could try emailing the
authors to find out for sure.
Mark Post
-----Original Message-----
From:
James Melin wrote:
I have one guest with 1.3 GB real (for WebSphere ND) and it tends to page
about 90-113 KB when doing a large application deploy but still indicating
150-200 MB free and 500 MB of buffers. I've heard the less is more
approach, but has anyone got Websphere Network Deployment