Re: Linux article

2009-10-06 Thread Kris Van Hees
On Tue, Oct 06, 2009 at 09:50:14PM -0400, Henry E Schaffer wrote:
 David writes:
  ...
  1) They're counting only RHEL and SLES. Doesn't count Ubuntu, which
  seems to power most of the netbooks  ...

   FWIW, I recently bought an ASUS eee PC900 and it runs Debian.  I think
 they've sold a lot of netbooks.

Actually, the ASUS Eee PC versions that ship with Linux come with Xandros, which
is a derivative of Debian.  The distinction is rather important because the two
run on completely different release schedules.

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390


Re: Question regarding Oracle Licensing

2009-08-11 Thread Kris Van Hees
I would recommend talking to an Oracle representative directly about this,
because a whole lot of information tends to float around (sometimes even
from techs) that may or may not be entirely accurate.  Your best bet is to
go to the source.

Kris

On Tue, Aug 11, 2009 at 04:16:42PM -0500, James Peddycord wrote:
 I was on a conference call with a couple of Oracle techs who said that
 Oracle is licensed on a per core basis, so that each IFL on a z/10 would
 count as 4 full price Oracle licenses. Has anyone else had experience with
 this? Is this correct?

 Thanks,
 Jim P.


--
Never underestimate a Mage with:
 - the Intelligence to cast Magic Missile,
 - the Constitution to survive the first hit, and
 - the Dexterity to run fast enough to avoid being hit a second time.



Oracle, zlinux, and virtualization

2008-12-02 Thread Kris Van Hees
I just checked with some people at Oracle, and that statement applies to the
x86 and x86-64 hardware platforms *only*.  It has nothing to do with zlinux,
which Oracle fully supports and will continue to support.

Kris (wearing my Oracle hat)

 -Original Message-
 From: Linux on 390 Port [mailto:[EMAIL PROTECTED] On Behalf Of
 O'Brien, Dennis L
 Sent: Tuesday, December 02, 2008 4:00 PM
 To: LINUX-390@VM.MARIST.EDU
 Subject:

 We have been informed by an Oracle rep that Oracle does not certify its
 programs on any Virtualization Software (to include VMware and zVM.)
 and that Oracle Support can't assist the customer until the
 virtualization software is removed and the problem is duplicated.

 This strikes me as odd, considering that Oracle is encouraging their
 z/OS customers to move to mainframe Linux.  Does Oracle expect them to
 run Linux in an LPAR, or run unsupported under z/VM?  Or is this guy all
 wet?

Dennis O'Brien

 We have awakened a sleeping giant, and we have instilled in him a
 terrible resolve.  -- Admiral Yamamoto, following the attack on Pearl
 Harbor



Re: your mail

2008-12-02 Thread Kris Van Hees
On Tue, Dec 02, 2008 at 10:06:25PM +, Alan Cox wrote:
 On Tue, 2 Dec 2008 12:59:55 -0800
 O'Brien, Dennis L Dennis.L.O'[EMAIL PROTECTED] wrote:

  We have been informed by an Oracle rep that Oracle does not certify its
  programs on any Virtualization Software (to include VMware and zVM.)

 http://www.computerworld.com/action/article.do?command=viewArticleBasicarticleId=9049038intsrc=news_ts_head

 Oracle require you run their own virtualisation software.

Again, that applies to x86 and x86-64 hardware platforms only.  It has nothing
to do with zlinux.

Kris



Re: z/VM CP commands (LOGON)?

2008-08-21 Thread Kris Van Hees
Check out the AUTOLOG and XAUTOLOG commands.
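A sketch of what that looks like from the operator console (ZTRASH01 is the guest name from the question; the required command privilege class depends on your installation):

```
XAUTOLOG ZTRASH01
```

XAUTOLOG logs the guest on in disconnected state and runs its PROFILE EXEC, so an IPL statement at the end of that EXEC (or in the directory entry) will bring the Linux guest back up without anyone logging on from a terminal.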

Kris

On Thu, Aug 21, 2008 at 03:10:56PM -0400, Tom Burkholder wrote:
 z/VM'ers,

 After a few searches and actually scanning the CP commands and utilities 
 manual, hopefully I can get some help with this z/VM related question.  If I 
 have a z/linux guest called ztrash01, I can logon to z/VM as ztrash01, IPL 
 xxx, logon to Linux and shutdown -h now, and if I'm still logged on as 
 console from my 3270 terminal session, enter logoff.  No problem.

 The reason I want to do the following is to potentially stress test some 
 applications by forcing off a z/Linux guest (simulating a crash) and then 
 eventually automating the re-IPL, but first I have to be able to issue some 
 basic z/VM CP commands, listed below.

 I read the LOGON in the CP commands and Utilities reference, but I'm doing 
 something wrong (other than trying to trash and stress test my guest).

 I'm still playing with test systems, but from a z/VM CP perspective, for now 
 the Linux guest ztrash01 is shutdown and halted, but ztrash01 is still 
 logged onto z/VM.

 1. If I'm logged on as operator to z/VM, I can issue CP command q n and 
 see that guest ztrash01 is logged onto z/VM and DSC (disconnected).
 2. Still as operator, I can issue CP command force ztrash01 logoff immed 
 and this logs the guest, ztrash01, off of z/VM (FORCED BY OPERATOR)
 3. Still as operator, a q n verifies that the guest is gone (i.e. logged 
 off z/VM)
 4.  Now, as operator, instead of going to another z/VM terminal session and 
 logging on as ztrash01, still as operator, I would like to cause the guest 
 ztrash01 to logon (and eventually IPL).

 To cause the IPL of guest ztrash01 at LOGON, I believe I can put that in 
 the PROFILE EXEC (e.g. IPL xxx) at the end, but how (if at all) can I cause 
 the guest to be logged on (opposite of FORCE) while being logged on to z/VM 
 as operator?

 Thanks in advance,
 Tom B.





Re: FW: [Engineers] Tru64 filesystem

2008-06-27 Thread Kris Van Hees
On Fri, Jun 27, 2008 at 02:01:17PM -0400, David Boyes wrote:
  According to our kernel developers, releasing AdvFS  to the open
 source
  community was for the purpose of using it as a research tool to
 improve
  existing Linux file systems.  Actually porting it to Linux would be a
  significant amount of work, ala XFS.
 
  Now, if you were to sic Neale on it, and throw some food and beer/wine
  into the cave once in a while, we all know miracles are entirely
 feasible.

 It's a great cluster filesystem as well as being designed for
 manageability.

Just don't try to run something like a Usenet news spool on it.  It's been
tried, and the results weren't pretty.  Then again, inn is probably one of
the worst use cases for AdvFS.

Kris



Re: Best Practices for zSeries linux ISVs?

2006-12-01 Thread Kris Van Hees
On Fri, Dec 01, 2006 at 08:53:49PM -0500, David Boyes wrote:
  Based on feedback, we can really decide to ship the package formats
 that
  users seem to want/need,
  but we would *really* like to keep the binary packages to a small
 number.
  We would like to be able to use a single zLinux image to build binary
  packages.

 While you *could* do that, I think you'll find that you'll need at least
 one RH and one SuSE guest, and you'll need to build the packages
 independently on both. There are still some small and subtle differences
 in the way RPMs are built on the different distributions, and you'll
 need the separate guests to test the installs anyway.

Actually, more than small and subtle...  Dependencies for the binary RPMs are
in part derived from the build environment, so creating a SuSE RPM on a RedHat
system, or the other way around, is never a good idea.  It can be made to work
with the right amount of dark magic, but it isn't for the faint of heart, and
it is bound to get you into a lot of misery down the road.

Cheers,
Kris



Re: Command to query the MTU along the way

2006-09-22 Thread Kris Van Hees
On Fri, Sep 22, 2006 at 04:18:45PM -0500, Marcy Cortes wrote:
 There is a linux command that for the life of me I can't remember to
 query MTU settings along the way to your destination.

tracepath?  I seem to remember that it is not always possible to get the MTU
settings along the entire path, but it is the best utility I know of that has
a chance of giving you what you want.
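A minimal sketch of the invocation (the destination is a placeholder):

```
tracepath -n some.destination.host
```

tracepath reports the path MTU it was able to discover hop by hop; routers that do not return the needed ICMP messages show up as "no reply", which is exactly the case where the MTU along the full path cannot be determined.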

Kris



Re: I/O wait times -vs- Linux memory

2006-08-17 Thread Kris Van Hees
Generally, adding extra memory won't improve I/O throughput (or reduce I/O wait
times) unless the application's access pattern is such that it gets decent
benefits from readahead and caching, i.e. buffer recycling is kept from
evicting data you are going to need again soon.  For database applications it
is obviously an advantage if you can fit the working data set in core memory
(where feasible), utilizing it as a large write-through cache.

So...  given that you are talking about a database here, I'd recommend having
someone look at the database runtime profiling data (if your engine provides
any) to see if it is indeed reading data more often than it ought to.  Most DB
engines like to manage their own data buffers rather than depending on OS-level
I/O buffers (and sometimes even use direct I/O operations to avoid the OS
messing with buffering), so extra memory will only help if you configure the DB
engine to use it.

Once satisfied that the DB engine is configured for optimal performance, tuning
the I/O system may be needed as well.

Kris

On Thu, Aug 17, 2006 at 10:23:13AM -0400, David Boyes wrote:
  We're looking at some high I/O wait times for a certain database.  One
  engineer has suggested that we add memory to Linux to speed things up.
 I
  don't see it.  Does adding RAM to Linux help its I/O throughput?

 On Intel, maybe. I would doubt that it would have a large impact on Z.

 Can you tell how much space the QDIO buffer allocation is taking up?
 Increasing that allocation may be worth considering, since the SCSI code
 takes advantage of QDIO if possible.  What does the SAN fabric report on
 link utilization, and is the link configured for multipathing?

 David Boyes
 Sine Nomine Associates



Re: I/O wait times -vs- Linux memory

2006-08-17 Thread Kris Van Hees
On Thu, Aug 17, 2006 at 10:57:46AM -0400, David Boyes wrote:
 The way QDIO device handling works, the size of the transfer buffers
 affect the amount of information that can be moved with a single
 operation, which sets a maximum limit on the transfer rate for a fixed
 processor speed.

 Analogous to DMA buffer size; not main storage allocation, but managing
 link congestion by controlling transmit/receive buffers and window size.

Well, yes, but my point is that (especially for database applications) the
problem with I/O wait times is often related to engine tuning rather than I/O
channel speed.  Essentially, if the database is optimally tuned and has a
working set that fits in core memory, the I/O channel speed becomes less of an
issue, because less real I/O will be needed once a stable state has been
reached.  It's one of the oldest (and most influential) performance tuning
tricks in the database world ;)

Most databases that are dependent on I/O speed are bound to perform slowly
because I/O is, well, slow :)

Kris



Re: starting apache

2006-07-10 Thread Kris Van Hees
Many problems with running scripts like this through sudo can be solved by
using sudo's -i option, which runs the command in a login-like environment
for the user sudo executes as (root in this case).
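A sketch of the difference (the apache path matches the transcript quoted below; whether -i is enough depends on what the init script sources):

```
sudo /etc/init.d/apache restart      # plain sudo keeps the caller's environment,
                                     # so SuSE helpers like killproc may not be found
sudo -i /etc/init.d/apache restart   # -i gives the script root's login environment
```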

Kris

On Mon, Jul 10, 2006 at 02:18:13PM -0400, David Boyes wrote:
  You made me double check, and I found I was indeed right...

 I was sure you would...8-)

  [EMAIL PROTECTED]:~ sudo /etc/init.d/apache restart
  Shutting down httpd/etc/init.d/apache: line 158: killproc: command not
  found
 
  failed
  Starting httpd [ Mailman PERL PHP4 Python ]/etc/init.d/apache: line
  121: startproc: command not found
 
 done

 I'd argue that these are bugs that should be fixed, not ignored. The
 Debian init scripts function properly using sudo because they're
 required to. Apparently the SuSE ones still need a little work.

  And even if it worked, these shell scripts are not robust enough to
  run under sudo.

 Glad you agree. Again, these problems should be fixed, not ignored.

  Frequently they allow environment variables to
  override essential things and they source configuration files that you
  may not all protect.

 Your system, your gun, your foot. Blind operational practices will lose.
 It's still your responsibility not to do something dumb, which includes
 letting ordinary users run code as root that you haven't looked at.



Re: Google out of capacity?

2006-05-04 Thread Kris Van Hees
On Fri, May 05, 2006 at 01:53:48PM +1000, Vic Cross wrote:
 On 05/05/2006, at 5:53am, Fargusson.Alan wrote:
 A long time ago I read that they did TCO studies, and found it less
 costly to buy lots of low cost hardware over buying fewer high cost
 systems.

 A long time ago is the point.  When I read similar, the server
 count was around 8000 -- it would seem that they've grown
 considerably beyond that now.  I doubt they've updated their TCO
 analysis accordingly...  :)

But on the other hand, most TCO studies also do not take upgradability fully
into account, even though that becomes an important factor with low-end
PC-based hardware, more so than with zSeries boxes.

In the end, consolidation may look like a better option in a TCO study, but the
value of such a study is largely limited by how well the options being compared
actually represent solutions to the same problem.

E.g. the typical TCO study used when comparing against zSeries-based solutions
tends to neglect the cost differences incurred when you need to add more
capacity rapidly.  That is very easy (and cheap) with a large fleet of low-end
machines, and more difficult (and more expensive) with zSeries boxes.

As said before: Right tool for the job is an important factor.  I have yet to
see a good argument for using zSeries boxes for something similar to the Google
search functionality (some of their other stuff could benefit, probably).

Kris



Re: Xen

2006-02-16 Thread Kris Van Hees
This is actually incorrect: there is no kernel sharing taking place in Xen.
I'd recommend that anyone interested in Xen check out the documentation and
technical papers.  It's actually a lot closer to the hypervisor approach to
virtual machines than one might expect at first glance.

It's basically a small, specialized hypervisor-like 'kernel' running on the
bare hardware, providing services to guest instances.  There is always a first
guest running in the privileged domain (dom0), and additional guests can
either run privileged as well (e.g. if they need direct access to some
hardware) or in a user domain (domU).  Each guest runs its own Linux kernel,
and is in fact its own Linux installation.  The only requirement is that the
kernel is a patched version.  When you compile Xen, it creates dom0 and domU
kernels for you to use in guests.

That's really a very summarized description though.  Again, I recommend looking
at the docs and tech papers if you want to know more.

Kris

On Thu, Feb 16, 2006 at 11:03:20AM -0600, Tom Shilson wrote:
 Xen is a Linux form of VMWare.  It allows you to run multiple instances of
 Linux.  Instead of creating a virtual machine, however, Xen shares the
 kernel.  Compared to VMWare (or zVM) it is limited because of this.  I have
 never used it.  I believe that it is an OpenSource project.

 tom



Re: Question for those C types out there

2005-06-13 Thread Kris Van Hees
You should have a config.log file that contains most of the output of the
configure command, along with a lot of extra stuff that is useful for looking
into issues with the configure step.

Issue the following command before doing the configure...

script

Then issue the configure and make, and then issue the command...

exit

You will end up with a file named 'typescript' that contains all the output of
the configure and make commands.
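If you prefer plain redirection over script(1), a tee pipeline captures the same output while still showing it on the terminal.  A minimal sketch; the echo commands merely stand in for the real ./configure && make:

```shell
# Stand-ins for ./configure && make; 2>&1 merges stderr into the pipe,
# so tee records both streams in build.log while echoing them to the terminal.
{ echo "checking for gcc... yes"; echo "warning: example diagnostic" 1>&2; } 2>&1 | tee build.log
```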

Kris

On Mon, Jun 13, 2005 at 03:12:37PM -0400, Post, Mark K wrote:
 No.  By default, all output goes to stdout, and stderr.  If you want
 that to wind up in a file, you need to redirect it.


 Mark Post

 -Original Message-
 From: Linux on 390 Port [mailto:[EMAIL PROTECTED] On Behalf Of
 Tom Duerbusch
 Sent: Monday, June 13, 2005 3:06 PM
 To: LINUX-390@VM.MARIST.EDU
 Subject: Re: Question for those C types out there


 I would have been happy if, somehow, all these warning/error messages
 could be gathered and repeated at the end (like real compilers do <g>).

 I assume that all these messages also went to a file.  At this point, I
 don't know enough to know where that file is.  I was hoping for
 something over in '/var', but nothing from the './configure' or 'make'.

 Tom Duerbusch
 THD Consulting



Re: CVS

2005-03-25 Thread Kris Van Hees
As an aside...  if you do not need the encryption (e.g. when accessing public
CVS repositories that accept non-encrypted connections), just turn it off so
that you do not have to deal with the overhead.  By not having ssh do any
encryption, you avoid all of the (slow) crypto work done in software.
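For public repositories that is often just a matter of using the anonymous pserver protocol instead of ssh entirely (host and module names here are hypothetical):

```
cvs -d :pserver:anonymous@cvs.example.org:/cvsroot login
cvs -d :pserver:anonymous@cvs.example.org:/cvsroot checkout somemodule
```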

Kris

On Fri, Mar 25, 2005 at 02:43:03PM -0500, David Boyes wrote:
  So this is the kind of thing I could point at a crypto engine
  if I had one?

 If, if, if

 IF you had a crypto engine, and IF you had the OpenSSL package compiled
 with the IBM modifications to enable the Cryptoki crypto interface to
 the crypto engine, and IF you had OpenSSH recompiled to use the modified
 OpenSSL libraries, and IF all this didn't do harm to your support
 agreement, then you could probably get some benefit.

 Right now, all the ssh crypto is done in software, and CVS does a lot of
 connection setup and teardown which is where the asymmetric crypto (the
 really expensive part of the crypto exchange) gets done. Use of the
 routines in Cryptoki would help, but as you can see from above, it's not
 gonna be simple.



Re: FW: SLES9 + OpenAFS Server

2005-03-24 Thread Kris Van Hees
On Thu, Mar 24, 2005 at 03:49:24PM -0500, Wiggins, Mark wrote:
 We bumped memory from 256M to 512M, should I keep going higher?

What version of OpenAFS are you trying?

Kris

 -Original Message-
 From: Linux on 390 Port [mailto:[EMAIL PROTECTED] On Behalf Of
 Mark D Pace
 Sent: Thursday, March 24, 2005 3:26 PM
 To: LINUX-390@VM.MARIST.EDU
 Subject: Re: FW: SLES9 + OpenAFS Server


 I'm not running OpenAFS, but typically a segfault is an indication of
 not
 enough memory.  Bump your Linux memory size or swap file size.



 Mark D Pace
 Senior Systems Engineer
 Mainline Information Systems
 1700 Summit Lake Drive
 Tallahassee, FL. 32317
 Office: 850.219.5184
 Fax: 888.221.9862
 http://www.mainline.com



  From: Wiggins, Mark [EMAIL PROTECTED]
  Sent by: Linux on 390 Port LINUX-390@VM.MARIST.EDU
  To: LINUX-390@VM.MARIST.EDU
  Subject: FW: SLES9 + OpenAFS Server
  Date: 03/24/2005 03:22 PM
  Reply-To: Linux on 390 Port

 I have installed the OpenAFS-server via YaST (and the dependency
 OpenAFS) on SLES9.  Using README.SuSE as a guide, I
 issue /usr/sbin/bosserver -noauth, which immediately segfaults.  Is
 anyone successfully running OpenAFS Server on SLES9/s390x?  I have
 included an strace below.  Any thoughts on how to correct this are much
 appreciated.

 execve(/usr/sbin/bosserver, [/usr/sbin/bosserver], [/* 50 vars */])
 = 0
 uname({sys=Linux, node=lnxzvm13, ...}) = 0
 brk(0)  = 0x80056000
 mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0)
 = 0x201a000
 open(/etc/ld.so.preload, O_RDONLY)= -1 ENOENT (No such file or
 directory)
 open(/etc/ld.so.cache, O_RDONLY)  = 3
 fstat(3, {st_mode=S_IFREG|0644, st_size=23688, ...}) = 0
 mmap(NULL, 23688, PROT_READ, MAP_PRIVATE, 3, 0) = 0x201b000
 close(3)= 0
 open(/lib64/libresolv.so.2, O_RDONLY) = 3
 read(3, \177ELF\2\2\1\0\0\0\0\0\0\0\0\0\0\3\0\26\0\0\0\1\0\0\0...,
 640) = 640
 fstat(3, {st_mode=S_IFREG|0755, st_size=99784, ...}) = 0
 mmap(NULL, 101280, PROT_READ|PROT_EXEC, MAP_PRIVATE, 3, 0) =
 0x2021000
 madvise(0x2021000, 101280, MADV_SEQUENTIAL|0x1) = 0
 mmap(0x2035000, 12288, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED,
 3, 0x13000) = 0x2035000
 mmap(0x2038000, 7072, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|
 MAP_ANONYMOUS, -1, 0) = 0x2038000
 close(3)= 0
 open(/lib64/tls/libc.so.6, O_RDONLY)  = 3
 read(3, \177ELF\2\2\1\0\0\0\0\0\0\0\0\0\0\3\0\26\0\0\0\1\0\0\0...,
 640) = 640
 lseek(3, 624, SEEK_SET) = 624
 read(3, \0\0\0\4\0\0\0\20\0\0\0\1GNU\0\0\0\0\0\0\0\0\2\0\0\0\6..., 32)
 = 32
 fstat(3, {st_mode=S_IFREG|0755, st_size=1542343, ...}) = 0
 mmap(NULL, 1331472, PROT_READ|PROT_EXEC, MAP_PRIVATE, 3, 0) =
 0x203a000
 madvise(0x203a000, 1331472, MADV_SEQUENTIAL|0x1) = 0
 mmap(0x215f000, 118784, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED,
 3, 0x124000) = 0x215f000
 mmap(0x217c000, 12560, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|
 MAP_ANONYMOUS, -1, 0) = 0x217c000
 close(3)= 0
 mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0)
 = 0x218
 munmap(0x201b000, 23688)= 0
 geteuid()   = 0
 brk(0)  = 0x80056000
 brk(0x80077000) = 0x80077000
 brk(0)  = 0x80077000
 mmap(NULL, 200704, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1,
 0) = 0x2181000
 --- SIGSEGV (Segmentation fault) @ 0 (0) ---
 +++ killed by SIGSEGV +++



 Matthew J. Smith
 University of Connecticut ITS
 This message sent at Thu Mar 24 14:52:29 2005
 PGP Key: http://web.uconn.edu/dotmatt/matt.asc


Re: FW: SLES9 + OpenAFS Server

2005-03-24 Thread Kris Van Hees
On Thu, Mar 24, 2005 at 04:28:03PM -0500, Wiggins, Mark wrote:
 1.2.11-20.1

Unless SuSE did their own port (or backport) for s390x, it won't work.  There
is no support for s390x in any stable release of OpenAFS right now.  The next
stable release is likely to have support for it (both server and client), but
it is entirely expected that 1.2.11 does not work.

Kris

 -Original Message-
 From: Linux on 390 Port [mailto:[EMAIL PROTECTED] On Behalf Of
 Kris Van Hees
 Sent: Thursday, March 24, 2005 3:51 PM
 To: LINUX-390@VM.MARIST.EDU
 Subject: Re: FW: SLES9 + OpenAFS Server


 On Thu, Mar 24, 2005 at 03:49:24PM -0500, Wiggins, Mark wrote:
  We bumped memory from 256M to 512M, should I keep going higher?

 What version of OpenAFS are you trying?

   Kris

  -Original Message-
  From: Linux on 390 Port [mailto:[EMAIL PROTECTED] On Behalf Of
  Mark D Pace
  Sent: Thursday, March 24, 2005 3:26 PM
  To: LINUX-390@VM.MARIST.EDU
  Subject: Re: FW: SLES9 + OpenAFS Server
 
 
  I'm not running OpenAFS, but typically a segfault is an indication of
  not
  enough memory.  Bump your Linux memory size or swap file size.
 
 
 
  Mark D Pace
  Senior Systems Engineer
  Mainline Information Systems
  1700 Summit Lake Drive
  Tallahassee, FL. 32317
  Office: 850.219.5184
  Fax: 888.221.9862
  http://www.mainline.com
 
 
 
   Wiggins, Mark
   [EMAIL PROTECTED]
   nn.edu
  To
   Sent by: Linux on LINUX-390@VM.MARIST.EDU
   390 Port
  cc
   [EMAIL PROTECTED]
   IST.EDU
  Subject
 FW: SLES9 + OpenAFS Server
 
   03/24/2005 03:22
   PM
 
 
   Please respond to
   Linux on 390 Port
   [EMAIL PROTECTED]
   IST.EDU
 
 
 
 
 
 
  I have installed the OpenAFS-server via YaST (and the dependency
  OpenAFS) on SLES9.  Using README.SuSE as a guide, I
  issue /usr/sbin/bosserver -noauth, which immediately segfaults.  Is
  anyone successfully running OpenAFS Server on SLES9/s390x?  I have
  included an strace below.  Any thoughts on how to correct this are
 much
  appreciated.
 
  execve("/usr/sbin/bosserver", ["/usr/sbin/bosserver"], [/* 50 vars */]) = 0
  uname({sys="Linux", node="lnxzvm13", ...}) = 0
  brk(0)                                  = 0x80056000
  mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x201a000
  open("/etc/ld.so.preload", O_RDONLY)    = -1 ENOENT (No such file or directory)
  open("/etc/ld.so.cache", O_RDONLY)      = 3
  fstat(3, {st_mode=S_IFREG|0644, st_size=23688, ...}) = 0
  mmap(NULL, 23688, PROT_READ, MAP_PRIVATE, 3, 0) = 0x201b000
  close(3)                                = 0
  open("/lib64/libresolv.so.2", O_RDONLY) = 3
  read(3, "\177ELF\2\2\1\0\0\0\0\0\0\0\0\0\0\3\0\26\0\0\0\1\0\0\0"..., 640) = 640
  fstat(3, {st_mode=S_IFREG|0755, st_size=99784, ...}) = 0
  mmap(NULL, 101280, PROT_READ|PROT_EXEC, MAP_PRIVATE, 3, 0) = 0x2021000
  madvise(0x2021000, 101280, MADV_SEQUENTIAL|0x1) = 0
  mmap(0x2035000, 12288, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED, 3, 0x13000) = 0x2035000
  mmap(0x2038000, 7072, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x2038000
  close(3)                                = 0
  open("/lib64/tls/libc.so.6", O_RDONLY)  = 3
  read(3, "\177ELF\2\2\1\0\0\0\0\0\0\0\0\0\0\3\0\26\0\0\0\1\0\0\0"..., 640) = 640
  lseek(3, 624, SEEK_SET)                 = 624
  read(3, "\0\0\0\4\0\0\0\20\0\0\0\1GNU\0\0\0\0\0\0\0\0\2\0\0\0\6"..., 32) = 32
  fstat(3, {st_mode=S_IFREG|0755, st_size=1542343, ...}) = 0
  mmap(NULL, 1331472, PROT_READ|PROT_EXEC, MAP_PRIVATE, 3, 0) = 0x203a000
  madvise(0x203a000, 1331472, MADV_SEQUENTIAL|0x1) = 0
  mmap(0x215f000, 118784, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED, 3, 0x124000) = 0x215f000
  mmap(0x217c000, 12560, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x217c000
  close(3)                                = 0
  mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x218
  munmap(0x201b000, 23688)                = 0
  geteuid()                               = 0
  brk(0)                                  = 0x80056000
  brk(0x80077000)                         = 0x80077000
  brk(0)                                  = 0x80077000
  mmap(NULL, 200704, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x2181000
  --- SIGSEGV (Segmentation fault) @ 0 (0) ---
  +++ killed by SIGSEGV +++
 
 
 
  Matthew J. Smith
  University of Connecticut ITS
  This message sent at Thu Mar 24 14:52:29 2005
  PGP Key: http://web.uconn.edu/dotmatt/matt.asc
 
  --
  For LINUX-390 subscribe / signoff / archive access instructions,
  send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit
  http://www.marist.edu/htbin/wlvindex?LINUX-390
Re: IPTables

2004-10-05 Thread Kris Van Hees
I am getting into this discussion a bit late (been out of the country for a
while, etc) but I wonder about the following:

X -> A (159.166.1.69) -> B (159.166.4.137)

X -> C (159.166.1.7)  -> B (159.166.4.137)

If in this scenario, A and C are forwarding traffic on specific ports to B,
then B would see either A or C as the *source* IP address, and thus it would
send reply packets to the appropriate IP address (again, A or C, depending
on where the traffic came from).  A and C should then, using connection
tracking and/or explicit NAT in reverse direction, send the replies back to
X, coming from A or C depending on who is passing the packets for that case.

So, the scenario would split up as:

X -> A (159.166.1.69)

A (159.166.1.69) -> B (159.166.4.137)

A (159.166.1.69) <- B (159.166.4.137)

X <- A (159.166.1.69)

-
X -> C (159.166.1.7)

C (159.166.1.7) -> B (159.166.4.137)

C (159.166.1.7) <- B (159.166.4.137)

X <- C (159.166.1.7)

Would that be the mechanism you are looking for?  In this, B would only see
traffic coming from A and/or C, and respond back to A and/or C.  A and C would
be responsible for doing the correct address translation to pass things back
and forth transparently.
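For concreteness, the per-gateway rules described above might look like this.
This is only a sketch: port 8994 is carried over from Peter's earlier example,
and the address/interface assignments are assumptions, not a tested config.

```shell
# On gateway A (159.166.1.69); the same pair, with 159.166.1.7
# substituted, would go on gateway C.
echo 1 > /proc/sys/net/ipv4/ip_forward   # make sure forwarding is on

# Rewrite the destination of incoming connections so they go to B,
# and rewrite the source so B replies to A; connection tracking then
# maps B's replies back to X automatically.
iptables -t nat -A PREROUTING  -p tcp -d 159.166.1.69  --dport 8994 \
         -j DNAT --to-destination 159.166.4.137
iptables -t nat -A POSTROUTING -p tcp -d 159.166.4.137 --dport 8994 \
         -j SNAT --to-source 159.166.1.69
```

With this in place, B only ever sees connections from A (or C), so no routing
changes are needed on B itself.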

Kris

On Tue, Oct 05, 2004 at 10:25:07AM -0400, Bob wrote:
 now I am beginning to understand this a little better. I actually have 2
 of these setups

-> A (159.166.1.69) -> B (159.166.4.137)
 X
-> C (159.166.1.7)  -> B (159.166.4.137)

 You can put the address of either A or C and the packet is forwarded over
 to B to be processed.

 Right now if you put in A's address, the packet will be sent to B, but
 since it is coming from X (which is a totally different IP address) the
 packet will end up on the default route and go back to A and that will
 work fine. But if you use C's address, the packet gets sent to B and,
 since X is outside, B sends the packet to the default route of A.

 What I need B to do is know that if the packet came in through A it should
 send it back to A, and if it came through C it should send it back to C.



 On Mon, 4 Oct 2004 17:24:01 +0200, Peter Oberparleiter
 [EMAIL PROTECTED] wrote:
  I'll assume that you're trying to implement this scenario:
 
  X -> A (159.166.1.69) -> B (159.166.4.137)
 
 
  $IPTABLES -t nat -A POSTROUTING -p tcp --destination 159.166.4.137 \
--dport 8994 -j SNAT --to 159.166.1.69
 


--
Never underestimate a Mage with:
 - the Intelligence to cast Magic Missile,
 - the Constitution to survive the first hit, and
 - the Dexterity to run fast enough to avoid being hit a second time.



Re: J2EE performance?

2004-06-22 Thread Kris Van Hees
Current development tends to follow the following sequence:

- Rapid software development
- Lesser quality code, with less efficient use of resources
- Higher resource demands
- Higher minimum requirements for the application

Past development tended to create applications that would primarily work within
the constraints of the systems people had, because requiring more would usually
cause the application not to be purchased at all.  Now that the games industry
along with one of the primary OS companies have been pushing the limits ever
forward (to the great satisfaction of the PC component manufacturers who can
discontinue parts at a never before seen rate - conspiracy theory buffs can go
look for cross-industry deals and market manipulation - not my cup of tea), no
application developer has to worry about limits anymore.  Just put on the box
that you need a 3.0GHz CPU, 1GB RAM, a 40GB HD and a DVD burner (as minimum reqs)
and a large part of the targeted user base will go out and upgrade their
machine to run the application (assuming they want it).

Does it make sense?  No.  Does it keep a very large industry segment alive?
Most definitely!  Is it any good?  Not in my opinion, but YMMV.

Kris

On Tue, Jun 22, 2004 at 09:10:02AM -0500, David Booher wrote:
 I may be old school, but there's no substitute for well written programs that are 
 both efficient in CPU and storage and the same goes for the software platform they 
 run on.  I even get discouraged at home when you have to buy new hardware to support 
 the bloating of the OS it runs on. What are you achieving? New functionality?  
 Better programs?  More stability?  Some of the new software I've bought to run on my 
 PC is re-written old stuff with more advertisement and fancy programmatic gizmos.  
 It's neither more efficient nor better performing, even on new hardware.

 The new school must have deep pockets.   ;)

 My opinions only, folks!

 Dave


 -Original Message-
 From: Linux on 390 Port [mailto:[EMAIL PROTECTED] Behalf Of
 Barton Robinson
 Sent: Tuesday, June 22, 2004 8:21 AM
 To: [EMAIL PROTECTED]
 Subject: Re: J2EE performance?


 The old school that thinks 80 mips is a lot is used to
 really well written programs, written in assembler to
 be efficient in both CPU and storage.  The new school
 that uses Java and C++ has different objectives.

 An 80 MIP processor is about a 300MHz pentium. This is
 based on Barton's Number of 4, where 1 mip is about
 4 MHz of Intel running equivalent code.  Not a really
 impressive machine, unless it is running many workloads
 at a very high utilization with lots of I/O 7 x 24

 I've heard the new java compilers are much much better,
 suited more for meeting mainframe objectives.





Re: NFS Behaviour Question

2004-04-15 Thread Kris Van Hees
Typically, NFS will not cross mount points.  Some NFS daemons will let you
do that by specifying specific options (don't remember offhand which), or
otherwise you could just export all the mounted images one by one (more work
of course).
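A sketch of the two approaches, modeled on James' /etc/exports below; the
paths and host names are assumptions.  The Linux kernel NFS server's export
option for exposing a mount point under an export is "nohide" (later
nfs-utils also accept "crossmnt" on the parent export):

```shell
# /etc/exports on the server -- either mark the parent export with
# crossmnt, or export each loop-mounted CD image with nohide:
#
#   /images           itasca(ro,crossmnt) calhoun(ro,crossmnt)
#   /images/cdimage1  itasca(ro,nohide)   calhoun(ro,nohide)

exportfs -ra    # re-read /etc/exports and re-export everything
```

The alternative, as noted, is simply to list every mounted image as a
separate export and have clients mount each one individually.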

Kris

On Thu, Apr 15, 2004 at 05:28:49PM -0500, James Melin wrote:
 Fired up NFS and got it working... but I am seeing something that doesn't
 make sense to me.

 We're installing piles of trial software from IBM, many that necessitate
 copying images of CD media to local file systems. That takes a while with a
 100 mbit ethernet card. So what I did instead was use NFS.

 It works fine with files that are on the local file system but not with files
 that are CD images mounted using the loopback device.

 Basically copied cd to disk via dd :  dd  if=/dev/cdrom
 of=/images/file_name_of_the_moment.iso

 and then mounted it via the loopback driver

 mount -o loop,ro /images/file_name_of_the_moment.iso /cdimage

 The file  names of course have been changed to protect my sanity.


 When I mount the NFS directory I shared and on which I mounted these CD's
 using the loopback thingy I only see the mount points, not what is mounted
 on them.

 /etc/exports looks like

 /images itasca(ro) calhoun(ro) pepin(ro) phalen(ro) nokomis(ro)
 rockhopper(ro) pequot(ro)

 and if I mount the iso image elsewhere and do the whole tar -clpSf thing into
 the actual mountpoint, the files show up via NFS. Just not the mounted ISO
 image.  Why is that?



--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390


Re: UML or Linux on z/VM

2003-09-25 Thread Kris Van Hees
I have not done an extensive comparison between using Linux on ZSeries and
using UML, but from the experiments I have done with e.g. AFS, one thing
stands out very clearly: Linux on zSeries (under VM) provides you with a whole
range of resource management tools that are not available under UML.  In fact,
you do not have much control over resources when using UML, other than the
standard Unix process priority/nice values for scheduling.  Nothing that comes
even close to the extensive facilities VM provides.

The lack of any decent resource management for virtual instances using UML is
the single biggest problem I see with using UML for consolidation.  Nor do I
believe that UML is intended to be used for consolidation purposes.  Rather it
is a good development tool for kernel-level code, and it provides a very nice
isolated environment where experimenting on a physical box would be more
expensive (in time and effort).

Kris

On Thu, Sep 25, 2003 at 06:09:34PM +0200, Fabrice Vallet wrote:
 Hi list,

 We have plans to implement Linux on zSeries ...

 For me there is no doubt about the solution, but some people who work on
 the Linux Intel platform want to use UML instead ...

 I'm a mainframer and I don't know anything about UML ...

 Do you know the Pros and Cons ...

 Regards,

 Vallet Fabrice
 PSA Peugeot Citroën
 Direction des Systèmes d'Information
 Service INSI/ETSO/MVS
 Site de Poissy - CTI d'Achères
 Tél. 01 30 19 21 79 (29)


Re: z/VM and VMware

2003-09-21 Thread Kris Van Hees
On Sun, Sep 21, 2003 at 08:22:11PM +0200, Rod Furey wrote:
   The Plex86 project (now deceased)

 Not true: Kevin's resurrected it as a Linux hypervisor over
 at Sourceforge whilst the original over at Savannah is
 still alive due to the license decision that Kevin made.
 Not that both are exactly a hive of activity that is...

I should clarify myself a bit...  I meant that the original Plex86 project is
deceased in the sense of being inactive (the CVS updates on the source tree
have been completely empty for the last couple of months), and having been
abandoned by the
original author or so it seems.  His new work is quite different because it's
no longer intended to provide a virtual PC to the user, but rather a
specialized implementation to run Linux on x86.  I would have loved the older
work to continue further, but the complexity may have been a killing blow :(

 This is probably the closest that anyone's going to come to
 a VM-capable equivalent for Linux. (Note: VM provides far more
 than virtualisation, hence VM-capable.)

Indeed.

Kris


Re: z/VM and VMware

2003-09-20 Thread Kris Van Hees
On Sat, Sep 20, 2003 at 02:01:41PM +0100, Albert Schwar wrote:
  Kris ... you hit it ...
 
   As far as I understand VMWare, it is not an emulator, actually,
   but a type of hypervisor that uses many dirty tricks to provide a
   virtual PC on top of a host PC.  ...

 Correct. It does not provide virtual machines.
 You are limited to the specified versions of specified OSes.
 May be useful if you want to run some backlevel stuff
 while migrating.

Actually, this is not entirely correct either, but it does point at a fairly
big limitation in VMWare (and x86 capability for being virtualized)...  VMWare
actually can run more OSes (and versions of OSes) than those 'supported' by
VMWare.  E.g. I had Plan9 running (in text mode, due to not having the specs
for the video card VMWare emulates) on VMWare years ago, and by now Bell Labs
has it running with a GUI.  But it is not at all supported by VMWare, and VMWare does
not contain any code that is customized to make it possible.

The main issue is that VMWare's virtualization is not a perfect virtual copy
of the underlying hardware, but rather a close derivative.  The Plex86 project
(now deceased) had a very good explanation on the issues involved with trying
to virtualize the x86 architecture.

In comparison to z/VM, the biggest limitation I see (functionality wise) is the
limited accessibility to underlying hardware resources.  If VMWare would be able
to run as an OS on the hardware, while providing its guest instances with things
like assigning processor affinity, and accurate CPU utilization control, we'd
be in a much happier place :)

Granted, it is still a long stretch from z/VM, but when you're not really into
writing your own OS on the hardware, VMWare does the job pretty well.  I've not
run into any Linux/*BSD/Plan9 version I've tried that did not run well on VMWare,
which makes it (for *my* purposes) a perfectly decent virtual platform to do OS
testing/development on.  For running production servers on it, I'd be more
hesitant, because of the aforementioned limitations on how you can control the
host resources amongst multiple VMWare guest instances.  I sincerely hope that
VMWare (perhaps with input from IBM, since they partnered) will put some work
into that realm.

Kris

PS: User Mode Linux is another one mentioned in this thread...  It's definitely
something very different because it truly only provides you with a limited
user-level Linux instance rather than providing a virtual machine that you
can run an OS on.


Re: z/VM and VMware

2003-09-19 Thread Kris Van Hees
On Fri, Sep 19, 2003 at 10:53:45PM +0100, Albert Schwar wrote:
 They have nothing common.
 With VM you get virtual machines.
 With VMware you get an emulator which is very restricted.

As far as I understand VMWare, it is not an emulator, actually, but a type of
hypervisor that uses many dirty tricks to provide a virtual PC on top of a
host PC.  That's why it runs close to the host processor's speed for most basic
execution.  It's when privileged instructions need to be executed that it gets
hit (the x86 architecture doesn't make virtualization easy).  So in that sense
it has similarities to z/VM.

I think that the main (big) differences mainly result from the fact that z/VM
can depend on real hardware (microcode) support for virtualization, whereas
VMWare has to play dirty tricks.  In addition to this, z/VM provides its
services as an OS on top of the hardware, whereas VMWare runs as an application
on an underlying OS (as far as I have been able to determine, even the ESX
version does this, with a minimal Linux underneath VMWare itself).

Kris


Re: Porting Problems

2003-08-14 Thread Kris Van Hees
It will depend on what the MUMPS compiler generates.  The only ones I know of
(fortunately) generate actual C code themselves, so you'd most likely have a
fairly easy job in getting it to run on Linux S/390.  You mainly need to
compile the MUMPS compiler sources themselves on Linux S/390 (and resolve any
porting issues that might pop up with that), and then make sure that the C
output from the MUMPS compiler is again compiled with the C compiler on Linux
S/390.  That may turn out to have a few thorny issues, in that it is not at all
inconceivable that the MUMPS compiler may produce C code that is not 100%
portable.

If that is the case, you're in for a bigger task (changing the MUMPS compiler
to generate more portable code).

If your MUMPS compiler does *not* generate C source code, then you have a nice
big compiler development project at hand.

Kris

On Tue, Aug 05, 2003 at 02:32:51PM -0700, Yogish wrote:
 Hi
 We do have the source of the mumps compiler and it is written in C language
 .

 Yogish
 - Original Message -
 From: Adam Thornton [EMAIL PROTECTED]
 To: [EMAIL PROTECTED]
 Sent: Tuesday, August 05, 2003 1:30 PM
 Subject: Re: Porting Problems


  On Tue, 2003-08-05 at 16:28, Yogish wrote:
   Hi all
I am having problems porting GT.M application (MUMPS compiler
 +database) from an intel platform to s390 system. All the binaries and
 compiler are written for intel platform. I was wondering if there any other
 solution to it than rewriting the entire compiler for s390.
   waiting for your replies
 
  Do you have the source for the compiler?  What language is *it* written
  in?
 
  Adam
 
 



Re: zSeries performance heavily dependent on working set size

2003-08-14 Thread Kris Van Hees
What you describe is a very common problem with OSes that implement virtual
memory. It's typically pretty much OK when your program's data space fits in
real memory, but once you run beyond that, performance will most definitely
be much worse when your inner loop runs through all the pages.  After all, you
are hitting a page fault at a lot of pages, causing major paging activity.
And when the working set itself is generally larger than available real memory,
you end up paging out earlier touched pages in favour of new pages, only to
reverse that again when the next offset loop starts.  You just keep cycling
through the pages being swapped in and out.

As far as I know, no OS with virtual memory has a real solution to this because
there is no way for the OS to know how to solve the problem.  It's a bad design
on the programmer's part :)  Some performance issues can only be solved by
using algorithms implemented in a way that takes these issues into account.
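The paging behaviour described above is easy to observe from the shell while
such a program runs.  This is only an illustrative sketch: "./memseq" is an
assumed name for a binary built from Jim's listing below, and GNU time must
be installed as /usr/bin/time.

```shell
# Watch swap-in/swap-out rates (the "si"/"so" columns) once per second
# while the test runs in the foreground.
vmstat 1 &

# GNU time's verbose mode reports major (I/O-requiring) page faults,
# which explode when the working set exceeds real memory.
/usr/bin/time -v ./memseq -m 768 -w -i 1

kill %1    # stop the background vmstat
```

A run whose working set fits in memory shows near-zero major faults; the
reversed-loop version shows one fault per page touched, over and over.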

Kris

On Mon, Aug 11, 2003 at 11:17:15AM -0700, Jim Sibley wrote:
 In following up on some performance problems on the
 zSeries, we've noticed that the zSeries is very
 sensitive to working set size, especially for writes.
 This may explain some of the poor performance that
 people ascribe to the zSeries.

 Is locality of reference as senstive on other
 platforms? How does this effect such things as VM EC
 guests, data base programs with large tables, and java
 programs?

 Consider the loop below:

 It riffles through memory sequentially a byte at a
 time. It writes (flag=1) to each byte within a page
 before going to the next page. In other words, it has
 a nice working set size. (See the complete program
 below for details).

 byteSingle=255;
 time(&start);
   for (i=1;i<=iter;i++) {
  hits=0;
 for (page=0;page<bytes;page=page+4096)
  for (byteINpage=0;byteINpage<4096;byteINpage++)
 {
    hits++;
    byteAddr=page+byteINpage;
    if (flag) byteArray[byteAddr]=byteSingle;
    else byteSingle= byteArray[byteAddr];
    }
 }
 time(&finish);

 Consider what happens when you reverse the page and
 byteINpage loops:

  for (byteINpage=0;byteINpage<4096;byteINpage++)
 for (page=0;page<bytes;page=page+4096)

 where you touch a byte in each page before going to
 the second byte. The working set becomes terrible.
 And so does the performance.

 Complete program:

 /*
   MemSeq - memory loop, reading or writing, sequentially

   [EMAIL PROTECTED]
   this is uncopyrighted material, August 5, 2003
   use at your own risk
 */
 #include <stdlib.h>
 #include <stdio.h>
 #include <string.h>
 #include <time.h>

 void usage();
 int main(
   int    argc,
   char * argv[])
 {
 char   byteArray[ 512*1024*1024 ];
 char   byteSingle;
 int    i,j;
 int    next;
 int    mb=1;
 int    bytes, byteAddr, byteINpage;
 int    hits;
 int    page;
 int    iter=0;
 int    flag=0;
 time_t start;
 time_t finish;

 /* pick apart args */
   for (next = 1; next < argc; next++)
   if (memcmp(argv[next], "-m", 2) == 0)
  {
    mb = atol(argv[next][2]? &argv[next][2]: argv[++next]);
  }
 else if (memcmp(argv[next], "-i", 2) == 0)
  {
    iter = atol(argv[next][2]? &argv[next][2]: argv[++next]);
  }
 else if (memcmp(argv[next], "-w", 2) == 0) flag=1;
 else if (memcmp(argv[next], "-r", 2) == 0) flag=0;
 else usage();

 bytes = mb*1024*1024;

 printf("MemSeq ");
 if ( flag ) printf("Write");
 else printf("Read  ");

 printf("  %d MB, %d iterations ", mb,iter);

 /* initialize memory to something other than all zeros */
 j=0;
 if ( iter > 0)
    for (i=0;i<bytes;i++) {
   byteArray[i]=j;
   j++;
   if (j == 256) j=0;
   }
 /* riffle through memory, writing or reading, sequentially */
 /* the page and byteINpage loops are reversed from MemPage */
 byteSingle=255;
 time(&start);
   for (i=1;i<=iter;i++) {
  hits=0;
 for (page=0;page<bytes;page=page+4096)
  for (byteINpage=0;byteINpage<4096;byteINpage++)
 {
    hits++;
    byteAddr=page+byteINpage;
    if (flag) byteArray[byteAddr]=byteSingle;
    else byteSingle= byteArray[byteAddr];
    }
 }
 time(&finish);

 printf(" %.0f seconds %d bytes/iter\n", difftime(finish,start), hits);

 }
 void
 usage()
 {
   printf(" MemSeq USAGE:\n");
   printf(
 " MemSeq [-i iter] [-m MB] [-w | -r] \n"
 "  -i iterations (default=0) \n"
 "  -m bytes to malloc in MB (default=1 MB)\n"
 "  -w memory write\n"
 "  -r memory read (default)\n"
 );
   exit(1);
 }




 =
 Jim Sibley
 Implementor of Linux on zSeries in the beautiful Silicon Valley

 Computers are useless. They can only give answers. (Pablo Picasso)

 __
 Do you Yahoo!?
 Yahoo! SiteBuilder - Free, easy-to-use web site design software
 http://sitebuilder.yahoo.com


Re: Open Source Community doubts SCO's claims

2003-07-22 Thread Kris Van Hees
On Tue, Jul 22, 2003 at 11:57:55AM -0400, David Boyes wrote:
 Sigh. There goes my free business class upgrades8-(

That entirely depends on whether the airline uses Linux vs SCO vs M$ vs ...
Play your cards right, and they'll give you a free first class upgrade, with
red carpet treatment and all!
Play your cards wrong, and you might end up in the cargo hold.  It does give more
leg room though!

Kris


Re: Make oddity [longish]

2003-07-21 Thread Kris Van Hees
On Mon, Jul 21, 2003 at 10:28:42AM -0600, Ferguson, Neale wrote:
 I have discovered something odd about make's behavior. I have the following
 test case:

I have one quick piece of information that may or may not be relevant:

 Nat
 |-Makefile
 +-/src1
   |-Makefile
   |-x11.c
   |-x12.c
 +-/src2
   |-Makefile
   |-x21.c
   |-x22.c

 The toplevel Makefile contains:

 all :
 @echo Process Nat/src1
 cd src1;$(MAKE) -$(MAKEFLAGS)
 @echo Process Nat/src2
 cd src2;$(MAKE) -$(MAKEFLAGS)

You should probably make those: $(MAKE) $(MAKEFLAGS).  If $(MAKEFLAGS) is empty
(as it is in your test case), you end up executing "make -", which does not
seem to have any defined behaviour (as in, the "-" argument is not described
anywhere as being valid).  I don't think it really ought to make any
difference, but I truly do not know for sure :)

Kris


Re: Make oddity [longish]

2003-07-21 Thread Kris Van Hees
Ok, my previous comment is not relevant (though it is still a bug in the
makefile anyway)...  But here is what is happening:

In the Makefile for src2, the default rule is "ALL: $(lib)", which means that
the default target depends on the $(lib) target, which is defined as ../nat.a.
Since the Makefile in src1 already generated ../nat.a, that file was just
created when the Makefile in src2 is processed, and thus it is newer than the
source files x21.c and x22.c.  As such, the make process decides that nothing
needs to be done to create object files, which results in $? being empty (since
that variable contains only those dependencies that needed to be regenerated).

With your second version, you use an intermediary target "dummy", which forces
the evaluation of its dependencies since 'dummy' is not an actual file (thus it
does not have a timestamp that make can verify).
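The effect is easy to reproduce in miniature (a sketch; the directory and
file names here are made up, and the "prints" comments assume GNU make):

```shell
mkdir -p /tmp/makedemo && cd /tmp/makedemo

# A one-rule Makefile whose recipe just shows $?, the automatic variable
# holding only the prerequisites that are newer than the target.
printf 'lib.a: a.o\n\t@echo "out-of-date prereqs: [$?]"\n' > Makefile

touch lib.a     # target exists...
sleep 1
touch a.o       # ...but the prerequisite is newer, so the rule fires
make            # prints: out-of-date prereqs: [a.o]

touch lib.a     # now the target is newer than a.o again
make            # prints: make: 'lib.a' is up to date.
```

In Neale's first version, ../nat.a comes out of src1 newer than everything in
src2, so src2's make sees nothing out of date and $? expands to nothing.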

Kris

On Mon, Jul 21, 2003 at 10:28:42AM -0600, Ferguson, Neale wrote:
 I have discovered something odd about make's behavior. I have the following
 test case:

 Nat
 |-Makefile
 +-/src1
   |-Makefile
   |-x11.c
   |-x12.c
 +-/src2
   |-Makefile
   |-x21.c
   |-x22.c

 The toplevel Makefile contains:

 all :
 @echo Process Nat/src1
 cd src1;$(MAKE) -$(MAKEFLAGS)
 @echo Process Nat/src2
 cd src2;$(MAKE) -$(MAKEFLAGS)

 The src1/Makefile:

 lib=../nat.a

 files = x11.c \
 x12.c

 .c~.a: ;
 .c.a: ;
 .c~.o: ;
 .c.o: ;

 ALL : $(lib)

 $(lib) : $(lib)(x11.o) $(lib)(x12.o)
 ar rv $(lib) $?
 rm -f $?

 $(lib)(x11.o) : x11.o
 $(lib)(x12.o) : x12.o

 x11.o : x11.c
 rm -f x11.o
 $(CC) -c $(CFLAGS) x11.c

 x12.o : x12.c
 rm -f x12.o
 $(CC) -c $(CFLAGS) x12.c

 The src2/Makefile:

 lib=../nat.a

 files = x21.c \
 x22.c

 .c~.a: ;
 .c.a: ;
 .c~.o: ;
 .c.o: ;

 ALL : $(lib)

 $(lib) : $(lib)(x21.o) $(lib)(x22.o)
 ar rv $(lib) $?
 rm -f $?

 $(lib)(x21.o) : x21.o
 $(lib)(x22.o) : x22.o

 x21.o : x21.c
 rm -f x21.o
 $(CC) -c $(CFLAGS) x21.c

 x22.o : x22.c
 rm -f x22.o
 $(CC) -c $(CFLAGS) x22.c

 (The x11.c files etc. contain int x11() { return 0; })

 If I run make from the top level I see:

 Process Nat/src1
 cd src1;make -
 make[1]: Entering directory `/FS/fs0300/usanefe/Nat/src1'
 rm -f x11.o
 cc -c  x11.c
 rm -f x12.o
 cc -c  x12.c
 ar rv ../nat.a x11.o x12.o
 a - x11.o
 a - x12.o
 rm -f x11.o x12.o
 make[1]: Leaving directory `/FS/fs0300/usanefe/Nat/src1'
 Process Nat/src2
 cd src2;make -
 make[1]: Entering directory `/FS/fs0300/usanefe/Nat/src2'
 ar rv ../nat.a
 rm -f
 make[1]: Leaving directory `/FS/fs0300/usanefe/Nat/src2'

 If I change the lower level makefiles to be:

 lib=../nat.a

 files = x11.c \
 x12.c

 .c~.a: ;
 .c.a: ;
 .c~.o: ;
 .c.o: ;

 ALL : dummy

 dummy : $(lib)(x11.o) $(lib)(x12.o)
 ar rv $(lib) $?
 rm -f $?

 $(lib)(x11.o) : x11.o
 $(lib)(x12.o) : x12.o

 x11.o : x11.c
 rm -f x11.o
 $(CC) -c $(CFLAGS) x11.c

 x12.o : x12.c
 rm -f x12.o
 $(CC) -c $(CFLAGS) x12.c

 I then get:

 Process Nat/src1
 cd src1;make -
 make[1]: Entering directory `/FS/fs0300/usanefe/Nat/src1'
 rm -f x11.o
 cc -c  x11.c
 rm -f x12.o
 cc -c  x12.c
 ar rv ../nat.a x11.o x12.o
 a - x11.o
 a - x12.o
 rm -f x11.o x12.o
 make[1]: Leaving directory `/FS/fs0300/usanefe/Nat/src1'
 Process Nat/src2
 cd src2;make -
 make[1]: Entering directory `/FS/fs0300/usanefe/Nat/src2'
 rm -f x21.o
 cc -c  x21.c
 rm -f x22.o
 cc -c  x22.c
 ar rv ../nat.a x21.o x22.o
 a - x21.o
 a - x22.o
 rm -f x21.o x22.o
 make[1]: Leaving directory `/FS/fs0300/usanefe/Nat/src2'

 Apparently, other makes (i.e. non GNU make) do not exhibit this behavior
 (i.e. the 1st version of the makefile will produce the desired results).



Re: Make oddity [longish]

2003-07-21 Thread Kris Van Hees
On Mon, Jul 21, 2003 at 02:06:24PM -0400, Ferguson, Neale wrote:
 But the generation of x21.o and x22.o will cause them to have a timestamp
 greater than that of ../nat.a and should cause make to do the build. x21.o
 and x22.o are listed as dependencies of nat.a.

I think that make is indeed doing the wrong thing here.  Can you email me
privately with the output from "make -d" (it will be long) for this?  It should
be sufficient to just do "cd src2; make -d $(MAKEFLAGS)", so you don't get
debug output for all three makes.

Kris


Re: X11 vs VNC

2003-06-16 Thread Kris Van Hees
I'd say that it really depends on what you intend to do.  VNC gives you access
to the entire X display as if you were sitting behind the machine with X running
on the console (quite impossible on zLinux of course).  Using X11 on your local
PC, you would run X clients on the zLinux instance, letting them display your
actual windows on your local PC's X server.  That way multiple users can have
X clients running on the zLinux instance.

So it really boils down to what you want to do.  If you need access to the real
X display as it would be on a Linux box, VNC is the way to go, because it is
really just a passthrough service to let you control an X display remotely.  If
you just need to run X clients on the zLinux instance, and have them display on
your local machine, X11 itself is the way to go (products like Hummingbird's
eXceed and others do that).
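The two approaches boil down to a handful of commands.  Illustrative only:
the host name "zlinux-guest" is a placeholder, and option 1 assumes an X
server is already running on the local PC.

```shell
# Option 1: plain X11 -- run a client on the zLinux guest, display locally.
# ssh -X tunnels the X11 connection and sets DISPLAY for you.
ssh -X user@zlinux-guest xterm

# Option 2: VNC -- a whole desktop rendered on the guest, viewed remotely.
vncserver :1                  # on the zLinux guest; creates display :1
vncviewer zlinux-guest:1      # on the local PC
```

Note that a VNC viewer is all the PC needs for option 2, while option 1
needs a local X server (eXceed, Cygwin/X, and similar products provide one).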

Hope this helps.

Kris

On Mon, Jun 16, 2003 at 03:24:26PM -0500, Tom Duerbusch wrote:
 I'm getting too many people talking at me, so it's time to bring it
 out to the people that are in the know.

 Some of our Linux users want to use the X11 interface to access
 Linux/390 (Suse 8 under z/VM).  Some of them have software already
 installed on their Windows PCs and use it for other platforms.  Others
 just need software installed.  And I have to have a X11 server
 installed.

 Now that I'm on Suse 8, and the install method kind of forces the issue,
 I have VNCSERVER installed.  On the PC, I have vncviewer installed.  It
 seems to me that I have a X11 interface running.

 But I'm told by some folks here, that that isn't X11, vnc isn't
 compatable with X11 and there is different software for X11.

 I've been told that I need a product such as cygwin.  I've gone to the
 website and when I try to download it, boy is there a lot of stuff that
 would be downloaded.  (perhaps I got the options wrong...I just took
 whatever defaults there was)  This makes me believe that cygwin and
 vncviewer are quite different animals.

 Any comments?

 Tom Duerbusch
 THD Consulting
 (trying to limit the number of packages installed, and maintained that
 do the same thing)



Re: Linux-390 in South Africa

2002-12-11 Thread Kris Van Hees
On Wed, Dec 11, 2002 at 08:31:59AM -0600, Rich Smrcina wrote:
 Welcome aboard Heinrich!

 I can't speak for OS/390, but the installation process for z/VM boils down to
 a one page document that is designed for folks that are just beginning with
 z/VM or just want all of the defaults.  I don't think the document is
 distributed anywhere, but a number of folks have used it and the word is that
 it is quite easy.

You can find the installation summary as a PDF on the z/VM V4R3.0 base
publication webpage, at http://www.vm.ibm.com/pubs/pdf/vm430bas.html.  The
document in question is the z/VM V4R3.0 Installation Summary, and the URL
for its download is http://www.vm.ibm.com/pubs/pdf/v4r3isum.pdf.

Hope this helps.

Kris



Re: More NSS Info

2002-11-08 Thread Kris Van Hees
Would it not be sufficient to create the NSS with just the boot disk and maybe
swap configured in on the kernel parameter line, and then using something very
early on in the boot process to add the other disks using /proc/dasd/devices?
It might take some work to get the NSS and RO boot disk just right for this to
work, but it would make it a lot more flexible.

Kris

On Fri, Nov 08, 2002 at 10:43:15AM -0500, Adam Thornton wrote:
 I don't have the faintest idea why IBM claims that you have to have an
 identical DASD layout on all machines that share an NSS.

 Admittedly cursory testing seems to show that your NSS will have
 whatever parameter line you burned into it, which does specify a range
 of devices.  But not only can those devices change size (I tested this
 with an ext3 and a swap filesystem), if you boot without a listed
 device, the only problem you will have that I could find was that you
 may trip over it in /etc/fstab.

 But if you have a disk that's not in /etc/fstab, which you detach before
 IPL, you can re-link and then access that disk perfectly normally from
 Linux (using the console or hcp to perform the link).

 So it's looking to *me* like you should pick a lowest-common-denominator
 disk layout (for most of our guests, that'd be / on 150, swap on VDISK
 on 151, and /usr on 152), build the NSS with as small a storage size as
 you can (24M works for us) and then not worry about it.

 If anyone can tell me why I'm wrong, and that, although I have mounted
 differently-sized disks, I'm heading for fatal filesystem corruption
 just around the corner, I'd appreciate it.

 Adam




Re: More NSS Info

2002-11-08 Thread Kris Van Hees
If you put something like cmsfs or hcp on the root disk, you should have enough
to read a config file from the CMS A-disk and use information in there to do
the dynamic configuration of the disks.

Despite what Sun Microsystems did with linking /usr/bin and /usr/sbin into the
root filesystem as /bin and /sbin, the more sensible setup is still to have the
core utilities required to boot a system (and to do basic maintenance) be part
of the actual root partition.  I have banged my head against that stupidity in
Solaris more than once when a disk holding /usr happened to die, especially
when there was no CDROM drive to boot the install media from.

Kris

On Fri, Nov 08, 2002 at 11:23:25AM -0500, David Boyes wrote:
 You would need at least one non-root/swap address mounted as /config or
 something for storing the configuration of what goes where, and you'd
 have to move at least a few of the utilities (eg mount, ifconfig, etc)
 from /usr to /sbin (generating statically linked versions) and include
 /sbin in the root filesystem.

 Much as I dislike Solaris, their diskless workstation filesystem layout
 is a pretty good model for this. We should use that as a model for
 ideas.

 -- db

 David Boyes
 Sine Nomine Associates


  -Original Message-
  From: Linux on 390 Port [mailto:LINUX-390;VM.MARIST.EDU]On Behalf Of
  Kris Van Hees
  Sent: Friday, November 08, 2002 11:00 AM
  To: [EMAIL PROTECTED]
  Subject: Re: More NSS Info
 
 
  Would it not be sufficient to create the NSS with just the
  boot disk and maybe
  swap configured in on the kernel parameter line, and then
  using something very
  early on in the boot process to add the other disks using
  /proc/dasd/devices?
  It might take some work to get the NSS and RO boot disk just
  right for this to
  work, but it would make it a lot more flexible.
 
  Kris
 
  On Fri, Nov 08, 2002 at 10:43:15AM -0500, Adam Thornton wrote:
   I don't have the faintest idea why IBM claims that you have
  to have an
   identical DASD layout on all machines that share an NSS.
  
   Admittedly cursory testing seems to show that your NSS will have
   whatever parameter line you burned into it, which does
  specify a range
   of devices.  But not only can those devices change size (I
  tested this
   with an ext3 and a swap filesystem), if you boot without a listed
   device, the only problem you will have that I could find
  was that you
   may trip over it in /etc/fstab.
  
   But if you have a disk that's not in /etc/fstab, which you
  detach before
   IPL, you can re-link and then access that disk pefectly
  normally from
   Linux (using the console or hcp to perform the link).
  
   So it's looking to *me* like you should pick a
  lowest-common-denominator
   disk layout (for most of our guests, that'd be / on 150,
  swap on VDISK
   on 151, and /usr on 152), build the NSS with as small a
  storage size as
   you can (24M works for us) and then not worry about it.
  
   If anyone can tell me why I'm wrong, and that, although I
  have mounted
   differently-sized disks, I'm heading for fatal filesystem corruption
   just around the corner, I'd appreciate it.
  
   Adam
 
  --
  Never underestimate a Mage with:
   - the Intelligence to cast Magic Missile,
   - the Constitution to survive the first hit, and
   - the Dexterity to run fast enough to avoid being hit a second time.
 




Re: More NSS Info

2002-11-08 Thread Kris Van Hees
On Fri, Nov 08, 2002 at 01:15:01PM -0500, Adam Thornton wrote:
 On Fri, Nov 08, 2002 at 10:58:30AM -0600, Rick Troth wrote:
   If you use the cmsfs stuff, that information can all be on the
   191 disk and read by the startup scripts.
  What about a CMSFS that can do directories and specials (device files)
  akin to the UMSDOS hack?

 If you create a CMS file called PROGRA~1 DIR I'll have to murder you.
 Just so you know.  Other than that, sure, sounds like a plan--I assume
 you mean that you use some filesystem convention like a file which
 always has some particular name, which contains a CMS filename to Unix
 directory mapping?

Perhaps (though it might be more work) use the same naming convention that is
used to encode long filenames on the ISO9660 filesystem, just to stick to a
well-known and accepted convention?

I would *love* to see a CMSFS that can support things like device files so
we can finally put /dev somewhere other than the root filesystem, so / can
truly be made RO.  I worked on that using initrd, but cmsfs would be so much
nicer (as long as it is efficient).  Then again, if devfs keeps going the way
it is, the need for a writable /dev may finally disappear!  I do fear though
that it won't go all the way :(

Kris



Re: More NSS Info

2002-11-08 Thread Kris Van Hees
On Fri, Nov 08, 2002 at 02:16:47PM -0500, Matt Zimmerman wrote:
 On Fri, Nov 08, 2002 at 12:24:13PM -0500, Kris Van Hees wrote:
  I would *love* to see a CMSFS that can support things like device files so
  we can finally put /dev somewhere other than the root filesystem, so / can
  truly be made RO.  I worked on that using initrd, but cmsfs  would  be  so
  much nicer (as long as it is efficient).  Then again, if devfs keeps going
  the way it is, the need for a writable /dev may finally disappear!   I  do
  fear though that it won't go all the way :(

 /dev doesn't prevent / from being read-only.  devfs sidesteps this problem
 entirely, and with a normal /dev, device nodes are generally static.  If you
 consider it to be useful, it is not too complex to run Linux with a
 read-only root filesystem.

I worked on a RO / before (presented briefly at SHARE in TN), and unfortunately
Linux has (or had; they may have fixed it) a C library that uses the Unix
domain socket /dev/log for syslog handling, and that socket is created
dynamically at each boot.  The introduction of devfs and possible changes to
glibc may have removed that limitation (which would be a GOOD thing).

Kris



Re: Proc File system on v-disk

2002-10-09 Thread Kris Van Hees

On Wed, Oct 09, 2002 at 07:45:02AM -0400, Davis, Lawrence wrote:
 Is the Proc FS hit a lot during normal Linux operations?  It would seem to
 be a good candidate for V-disk.

The /proc filesystem is a virtual file system that exists only in core memory;
there is no actual file storage on any backing store for it.  As you can see
in /etc/fstab, it is specified without an actual disk partition as the first
field of its definition line, which means that no disk space is allocated for
it (nor does it use any).  As such, moving it to a V-disk is simply impossible.
Since it is served from storage by the kernel, it is about as fast as it can
get.
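For reference, the /etc/fstab entry in question typically looks like the line
below; the first field is just a placeholder, since no block device backs the
filesystem:

```
none  /proc  proc  defaults  0 0
```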

Kris



Re: How to fix early boot problem

2002-07-03 Thread Kris Van Hees

On Tue, Jul 02, 2002 at 09:50:57PM -0500, Tom Duerbusch wrote:
 1.  How to you edit a file with a line mode terminal?  Is it available
 very early in the boot process?

Well, ed is your friend.  Or there is always sed or awk for stream edit
operations.
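To illustrate the stream-edit route (the file name and contents below are made
up for the demo; on the real system you would operate on /etc/fstab from a
rescue login or from another image):

```shell
# Delete a bad line from a copy of an fstab-like file using sed, which needs
# no full-screen terminal support and works fine from a line-mode console.
f=/tmp/fstab.demo
printf '/dev/dasdb1 / ext2 defaults 0 1\n/dev/bogus /usr ext2 defaults 0 2\n' > "$f"
sed '/bogus/d' "$f" > "$f.new" && mv "$f.new" "$f"
cat "$f"    # only the good line remains
```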

 2.  Is there a way of mounting Linux4 150 to another Linux image and fix
 the files from another system?

That ought to be perfectly possible.  Any disk formatted for Linux can be
attached to another Linux image and mounted there as a filesystem.

 3.  Is there a better way of fixing this problem.  That is without
 restoring the entire Linux image, or clubbing myself in the head so I
 don't cause the problem in the first place?

In general, I do not know of any way to solve this other than the two you have
listed above.  Well, there is of course the possibility of simply mounting all
the filesystems by hand, based on what you remember of the contents of the
/etc/fstab file (or on whatever you feel is appropriate at the time).  That
will bring the system to a state where you should be able to start services
again and then log in from somewhere, so that you can use vi or any other
editor you prefer.

In all, I still prefer No. 2 and No. 1, respectively.

Kris



Re: [OT] Neale's effective use of irony and sarcasm

2002-06-14 Thread Kris Van Hees

On Fri, Jun 14, 2002 at 08:58:40AM -0400, Scott Courtney wrote:
right with my wife, who does not. They do not have English muffins.

That is not entirely surprising, since somehow there seems to be a very
distinct trend to call things by the wrong name.  Whoever coined the term
'French fries' ought to be prosecuted for committing one of the most hideous
crimes in human history.  There is nothing French about them, and the French
do not even call them French.

Then again, much like the English muffin, real Belgian waffles rarely resemble
anything you can order in the US when asking for a Belgian waffle.  I am often
so tempted to ask whether their Belgian waffle is a 'Luikse wafel', 'Brusselse
wafel', 'Brugse wafel', and so on...  They are all different.

At least Belgian beer is not being called anything else!  Except in extreme
cases, such as asking at an Applebee's what Belgian beers they have, the
waitress going to the bar to check, and coming back stating that the only
Belgian beer they have is Heineken.  Yuck!!!  Ordering a coke was the best
thing to do in that case.

Kris



Re: bootshell-SMSG not authorized

2002-06-12 Thread Kris Van Hees

I might be shooting in the dark here, but do you possibly mean to send the
command:

SEND SYSSOFT1 HALT

instead?

Kris

On Mon, Jun 10, 2002 at 03:30:42PM -0500, Hank Calzaretta wrote:
 I'm trying to implement Mike Kershaw's bootshell program in order to
 shutdown Linux instances from the VM operator console.
 I issue the following command from OPERATOR:

 SMSG SYSSOFT1 HALT

 I get the following response:

 HCPMFS057I SYSSOFT1 not receiving; not authorized

 I did a SET SMSG ON and SET SECUSER OPERATOR in the profile exec for the
 Linux user.  Output from the Q SECUSER and Q SET commands issued on the
 SYSSOFT1 user follow:


 CP Q SECUSER
 SECONDARY USER OPERATOR IS LOGGED ON

 CP Q SET
 MSG ON  , WNG ON  , EMSG ON  , ACNT OFF, RUN ON
 LINEDIT ON , TIMER OFF , ISAM OFF, ECMODE ON
 ASSIST OFF   , PAGEX OFF, AUTOPOLL OFF
 IMSG ON  , SMSG ON  , AFFINITY NONE   , NOTRAN OFF
 VMSAVE OFF, 370E OFF
 STBYPASS OFF   , STMULTI OFF   00/000
 MIH OFF , VMCONIO OFF , CPCONIO OFF , SVCACCL OFF , CONCEAL OFF
 MACHINE ESA, SVC76 CP, NOPDATA OFF, IOASSIST OFF
 CCWTRAN ON , 370ACCOM OFF


 What have I missed?

 Thanks,
 Hank Calzaretta
 Wallace Computer Services, Inc.




SLES7 gone?

2002-04-10 Thread Kris Van Hees

Does anyone have any information on the status of nozzle.suse.de?  It is no
longer present on the network it seems, and I cannot find any other location
where an SLES7 beta can be downloaded.  It makes life somewhat difficult since
I am trying to test whether something will compile OK under SLES7, and it
needs a library that I did not install when I originally used the SLES7 beta.

Anyone?

Kris



Re: CTRL_ALT_DEL

2002-03-26 Thread Kris Van Hees

This is usually handled by the ca entry in the /etc/inittab file of your
system.  E.g. mine says:

ca:12345:ctrlaltdel:/sbin/shutdown -t1 -a -r now

which signals an orderly shutdown if I use the Ctrl-Alt-Del combination.

Kris

On Tue, Mar 26, 2002 at 11:43:11AM -0700, Ferguson, Neale wrote:
 I am playing around with automated shutdown and am trying to
 use the ctrl_alt_del() routine to achieve the halt. If this
 routine is entered it checks a global variable C_A_D which by
 default is set on. However, during startup of my SuSE system
 it is turned off (via syscall sys_reboot). If I force it on
 and drive the routine I get a system restart without any
 orderly shutdown such that my filesystems need checking etc.
 What options do I have to change this behavior (a call to
 sys_reboot with the correct parameters can cause halt or
 power-off actions to be taken but are there more inventive
 things I can do)?




Re: your mail

2002-03-26 Thread Kris Van Hees

On Tue, Mar 26, 2002 at 04:24:51PM -0500, Bruce Hayden wrote:
 I'd guess from the STSI instruction - it says it returns up to 8 levels.

I wonder whether that has simply not been released yet.  The experimental code
for 2.4.17 only displays up to the 3rd level, even though it does indeed use
the STSI instruction to get its information.

Kris



Re: XFree86 4.2.0 -- Noted in passing

2002-03-25 Thread Kris Van Hees

On Mon, Mar 25, 2002 at 05:16:24PM -0500, Adam Thornton wrote:
 Except that, of course, no one knows about that yet, because I haven't
 released it.  So Scott must just have been talking about L/390 on Herc
 under L/390 on VM.

The *least* you could have done was add Bochs somewhere in that list.  Since
it is a well known fact that running Hercules under L/390 is nonsense, and that
it is much better to run it on x86, you should have installed Bochs on L/390 on
VM.  Then you can run Hercules for x86 in Bochs, etc...

Homework: What machine should you run VM on in order to still get more than
  1 BogoMIPS in L/390 on Hercules under Bochs under L/390 on VM?

Kris



Re: How to shrink a VM mini disk file system prior to cloning?

2002-03-14 Thread Kris Van Hees

You could allocate a new minidisk for the root file system, and simply copy
over all the files using tar.  Then you can make that new root file system the
IPL device using zipl or silo.  Copying the files with tar will preserve links,
permissions, etc.
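A hedged sketch of that tar pipe (the paths are stand-ins: /tmp/oldroot plays
the role of / and /tmp/newroot the mount point of the freshly formatted
minidisk):

```shell
# Copy a tree with tar so that permissions, timestamps, and symlinks survive,
# which a plain cp would not necessarily guarantee.
src=/tmp/oldroot
dst=/tmp/newroot
mkdir -p "$src" "$dst"
echo hello > "$src/file"
ln -sf file "$src/link"      # a symlink that must be carried over intact
(cd "$src" && tar -cf - .) | (cd "$dst" && tar -xpf -)
ls -l "$dst"
```

On the real system you would also want to exclude other mount points (GNU tar's
--one-file-system helps) so /proc and friends do not get swept along.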

Kris

On Thu, Mar 14, 2002 at 10:27:49AM -0500, Gustavson, John (ECSS) wrote:
 We installed the 2.4.7 kernel on 5 full pack mini disks as follows:

 /dev/dasda1
 /dev/dasdb1
 /dev/dasdc1   lvmsuse
 /dev/dasdd1   lvmsuse
 /dev/dasde1   lvmsuse

 Our parmfile is:
 dasd=0200,0100,0101,0102,0103 root=/dev/dasdb1 noinitrd

 so dasda1 is the swap file, and dasdb1 is the root, and dasdc1 starts our
 LVM, where we have /usr mounted.

 I know there are ways to shrink the LVM, but what would be the best way to
 shrink down our dasdb1 where our root is mounted?  This device is a full pack
 mini disk (3390), and it is way over-allocated for what it is actually using.
 In OS/390 USS, for a similar problem, we did a pax on /, and unpaxed to a
 smaller file system.  I don't see pax as part of our operating system, but
 maybe there is a better way to do this.


 Enterprise Central Software Services (ECSS)
 570 Washington Street - 2nd floor
 New York, New York, 10080-6802

 Telephone: 1-212-647-3793
 Fax: 1-212-647-3321
 Email: [EMAIL PROTECTED] mailto:[EMAIL PROTECTED]




Bug in Linux S/390 gcc?

2002-02-07 Thread Kris Van Hees

While digging through a problem with OpenAFS on Linux S/390, I found that gcc
uses a builtin version of the ffs() (find-first-set bit in a word) function,
unless -ansi or -fno-builtin is used as option.  That builtin version of the
ffs() function seems to be inconsistent with versions found in the Linux
kernel (asm-s390/bitops.h) and in the C library (verified against glibc-2.2.4).
It is supposed to return the bit position of the first bit that is set in the
given word, with LSB being 1, and MSB being 32.

The implementation that is used for the gcc builtin version returns 32 when
0x8001 is passed in as the word, whereas both the kernel implementation and
the C library version return 1 as expected.

Is there a specific reason for the current implementation in gcc, or is it an
actual bug?  If so, who can fix it? :-)

Kris



Re: OT: editor discussions

2002-01-31 Thread Kris Van Hees

On Thu, Jan 31, 2002 at 11:42:44PM -0500, Adam Thornton wrote:
 Oh, lay off the cough syrup, David.

 Of *course* Teco exists for Linux:
 ftp://ftp.mindlink.net/pub/teco/ptf_teco.tar.gz

 I just changed the -ltermcap to -lncurses in the Makefile and it was
 fine.

But did you also end up with a proper calculation of 'pi'?  One has to verify
the correctness of programs to the fullest extent of one's capabilities.  And
starting with a proven test puts one ahead of the game.

Kris

PS: Anyone around that has the Towers of Hanoi for vi?



Re: RH 7.2 on IS ?

2002-01-12 Thread Kris Van Hees

On Thu, Jan 10, 2002 at 05:12:49PM -0500, Adam Thornton wrote:
 Anyone tried to install RH 7.2 on an IS?

 I boot from the reader fine, but then everything just seems to hang when
 I run loader after telnetting in.

 I don't have any /dev/dasdX devices, which seems strange, although the
 initial dasd probe seems to have worked.

I found that when trying to install RH 7.2, the ctc0 device comes up with an
enormously high MTU value.  After setting it down to the 1492 I was using on
the other end, it worked fine, though I had a problem later on when anaconda
got loaded.  It displayed the language selection screen and then went belly up
(as in it was not responding to any keyboard input whatsoever).  I went back
to using rhsetup and that worked fine.  When I have more time, I'll probably
look into that anaconda issue a bit more.

So... the trick is to use 'ifconfig ctc0 mtu 1492' (or whatever value matches
the other end) right before you start loader or rhsetup, and you'll be fine.

Kris