Re: HiperSockets and Linux...

2021-03-08 Thread Vic Cross
Frank wrote:

>Okay, following along in "IBM HiperSockets Implementation Guide"
>Chapter 3 "Software configurations for HiperSockets. The command
>"ifconfig enccw0.0.0800 192.168.250.88 netmask 255.255.255.0 up"
>seems to work, only temporarily.
>
># ping 192.168.250.101
>PING 192.168.250.101 (192.168.250.101) 56(84) bytes of data.
>64 bytes from 192.168.250.101: icmp_seq=1 ttl=255 time=0.362 ms
>64 bytes from 192.168.250.101: icmp_seq=2 ttl=255 time=0.313 ms
>
>Then just a few seconds later:
># ping 192.168.250.101
>PING 192.168.250.101 (192.168.250.101) 56(84) bytes of data.
>From 32.140.72.137 icmp_seq=9 Packet filtered
>From 32.140.72.137 icmp_seq=19 Packet filtered
>From 32.140.72.137 icmp_seq=20 Packet filtered

Your ifconfig has proven that the interface works, which is great, but using 
ifconfig (or any of the manual interface config commands, such as ip) is a very 
temporary config method. At best it lasts until the next reboot, but these days 
often not even that long.

I suspect that NetworkManager is trying to configure the interface for you. Its 
standard treatment of unconfigured interfaces is to use DHCP to get an address, 
so it's probably the DHCP client that is clearing what you do with ifconfig. 
You should have an ifcfg-* file in /etc/sysconfig/network-scripts/ for your 
OSA/VSwitch interface... Copy this file to ifcfg-enccw0.0.0800 and make the 
required changes inside it (you can delete the UUID line; if you still have an 
OSA/VSwitch to reach most things, delete the GATEWAY line; and if you want to 
talk to z/OS, make sure the OPTIONS line includes "layer2=0"). 
Then restart NetworkManager (systemctl restart NetworkManager.service). NM will 
then see your HiperSockets as another "System" connection and manage it for you.
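For reference, a minimal ifcfg file for the HiperSockets interface might look 
like the sketch below (the IP address, subchannel triplet and device name are 
illustrative values from this thread, not something to copy blindly):

```
# /etc/sysconfig/network-scripts/ifcfg-enccw0.0.0800 -- hypothetical values
DEVICE=enccw0.0.0800
TYPE=Ethernet
NETTYPE=qeth
SUBCHANNELS=0.0.0800,0.0.0801,0.0.0802
OPTIONS="layer2=0"
BOOTPROTO=none
IPADDR=192.168.250.88
NETMASK=255.255.255.0
ONBOOT=yes
# No GATEWAY line -- the OSA/VSwitch interface keeps the default route
```

Note BOOTPROTO=none, so NetworkManager won't start a DHCP client on the 
interface and undo your addressing.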

Regards,
Vic

--
Vic Cross
Solutions Engineer, Z Acceleration Team
IBM Z (Worldwide)
E-mail: viccr...@au1.ibm.com Twitter: @viccross

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www2.marist.edu/htbin/wlvindex?LINUX-390


Re: Sharing dasd between linux

2021-02-08 Thread Vic Cross
Rinaldo wrote:

>I read the page [
>https://www.ibm.com/support/knowledgecenter/SSB27U_6.4.0/com.ibm.zvm.
>v640.hcpa5/dsbmvm.htm |
>https://www.ibm.com/support/knowledgecenter/SSB27U_6.4.0/com.ibm.zvm.
>v640.hcpa5/dsbmvm.htm ] , and I was unsure if that could work on
>linux (more specifically RHEL 8). 
>If I share disk between 4 RHEL guests as mentioned in the link, all
>of them could read/write without risk of integrity? 

This page discusses only the z/VM I/O aspects of sharing a DASD between guests; 
it does not cover the issues of sharing a *filesystem*. Most filesystems have 
metadata that can be adversely affected by simultaneous access from multiple OS 
instances. Since you're asking about shared read-write, this definitely applies 
to you.

The standard Linux filesystems such as ext2/3/4 and xfs CANNOT do this. 
Corruption of the filesystem is very likely, almost guaranteed. You would need 
a cluster-aware filesystem such as OCFS2 or GFS, which also requires the 
cluster management middleware to maintain quorum and prevent split-brain. The 
extensions that provide these capabilities are available for all the supported 
distributions these days. IBM has a fine clustered filesystem (called Spectrum 
Scale) available for Z and LinuxONE as well. 

You may still need to do the things mentioned in that KC link to make it work, 
but it will be best to check with the documentation of the cluster filesystem 
you choose. RESERVE/RELEASE in the way our architecture does it might confuse a 
filesystem that provides its own arbitration of write access.

If you were just wanting shared read-only, it's generally much easier. ext2 
handled read-only without fuss, but the introduction of journalling in ext3 
meant that special action had to be taken around that. I don't have experience 
with read-only xfs but a quick search shows some positive results.
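If it helps, one approach I've seen for read-only sharing of a journalled 
filesystem (treat this as an assumption to verify against your distro's doco, 
not gospel) is to mount with the journal disabled, so the reading guest never 
tries to replay it:

```
# On the read-only guest; /dev/dasdb1 is a placeholder device name.
# "noload" tells ext3 not to load or replay the journal, which would
# otherwise be a write into the shared volume.
mount -o ro,noload /dev/dasdb1 /mnt/shared
```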

Regards,
Vic



Re: RHEL8 Installation Problem

2020-12-23 Thread Vic Cross
Hi Neale,

This one comes from an Ansible playbook, so forgive the Jinja fields (but I 
guess it's sanitised, lol)

ro ramdisk_size=4 cio_ignore=all,!condev
rd.dasd={{ guest_install_dasd }}
rd.znet={{ guest_install_znet }}
rd.znet={{ guest_internal_znet }}
ip={{ guest_temp_ipaddr }}::{{ guest_install_gateway }}:{{ 
guest_install_netmask }}:{{ guest_install_hostname }}:{{ guest_install_nicid 
}}:none
nameserver={{ guest_install_nameserver }}
inst.repo={{ guest_install_baseurl }}/
inst.ks={{ guest_install_ksurl }}/{{guest_install_hostname }}.ks

Regards,
Vic




Re: Universal Naming Convention....

2008-06-28 Thread Vic Cross
On Fri, 27 Jun 2008 11:55:05 am John Summerfield wrote:

 http lacks that convenience, but really comes into play when
 distributing files to remote, usually anonymous and untrusted, users.

It surprises and amazes me what shapes simple old HTTP has been bent into. :)
Thanks to extensions like WebDAV, HTTP provides a transport that does a lot
more than just serve static HTML.  SSL and TLS mean that it can do so
securely.

 The nearest Linux/Unix have is NFS, but again that lacks the convenience
 (from the users' viewpoint) of CIFS and AFP.

You're forgetting newer methods of delivering *information* rather than just
files.  This is what David is getting at: the servers that support UNC naming
only support pointing to a file and doing I/O.  URLs literally point at
information: files containing data, applications that can obtain data and
present it as a file, web services that can aggregate information from
various locations, and so on.

Also, don't forget that URL != HTTP.  A URL starts with the transport to be
used to satisfy the request: HTTP is just the most common use of URLs.
There's nfs://, ssh://, ftp://, and plenty more... and if you need to mimic
UNCs, there's smb:// (or cifs://) too.

 If you say, "It's just sloppy architecture" then you are judging
 yesterday's best practice by today's standards. It was created for LANs
 of small computers - Pentiums, and 486s and less.

Maybe it wasn't sloppy architecture in its day.  Tom's request landed in
2008 though, not in 1990.  For a new solution being implemented in an
organisation today technologies must be measured against what is available
today, and in that environment UNC naming has to be found wanting.

When this system goes live, I'd predict that one of the first questions that
will arise is "why can't I view these documents on my ${PDA}?"  I expect that
question would be a lot harder to answer if UNC is used...  ;-)

Cheerio,
Vic Cross

--
This message has been scanned for viruses and
dangerous content by MailScanner, and is
believed to be clean.



Re: Swap oddities

2007-10-29 Thread Vic Cross
On Sun, 28 Oct 2007 08:41:16 am Marcy Cortes wrote:

 So, if I'm understanding right, those would be dirty pages no longer
 needed hanging out there in swap?

That's right -- but you'll get arguments on the definition of "no longer 
needed".  Having sent a page to the swap device, Linux will keep it out there 
even if the page gets swapped in.  The reason: if the page again needs to be 
swapped out, and it wasn't modified while it was swapped back in, you save an 
I/O (so the claim is that it's not that it's "no longer needed", it's that 
it's "not needed right now but might be again soon").

I read about this and other interesting behaviours at http://linux-mm.org -- 
it seems that the operation of Linux's memory management has generated enough 
discussion for someone to start a wiki on it. :)

The real issue in terms of VDISK is that even if we could eliminate the "keep 
it in case we need it" behaviour of Linux, there's no way for Linux to inform 
CP that a page of a VDISK is no longer needed and can be de-allocated.  Even 
doing swapon/swapoff, with an intervening mkswap, even chccwdev the thing off 
from Linux and back on again, won't tell CP that it can flush the disk -- 
AFAIK, only DELETE/DEFINE would do it.

 I thought the point of the prioritized 
 swap was that it'd keep reusing those on the highest numbered disks
 before starting down to the next disk.  It was well into the 3rd disk
 (they are like 250M, 500M, 1G, 1G).   (at least I think it used to work
 that way!).  Could there be a linux bug here?

From what I've seen, Linux is working as designed unfortunately.  The 
hierarchy of swap devices was a theory (tested by others much more skilled 
and equipped than me, even though I drew the funny pictures of it in the 
ISP/ASP Redbook).  Regardless, it was only meant as an indicator for how big 
your *central storage* needs to be; as soon as the guest touched the second 
disk it was a flag to increase the central.  (Can't increase central?  Divide 
the workload across a number of guests.)  Ideally you *never* want to swap; 
having a swap device that's almost as fast as memory helps mitigate the cost 
of swapping, but using that fast swap is not a habit to keep up.

It's also quite possible that your smaller devices became fragmented and 
unable to satisfy a request for a large number of contiguous pages.  Such 
fragmentation would make it ever more likely that the later devices would get 
swapped-onto as your uptime wore on.

 Seems like vm.swappiness=0 (or at least a lower number than the default
 of 60) would be a good setting for Linux under VM. Has anyone studied
 this?

/proc/sys/vm/swappiness was introduced with kernel 2.6 [1].  The doco suggests 
that using swappiness=0 makes the kernel behave like it used to in the 2.4 
(and earlier) days -- sacrifice cache to reduce swapping.  I have seen SLES 9 
systems (with 2.6 kernels) appear to use far more memory than equivalent SLES 
8 systems (kernel 2.4), so from experience a low value is useful for the z/VM 
environment [2].
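If anyone wants to experiment, the knob can be read and set on the fly (a 
sketch only; the value 10 below is an illustrative low number, not a tuning 
recommendation):

```shell
# Read the current value -- present on any 2.6+ kernel
cat /proc/sys/vm/swappiness

# Change it for the running kernel (root required):
#   sysctl -w vm.swappiness=10
# ...or persist it across reboots with a line in /etc/sysctl.conf:
#   vm.swappiness = 10
```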

CMM is meant to be the remedy to all of this of course.  Now we can give all 
our Linux guests a central storage allocation beyond their wildest dreams 
(I'm kidding), and let VMRM handle the dirty work for us.  I could imagine 
that we could be a bit more relaxed about our vm.swappiness value then -- we 
still don't want each of our penguins to buffer up its disks, but perhaps the 
consequences aren't as severe when allocations are more fluid and more 
effective sharing is taking place[3].  Unfortunately I haven't used CMM in 
anger as I'm a little light on systems to play with nowadays.

Cheerio,
Vic Cross

[1] Swappiness controls the likelihood that a given page of memory will be 
retained as cache if the kernel needs memory -- it's a range from 100 (means 
cache pages are preserved and non-cache pages are swapped out to satisfy the 
request) to 0 (means cache pages are flushed to free memory to satisfy the 
request).
[2] If only to preserve the way that we used to tune our guests prior to 
2.6. :)
[3] We might even be able to do the Embedded Linux thing and disable swapping 
entirely!



Re: Swap oddities

2007-10-29 Thread Vic Cross
On Tue, 30 Oct 2007 06:19:33 am Marcy Cortes wrote:
 I'm not sure it is working as designed.

I never said it was a good design -- and perhaps I should have read your
earlier messages prior to saying that. :)  It does depend on your point of
view though -- it's another one of these aspects that belies Linux's
single-system non-resource-sharing heritage.  In a non-shared environment,
keeping swap pages hanging around on disk is a good design point in that it
can realistically save costly I/O.  It's not so good for us though.  :)

 Eventually, when we use up our
 swap, WAS crashes OOM (that's *our* real issue, at least our biggest one
 anyway :).

Yes... and that's not going to be solved by CMM or creating different swap
VDISKs or anything like that.  The earlier hints about JVM heap size and
garbage collection and so on will be useful here.  I guess the application is
being checked for leaks as well -- or do your developers write perfect code
first-time-every-time too? ;-P

 But if we are able to swapoff/swapon and recover that space
 without crashing WAS that kind a says to me that it didn't need it
 anyway - course I haven't tried that whilst workload was running
 through...  Maybe it is destructive.

It might be, but as long as your Linux has more free virtual memory than the
amount of pages in use on the device you want to remove, you *should* be able
to do a swapoff without impact (things might get a little sluggish for a few
seconds while kswapd shuffles things around though).  It would be nice to be
able to tell accurately just how much swap space is being used on a
device -- /proc/meminfo is system-wide.  SwapCached in /proc/meminfo is a
helpful indicator that counts the swap space hanging around (you could try
http://www.linuxweblog.com/meminfo among heaps of other places for more info
about what the numbers from meminfo mean); if this number is low compared to
your total available swap then you're not likely to get much benefit from
swapoff/swapon cycles.
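A quick way to eyeball those numbers together (field names as they appear in 
/proc/meminfo on 2.6 kernels):

```shell
# Total swap, unused swap, and the pages kept "just in case" in swap
grep -E '^(SwapTotal|SwapFree|SwapCached):' /proc/meminfo
```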

 We plan to experiment some with the vm.swappiness and see if that helps.
 I guess in the very least, we can add enough vdisks and enough VM paging
 packs to get through the week without a recycle until we figure this out as
 long as response time & cpu savings remain this good with 6.1.

Good plan, although vm.swappiness is only likely to delay your swap usage
rather than eliminate it entirely (if something is asking for that much
memory, at some point it's going to have to get it from somewhere).  Of
course if it delays heavy swapping long enough to get you through the week
then that's a win.

While you've got this WAS issue you are *possibly* justified in throwing a
DASD swap device at the end of your line of VDISKs (I emphasise possibly
because I don't want to offend Rob et al too much).  Perhaps the last thing
you want would be to just keep adding VDISKs and VM page packs until your VM
paging system is consumed by leaked Linux memory.  You could do a nightly
swapoff/swapon of some of the VDISKs to flush things out and reduce the
activity to the DASD swap.  I guess what I'm saying is that you could think
about this WAS problem as an aberration rather than the normal operating mode
for your system -- don't jeopardise your entire environment for the sake of
one problem system, and be prepared to let best-practice slide a bit while
you get the issue sorted.  Of course you're in a much better position than me
to decide if your paging environment needs such protection.

I also transposed my client's problem onto your shop -- I thought you were
concerned about the number of pages allocated to VDISKs.  That's why I
mentioned the stuff about DELETE/DEFINE of your VDISK swaps.

Best of luck with the issue!

Cheerio,
Vic Cross



Re: Pros/Cons of FCP connection DASD

2007-04-06 Thread Vic Cross
On Fri, 2007-04-06 at 16:31 -0500, Marcy Cortes wrote:
 SAN is cheaper ... snip

 This one I don't quite get? If I'm using the same DS8000, why would it
 be cheaper?

I can think of one way...

If you want to do multipath I/O, for ECKD disk that's PAV and will cost
you extra for the PAV licensing (at least on IBM storage, but I can't
imagine the other vendors doing themselves out of money (: ).  For Open
Systems disk in the same box you can multipath using FCP without having
to pay for PAV.

This can be a factor even if you're not interested in multipath for your
Linux data.  If you need to have PAV for z/OS and your Linux data is in
the same box, using SAN for the Linux data will help you squeak into a
lower price bracket for your PAV license -- I'm told that PAV is
licensed on the *entire* ECKD capacity in the machine.

Using other disk vendors might be another way to make things cheaper,
but you'd likely run the risk of becoming unsupported.  And, as others
have said, what's cheaper or faster might not necessarily be better (for
various reasons).

Cheerio,
Vic Cross





Re: Philosophy: connecting to a Linux server

2007-04-03 Thread Vic Cross
On Mon, 2007-04-02 at 16:11 +0200, Rob van der Heij wrote:
 We did it slightly different with an experimental patch to OpenSSH
 that allows for the public keys to be kept in LDAP. That means there's
 only one place where the public key is held. That LDAP server would
 allow the end-user to upload a (new) public key through some
 authenticated interface. And the Linux servers can trust that LDAP to
 provide the right public key. The same LDAP also gives user and group
 information for Linux to allow login.

This OpenSSH patch[1] is (IMHO) in need of more airplay.  AFAIK Gentoo
is the only distro that includes it as part of their OpenSSH package (I
don't have SLES10 or RHEL5 nearby, they may have finally picked it up).
For shops using LDAP for authentication, it makes a lot of sense -- you
can have all your user detail in a sturdy LDAP directory, and using
appropriate filter configurations for nss_ldap and pam_ldap you can
still provide per-server access control[2].  The 'uploading' of the key
can be done with any of the LDAP administration tools, such as
phpldapadmin, LAM, or Luma -- the authorised key is just an ASCII text
field so cut-and-paste will work.

Another method that works is to share user home directories via NFS.
When a user logs on to a system their home directory is automounted,
which makes their ~/.ssh/ and consequently their authorised keys
available.
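A minimal autofs setup for that might look like this (the server name and 
export path are placeholders):

```
# /etc/auto.master
/home  /etc/auto.home

# /etc/auto.home -- wildcard map: each user's directory mounts on demand;
# "&" is replaced by the matched key (the username)
*  -fstype=nfs,rw  nfsserver:/export/home/&
```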

I used to keep my private key on a USB-key, but convenience (or the lack
thereof) was a barrier.  I'm wondering if some little mobile-phone app
that worked over IR or Bluetooth would be a substitute -- people
sometimes take more care of these. ;)

Cheerio,
Vic Cross

[1] Known as OpenSSH-LPK.  Used to be hosted by the OpenDarwin project,
but now seems to have new owners...  There's a Trac at
http://dev.inversepath.com/trac/openssh-lpk.
[2] pam_ldap provides its own function for access checking based on an
attribute in LDAP, but we found it was better to use a filter in the
nss_base_* settings.  If you only use the pam_ldap setting, ALL
accounts in LDAP show up as accounts on the system even though the users
don't have access, which will confuse your auditors...  Better to filter
out the unauthorised accounts at the NSS level so they don't even show
up.




Re: Philosophy: connecting to a Linux server

2007-04-03 Thread Vic Cross
On Mon, 2007-04-02 at 16:11 +0200, Rob van der Heij wrote:

 We did it slightly different with an experimental patch to OpenSSH
 that allows for the public keys to be kept in LDAP. That means there's
 only one place where the public key is held. That LDAP server would
 allow the end-user to upload a (new) public key through some
 authenticated interface. And the Linux servers can trust that LDAP to
 provide the right public key. The same LDAP also gives user and group
 information for Linux to allow login.

This is an excellent patch that needs a lot more airplay.  I don't know
why it's never been picked up by mainstream distros; Gentoo is the
only one I know of that includes it (I don't have SLES10 and RHEL5
systems to check, though, they may have finally picked it up).

Combined with the pam_ldap and nss_ldap configuration options[1] that
allow you to restrict user accounts to a subset of all your hosts, you
can have all your users in a single LDAP but still provide access only
to certain hosts.

On the general handling of SSH keys however, the important thing to
remember is that the private key belongs to the USER (Rob and Adam have
both implied this in their posts).  Administering them centrally means
that there are sysadmins that (literally) have everyone's keys[2].  You
may find that some of your users would create their keys with a greater
strength than your default policy might provide, and you shouldn't
really tell your users that they have to make their key less safe than
they want to. :)

[1] pam_ldap and nss_ldap




Re: SLES9 SP3 64 bit service question

2007-03-15 Thread Vic Cross
On Wed, 2007-03-14 at 10:42 -0500, Tom Duerbusch wrote:
 Thanks...that was interesting, but seemingly useless <G>.  Have no idea what it 
 is trying to tell me.

Just because you don't understand what it's saying, doesn't mean it's
useless... :)

 Just for kicks, here are the results:

 linux40:~ # SPident -v -v -v

 Summary (using 549 packages)
 Product/ServicePack      conflict     match    update  (shipped)
 SLES-9-s390x             0    0%   278 50.6%     0    (1555 17.9%)
 SLES-9-s390x-SP1         2  0.4%    63 11.5%     0    ( 529 11.9%)
   - XFree86-Mesa         4.3.99.902-43.22   4.3.99.902-43.37
   - XFree86-Mesa-32bit   9-200407011411     9-200501052045
 SLES-9-s390x-SP2         2  0.3%   127 23.1%     0    ( 684 18.6%)
   - XFree86-Mesa         4.3.99.902-43.22   4.3.99.902-43.48
   - XFree86-Mesa-32bit   9-200407011411     9-200506070135
 SLES-9-s390x-SP3         0    0%   271 49.4%     0    ( 793 34.2%)

  Legend for Package Details:
   -  conflicting package (found < expected)
   +  updated package (found > expected)

 CONCLUSION: System is NOT up-to-date!
   found    SLES-9-s390x
   expected SLES-9-s390x-SP3

 linux40:~ #

It's telling you everything you need to know.  Your system has installed
versions of the XFree86-Mesa and XFree86-Mesa-32bit packages for which
both SP1 and SP2 had updates.  For some reason, these updates were not
applied.  As soon as you either update or remove those two packages,
things will be okay.

Cheerio,
Vic Cross





Re: vnc on RHEL4

2007-03-07 Thread Vic Cross
On Wed, 2007-03-07 at 15:48 -0600, Roach, Dennis wrote:
 Does anyone have an idea on how to get vnc to work under RHEL4?
 I would like for all of the users to be able to come in on 1 port and use 
 their Linux ID and password.
 I can get it to use the ID and password, but not the same port.

There is a way to run VNC out of (x)inetd that does EXACTLY what you
want.  There's a number of HOWTOs on the 'Net, including:

http://linuxreviews.org/howtos/xvnc/
http://gentoo-wiki.com/HOWTO_Xvnc_terminal_server

The Gentoo one works for me because I'm a Gentoo freak. :)

The one caveat to this process is that you cannot disconnect your VNC
session and expect to reconnect to it later, as you can with the
single-Xvnc-server-per-user method; running through (x)inetd, shutting
down your vncviewer will kill the Xvnc session.
kill the Xvnc session.  You do get the benefit of only having Xvnc
servers running in support of real connected users, rather than having
an Xvnc running for all the users that might ever connect.
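For the record, the xinetd service definition usually looks something like the 
sketch below (geometry, depth and server path are assumptions that vary between 
VNC implementations; the HOWTOs above cover the matching /etc/services entry 
and the XDMCP setup on your display manager):

```
# /etc/xinetd.d/vncts -- hypothetical example
service vncts
{
        disable     = no
        socket_type = stream
        protocol    = tcp
        wait        = no
        user        = nobody
        server      = /usr/bin/Xvnc
        server_args = -inetd -query localhost -once -geometry 1024x768 -depth 16
}
```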

I feel compelled to make the standard disclaimer -- running VNC on
zSeries Linux is probably not the best use of your MIPS, and you are
likely to be disappointed in performance if you try and support a number
of users...  However, it's a *great* system to set up to show off how
Linux is Linux, even on the mainframe!  :)

Cheerio,
Vic Cross





Re: gpg 2.0 on VT?

2007-02-21 Thread Vic Cross

Tom Duerbusch wrote:

I'm back on the GPG 2.0.2 project.

When doing a gpg2 --gen-key, I get:

You need a Passphrase to protect your secret key.  <==note, it never
gave me a chance to respond


Gtk-WARNING **: cannot open display:


snip

"cannot open display:" sounds like I need to be on an x-terminal.
The program this is being sent from seems to be pinentry-0.7.2, which
required gtk and glib (1.2.10) to be installed.


I have seen this happen with SSH key passphrases.  On some systems the
default SSH_ASKPASS points to something like gnome-ssh-askpass,
which will want to open a GUI password prompt.  It shouldn't happen that
way if you don't have a DISPLAY environment variable set, though...

I don't know how to get around this.  I think I should be able to
generate gpg keys with a VT100 type terminal.  Perhaps gpg 2.0 is a
gui-only product?


I believe that something is giving this pinentry program the impression
that there should be an X display available.  This would be the first
thing to check; perhaps you are connecting from a Windows machine using
PuTTY, and the session config is set to forward X, but you're not
running your X server right now?

Another alternative might be to see where the system nominates
pinentry as the program to run to receive passphrases.  The solution
might be to just switch to a different prompter, such as the one used by
ssh-askpass.  Or have a look at this fellow's experience, which I found
via Google, to see if it's relevant:

http://brondsema.net/blog/index.php/2007/02/06/keychain_gpg_agent_pinentry_problems

Cheerio,
Vic Cross




Re: Layer 2

2007-01-01 Thread Vic Cross

Lee Stewart wrote:

Is there a way to tell if Layer 2 support is enabled -- OUTSIDE OF YAST.
 Said another way, does anyone know what file(s) are updated when Layer
2 is enabled?


Lee, you can look in the /sys file system for the layer2 flag...

[EMAIL PROTECTED] ~]# cat /sys/bus/ccwgroup/devices/0.0.0f[01]0/layer2
0
1

0f00 is the Layer 3 NIC on this system, 0f10 is Layer 2.

Cheerio,
Vic Cross



Re: Booting from mirrored DASD

2006-12-05 Thread Vic Cross

Rob van der Heij wrote:
 Linux identifies the volumes by address rather than label

and John Summerfield replied:

RHEL does indeed use labels.


John, the labels you refer to are filesystem labels, not volume labels.
 The filesystem labels are not visible until *after* the DASD driver
has made all the disks present.  Rob is lamenting that the DASD driver
is unable to use VOLSER to map disk device to block node (i.e. we're not
able to do something like dasd=L0A201,L0A202,L0A203 on the kernel
command line).  Well, maybe "lamenting" is the wrong word... :)

Cheerio,
Vic Cross



Re: Detection of DASD volume linked read only

2006-10-29 Thread Vic Cross

Lee Stewart wrote:

Depends on when you want to check it...

After it's activated, look at /proc/dasd/devices
0.0.0200(ECKD) at ( 94: 0) is dasda   : active at blocksize:
4096, 600840 blocks, 2347 MB
0.0.0190(ECKD) at ( 94:24) is dasdg   (ro): active at blocksize:
4096, 19260 blocks, 75 MB

Note 190 (/dev/dasdg) above is flagged (ro) since it's a read only
device.   (This is SLES9).

In that case it's (ro) because the dasd driver has been told to activate
it so.  If the DASD driver has not been told RO, it will happily
activate it as RW and then fail dismally with any attempted writes (the
kernel errors Eric mentioned).

I think what Eric requires is a way to determine if VM has given it to
Linux RO, in order for Linux to activate it the right way...  The
hcp/vmcp methods mentioned will be the way to do this.

Cheerio,
Vic Cross




Re: zLinux experience

2006-10-29 Thread Vic Cross

Mark Perry wrote:

Filesystems live in LVs and LVs in VGs which are made up from PVs(=Disks)

1) It is the LV that is striped and not the Filesystem.
2) A striped LV can be expanded providing it uses the same PVs in its
VG. (This implies you did not fully utilize the PVs to begin with of
course! and that is the real catch.)
3) A Filesystem can be dynamically extended providing there is room in
the LV.

Corollary to 2):  a striped LV can be extended if the space in the VG
into which the LV will be extended is carried by the same number of PVs
as stripes in the LV.

I think an example is required.  :)

A VG comprises 4 PVs.  An LV is created that occupies all the space in
the VG, striped across all 4 PVs (a stripe count of 4).  Expanding the LV
would require additional PVs to be added to the VG; the LV can be expanded
if new PVs are added to the VG in multiples of 4 (the stripe count of the
LV).  If only two new PVs are added, the existing 4-stripe LV cannot be
expanded into that new space; however, a new LV with a stripe count of 2
could be created.
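In LVM2 command terms that example might look like the following (device names 
are placeholders; -i sets the stripe count):

```
# Four PVs, one LV striped across all four
pvcreate /dev/dasdb1 /dev/dasdc1 /dev/dasdd1 /dev/dasde1
vgcreate datavg /dev/dasdb1 /dev/dasdc1 /dev/dasdd1 /dev/dasde1
lvcreate -n datalv -i 4 -l 100%FREE datavg

# Extending the striped LV needs new PVs in a multiple of the stripe count
vgextend datavg /dev/dasdf1 /dev/dasdg1 /dev/dasdh1 /dev/dasdi1
lvextend -l +100%FREE datavg/datalv

# With only two new PVs added, create a separate 2-stripe LV instead
lvcreate -n datalv2 -i 2 -l 100%FREE datavg
```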

I think the big issues with resizing striped LVs are captured somewhere
in the LVM doco...

Wrt performance gains from striping, you would need to be observant of
your stripe size to see that files did indeed fall across stripe
boundaries and were written across stripe members.  Lots of small files
might make this difficult and/or wasteful, but if you have large files
you'd be okay.

Cheerio,
Vic Cross




Re: Is OSPF limited on zLinux?

2006-10-19 Thread Vic Cross

Massimiliano Belardi wrote:

Guys,
I've a question. I've performed several installations of Quagga with VIPA 
using two DIFFERENT subnets for the real interfaces (eth0 and eth1) and another 
subnet for VIPA.

This is a good configuration -- maximum opportunity for redundant
pathways.  Depending on the routing setup outside Linux though, you
could have eth0 and eth1 in the same subnet without loss of function
(you would need to make sure that the neighbouring router was setup
redundantly using VRRP, HSRP or equivalent and your OSAs were attached
to different switches).

Why can z/OS TCPIP work with VIPA using two real interfaces on the same 
subnet???


I don't know of any reason why Quagga couldn't do this.  See above -- as
long as you ensured that with your interfaces in the same subnet you had
no single-point-of-failure, you'd be fine.  Was there a specific problem
you faced that forced you to configure your Linux OSAs in separate subnets?

What I hope you're *not* suggesting is that the z/OS VIPA is in the same
subnet as the interfaces.  While this will work, you are not really
providing an opportunity for OSPF to provide you with redundant pathways
to your VIPA [1].

What about Linux on Intel?


Quagga works the same no matter what platform it is built on (subject to
the capabilities of the network hardware of course).

Cheerio,
Vic Cross

[1] Digression: That configuration would possibly work better without
any OSPF at all, by just letting the neighbouring router ARP to find your
VIPA.  I have not tested or even set up such a configuration, and I
believe it is still IBM's recommendation that VIPAs be in a separate
subnet advertised using RIP or OSPF...



Re: Server Time Protocol support for zSeries

2006-10-14 Thread Vic Cross

Rob van der Heij wrote:

There's a system TOD that is set at POR time (from the clock of the
PS/2 or so?) Unless you have the gear that will synch that from true
time, it will be off some amount.

So does this provide our cheap-as-chips solution?  Run an NTP client on
the SEs and HMCs?  ;-)

Cheerio,
Vic Cross



Re: can LINUX/390 and z/OS LPARS share devices?

2006-09-06 Thread Vic Cross

Post, Mark K wrote:

Doesn't that require playing games with genning different subchannel
addresses to different LPARs, but using the same device numbers in those
different LPARs?


Not that I've been the bunny responsible for IOCDS, but as I recall it
was pretty simple and standard EMIF to map the same device address
numbers from each LPAR to different UNITADDR/subchannel.  I think it was
documented in the OSA Express Redbook/Redpaper and might even be
mentioned in the zSeries Connectivity Reference.

I once helped a hardware buddy do an IOCDS for EMIF CTCs, which was
significantly more mind-warping but along the same lines.  I think it's
just a case of how much complexity you put into the IOCDS to make it
easy for the network people to use the devices.

Alan Altmark wrote:

You *can* have the same device number in different LPARs, but sysprogs
who
do that are eventually found at the bottom of the nearest lake.


To quote a former politician from down this way, "please explain?"  I've
never found any need to over-complicate things by having different
device numbers for different LPARs (over-complicating things for z/OS
network people is a dangerous exercise IMO, which comes mainly from the
time when I was one :D ).  If they want to talk to the first OSA, they
put in E000 (e.g.) in the VTAM TRLE for TCPIP -- works for all LPARs
(also can be done across different CECs, which is great for DR).  Second
OSA is E100, and so on.  Of course, like I said, I wasn't doing the
IOCDS... :)

In fact, if my sysprogs *had* tried to push different device addresses
on me, I would have been looking for that lake you mention and telling
said sysprogs how fashionable concrete shoes are this season. :)

When you're sharing DASD, you don't have different device numbers to
refer to a pack from different LPARs, right...?  Why is it a bad idea
for OSAs?

Cheerio,
Vic Cross



Re: Missing /etc/motd

2006-07-21 Thread Vic Cross
Marcy Cortes wrote:
 Well, there is a Protocol 2 in there.  But that gives me a clue as to
 who to bug about it - we have a WF version of openssh that we run...
 I'll have to go check on a server without that!

Marcy, while you're checking sshd_config, have a look to see if the
following is there:

PrintMotd yes

:)

Cheers,
Vic Cross




Re: CentOS DASD format

2006-07-21 Thread Vic Cross
Richard Pinion wrote:
 Have done the fdasd command?

fdasd comes *after* dasdfmt...

 [EMAIL PROTECTED] 7/21/2006 10:20:13 AM 
 I am attempting to install CentOS (Red Hat) v4.2 in a zSeries LPAR
 (standalone).  I IPLed the CD-ROM and I am at the point of having to
 format the DASD.  I entered:

 Dasdfmt -b 4096 -d cdl -f /dev/dasda

Ummm..  I've done several CentOS installations, and I don't recall
having to manually format the DASD on any of them...  When you
SSH/Telnet into the installation system, you end up in the system
installer (Anaconda), which is pretty much a bouncing-ball thing,
including the setup of the DASD.

 It comes back with a message that the file is not available.  When I
 look in the /dev directory there is no dasda, although in looking in
 /sys I can find the file that describes the disk drive as well as the
 file that shows it online.

It's likely that the installer hasn't set up the device nodes for your
DASDs yet -- too early in the piece.

 Where does CentOS put the device descriptor and how do I address it in
 the 'dasdfmt' command?

If you *really* need to format the DASDs manually, you will need to
create the device nodes yourself.  Check to see that the proc filesystem
is mounted, then issue "cat /proc/dasd/devices" to see what device
major/minors have been allocated to the DASDs.  Then,

mknod /dev/dasdan b maj min

with appropriate substitutions for the n, maj and min values, for each
of the nodes you want, will get you started.
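For example, a sketch that turns /proc/dasd/devices-style lines into the
matching mknod commands (the two sample lines in the here-document are
made up -- feed in the real file instead):

```shell
# Generate mknod commands from /proc/dasd/devices-style lines.
# The here-document is illustrative sample data only.
awk -F'[():]' '/is dasd/ {
    maj = $4; min = $5
    gsub(/ /, "", maj); gsub(/ /, "", min)   # strip padding spaces
    split($6, g, " ")                        # g[2] holds the dasd name
    print "mknod /dev/" g[2] " b " maj " " min
}' <<'EOF'
0.0.0200(ECKD) at ( 94: 0) is dasda : active at blocksize: 4096
0.0.0201(ECKD) at ( 94: 4) is dasdb : active at blocksize: 4096
EOF
```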

But you really should not need to do any of this.  Just follow the
installer's prompts and it should get you there.

Rob's comments are gold as well: ensure that the DASDs are online (cat
the appropriate online pseudofile in the sys filesystem directory you
found; a result of 1 is good) and that the DASD modules are loaded.

Cheers,
Vic Cross




Re: CentOS DASD format

2006-07-21 Thread Vic Cross
Melancon, Ruddy wrote:
 Is there a way to restart the loader?

Not easily that I know of.  Re-IPL of your media will probably be the
easiest/quickest way.

I find that it can take me several IPLs to get used to the installation
process of an unfamiliar distro.  Time is well spent getting familiar
with the parmfile syntax, so that you can get to the point of SSH/Telnet
into the installer with few or no console commands having to be entered.
 In your Load from CD-ROM case you could experiment with writing your
own CD containing the installation files (kernel, parmfile, initrd) --
use a CD-RW while getting the parmfile right, unless you need a new set
of coasters!  :D

Cheers,
Vic Cross

PS: Hoping this is helpful...  It's *real* late (early?) over here and I
might be starting to ramble :)



Re: Linux guest console via another guest?

2006-06-20 Thread Vic Cross

On 16/06/2006, at 1:02 pm, Ranga Nathan wrote:


Thank you all. I have always been curing sick guests by LINKing their
minidisks to other guests and mounting the partitions.
Occasionally I need to do something in the console.
I tried telnet via putty. Ok, 3270 is better. I am now resigned to it.


Ah, surely z/VM's virtualisation of the Integrated VT220 Console will
be out soon...  Then, we'll be able to use the Secure Shell SVM
(which also must be coming Real Soon Now) to SSH to the consoles of
our Penguins...

Cheers,
Vic (got to think of a cool name for *my* alter-ego, since Chuckie is
already taken)




Re: Exporting volume group to another guest - not working

2006-05-22 Thread Vic Cross

On 23/05/2006, at 7:59am, Ranga Nathan wrote:


It seems that LVM is expecting
same dasd identifiers such as /dev/dasdh1 etc for its PVs.


I and my colleagues have done this operation several times and never
experienced what you describe.  We have exported from a system with
less than ten DASDs and imported to a system with hundreds of DASDs
-- so the device nodes were very definitely different...

I've even moved a VG from system to system without doing the vgexport
(although I do concede that I was VERY lucky with that one).

Possibly you didn't do the vgscan after attaching the DASD to the
new system (and before you do vgimport)?  It should come back and
tell you that an exported VG was found.  You may also have trouble if
the VG you are importing has the same name as a VG that previously
existed on the system (but I would have thought vgscan would sort
that out).

Cheers,
Vic Cross



Re: upgrading to sles9

2006-05-04 Thread Vic Cross

On 03/05/2006, at 9:31 pm, Levy, Alan wrote:


Isn't the LVM information stored in the root filesystem ?


There is configuration data kept in /etc/lvm/, managed by LVM.  Part
of what the vgexport does is to remove that configuration data for
the VG being exported, so that the system knows not to expect that VG
to be present in the future.  vgimport does the opposite when the VG
is made accessible on the new system.

If you DDR the volumes, when vgscan is run on the new system it
should just say "oh, well, a bunch of new LVM stuff, I'll use it".

Mark is right, most of the really critical data about LVM is kept in
the metadata on the PVs.  Using vgexport/vgimport is just, well,
nicer IMHO :).  Probably not a big issue in your situation, where
you're throwing away the old server, but for moving VGs between
persistent systems it's pretty much essential.

Cheers,
Vic Cross



Re: Google out of capacity?

2006-05-04 Thread Vic Cross

On 05/05/2006, at 5:53am, Fargusson.Alan wrote:


A long time ago I read that they did TCO studies, and found it less
costly to buy lots of low cost hardware over buying fewer high cost
systems.


"A long time ago" is the point.  When I read similar, the server
count was around 8000 -- it would seem that they've grown
considerably beyond that now.  I doubt they've updated their TCO
analysis accordingly...  :)

Cheers,
Vic Cross



Re: upgrading to sles9

2006-05-03 Thread Vic Cross

G'day Alan,

On 03/05/2006, at 1:55am, Levy, Alan wrote:


I would like to create a new sles9 server and then do a conversion but
am not sure about the best way of accomplishing this. Is the
following feasible?

1. Get 29 new mod-9 packs and cp format them
2. Create a new userid with 29 packs in the VM directory.
3. Clone an exising sles9 server (ddr existing server to first new
pack)
4. Bring up server  configure it (different IP address than
production,
different OSA addr, etc)
5. Activate the 28 new packs with yast.
6. Create lvm with the 28 packs
7. Shut down the new server and DDR the production 28 packs to the new
server.
8. Install software needed for the new server.
9. Bring up server and test
10. Shut down production server  change ip  osa addresses of new
server
11. Bring up new server as the new production server.


Check out the vgexport/vgimport process.  This would eliminate 28
DDRs by allowing you to logically detach the LVM VG from your
existing production system and attach it to the new system:

1. Clone an existing SLES 9 server, configure it, install software
2. Shut down application on current production server
3. umount /usr/local
4. vgexport system
5. Shut down current production server
6. Attach 28 LVM volumes to the new guest
7. vgscan
8. vgimport system
9. vgchange -ay
10. mount /usr/local
11. Start application on the new server
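The storage hand-over in those steps could be rehearsed as a dry run
first (a sketch under assumptions: the VG is named "system" as in the
list above, and the LV device path /dev/system/local for /usr/local is
invented for illustration):

```shell
# Dry-run sketch of the VG hand-over; replace the echo in run() with
# an actual invocation when you are ready to do it for real.
run() { echo "WOULD: $*"; }
# --- on the old production guest ---
run umount /usr/local
run vgchange -an system          # deactivating first is prudent
run vgexport system
# --- on the new guest, once the 28 volumes are attached ---
run vgscan
run vgimport system
run vgchange -ay system
run mount /dev/system/local /usr/local   # hypothetical LV device path
```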

I can see two issues with this approach:
1. Not knowing what's under /usr/local, if your application is
installed there you'll be a bit stuck (this is one reason why most
admins *never* install apps and data in the same filesystem).  There
are many ways to get out of this though.
2. You'll be bringing across the old LVM1 metadata from SLES8,
instead of picking up a shiny new LVM2 configuration.  Should not be
an issue though as SLES 9 can talk LVM1, and I think you can update
LVM metadata without having to recreate anything.

It works for me, but see if it fits your needs.  I do it quite a lot
to minimise outage times for data relocations and new filesystem
creations.  With good preparation I imagine you could minimise your
downtime to little more than the duration of the application restart.

Cheers,
Vic Cross



Re: simple hipersocket communication between LPARS, pls help

2006-04-26 Thread Vic Cross

G'day Anna,

On 26/04/2006, at 5:43 pm, Fuhrmann Anna wrote:


I simply don't know how to figure it out.


Well, we're here to help :)


Two partitions (z/os and RHEL4) involved. want to communicate.

One interface is OSA Express, working fine, VIPA- and
omproute-configured.


Is this the z/OS system that has VIPA and OSPF, or both z/OS and Linux?


The other one should be hipersocket, no need for VIPA and for dynamic
routing, as far as I see.

What I don't quite see at the moment: how do I *prevent* the z/os-
LPAR
from choosing the usual way
(of being routed): is it by defining a static route for the
hipersocket
interface in the Profile-dataset? BSDROUTINGPARMS or BEGINROUTES or
whatever?


Defining a static route is one way.  You need to take care to ensure
that the static route is not imported into your OSPF domain and
exported to the rest of the network via OMPROUTE, or you may find
your z/OS system becoming a router for your Linux system...


AND: how and where do I do the corresponding thing for the Linux LPAR?

/etc/sysconfig/network-scripts/ifcfg-hsi0 is defined as follows -
and I
don't know if
these definitions are correct especially as to if it is correct when
HWADDR is empty.

DEVICE=hsi0
HWADDR=00:00:00:00:00:00
BOOTPROTO=static
IPADDR=192.168.60.4
NETMASK=255.255.255.192
NETTYPE=qeth
ONBOOT=yes
SUBCHANNELS=0.0.0a08,0.0.0a09,0.0.0a0a
TYPE=Ethernet


Don't worry about HWADDR, it's used on other platforms to distinguish
multiple network interfaces of the same hardware type.

If you choose to use static routing, you will need to create a file
called route-hsi0 that contains the detail of the route you wish to
create.  The format will be:

<vipa-address of z/OS> via <address of z/OS's HSI interface>

This will ensure that any traffic directed to the VIPA of z/OS goes
via HiperSockets.
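For instance, a hypothetical /etc/sysconfig/network-scripts/route-hsi0
(the VIPA 192.168.70.1 and the z/OS HiperSockets address 192.168.60.1
are invented values -- substitute your own):

```
# send traffic for the z/OS VIPA via the z/OS HiperSockets interface
192.168.70.1/32 via 192.168.60.1 dev hsi0
```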

If you had zebra or quagga set up on Linux to provide VIPA there, you
could define the HiperSockets to OMPROUTE and to zebra/quagga and let
OSPF handle the definition of appropriate routing entries.  In this
case, more-so than static routing above, you will need to take even
more care to ensure that the HiperSockets network is not visible to
the exterior network (LAN) unless it's REALLY what you want.

Be aware that all of this needs to be done with involvement from the
network/router people at your shop.  Creating internal networks
between systems can create a routing loop, and this can be very bad
for network operation (and your chances of ever getting the network
people to do you a favour in the future).

Cheers,
Vic Cross



Re: AW: simple hipersocket communication between LPARS, pls help

2006-04-26 Thread Vic Cross

On 26/04/2006, at 9:51 pm, Fuhrmann Anna wrote:

(in reply to me saying)

file called route-hsi0


to be created also in /etc/sysconfig/network-scripts ?


Yes, sorry, I distracted myself checking the answer and forgot to
give it! :)

I also forgot to mention that if you do nothing else other than add
the route to Linux, this will work bi-directionally (i.e. traffic
will be sent and received over the HiperSockets) *only* if the
connections are initiated by Linux, and if you are not using source
VIPA on Linux (which you have said you are not).  If connections are
being initiated from z/OS, you will need to add a static route for
Linux's IP address to z/OS's routing table otherwise its outbound
traffic will go over the OSA.


This will ensure that any traffic directed to the VIPA of z/OS
goes via HiperSockets.


This is fine, I can do that in any case.

Is it also necessary if the applications (we plan to implement) use
the IP-address of the HSI interface of z/OS directly, and not the VIPA-
Address of z/OS? So that every conversation
from-Linux-to-z/OS and back explicitly uses the HSI-Address?


You're right, if you do it that way you will avoid the need to code a
route to the z/OS VIPA on the hsi0 interface (or to Linux's IP in
z/OS TCPIP).  That is indeed one way to connect directly via the
HiperSockets (and exactly what's working for Steve as suggested in
his note).  Adding the static routes via HiperSockets gives you the
benefit of not having to make any application changes in order for
*all* the Linux-to-z/OS traffic to be sent over HiperSockets -- if
that is not what you want, and it's fair enough that you might only
want to send certain traffic over HiperSockets, then addressing to
the interface gives you control over what traffic will use HiperSockets.

IMO, it's probably the only situation where it's okay to use an
interface address instead of a VIPA -- if your HiperSockets isn't
available you're likely having bigger problems with your system than
whether two LPARs can talk to each other!  :D

It does, however, create a temptation to use one of my pet peeves:
hard-coded IP addresses.  What happens if you need to change the IP
subnet allocated to your HiperSockets?  Your applications may all
have to change.  Make sure to define the interface IP addresses to
DNS to minimise the number of changes you might have to make in the
future[1].  Then give the apps people the name rather than the IP
address -- it's easier for you to change (or arrange to have changed)
one DNS entry than for several application people to make possibly
dozens of application changes...


Thanks so much ...


You're welcome!  Hope we've been helpful.

Cheers,
Vic Cross

[1] Network folks often have several DNS entries for a single piece
of network kit, each different name referring to a different
interface (it's easy to come up with a usable naming convention too,
like the hostname zephyr would have zephyr-eth0, zephyr-hsi0, and so
on).



Re: SuSE 2.2.16 kernel to test

2006-04-19 Thread Vic Cross

Loren, I'm pretty sure that with a 2.2.16 kernel it's the SUSE 7
system that actually predated SLES 7 in the s390 world -- might even
be the 6.4 beta, eek!  :)

On 20/04/2006, at 3:13am, Hall, Ken (GTI) wrote:


Pre-SLES8 systems used /etc/rc.config to configure everything
except routing information.

The three files you need to update are:

/etc/rc.config
/etc/route.conf
/etc/hosts

There are a couple of others that have to change if the host name
changes, but if only the IP address and gateway are changing, these
are the ones.


In those early releases, changes to /etc/rc.config don't take effect
until after you run SuSEconfig.  I think that SuSEconfig also
manages /etc/hosts (but maybe that's just on later releases), so I'd
suggest making what changes need to be made in /etc/rc.config, run
SuSEconfig, and see what else needs to change.

Make sure that *somewhere* you update your default gateway address.
I think that's elsewhere in /etc/rc.config, but might be in /etc/
route.conf.

You can use the mount-the-copied-disks-to-a-running-system method,
but you will need to mount all the copied system's disks -- at the
right mount points -- and chroot into the copied system in order to
run SuSEconfig.  Example:

mount /dev/dasdk1 /mnt/copy
mount /dev/dasdl1 /mnt/copy/usr
mount /dev/dasdm1 /mnt/copy/var
chroot /mnt/copy    # -- all file accesses are now from the copied
                    #    system's disks
vi /etc/rc.config
SuSEconfig
exit                # -- return to your running system
umount /mnt/copy/usr /mnt/copy/var /mnt/copy

Replace the DASD names with the actual names you get on your system,
of course, and don't forget to detach the disks from your running
system when you're done.

Cheers,
Vic Cross



Re: Booting into single mode

2006-04-09 Thread Vic Cross

I'm still catching up with messages, sorry...

On 05/04/2006, at 2:27am, David Booher wrote:

Your second suggestion is what I do normally, but now that some of  
these

new zLinux systems are installing with LVM, this is making my life
miserable when it comes to mounting on a rescue system and fixing a
problem.


Instead of linking the dead Linux's disks to a working Linux,  
thereby running the risk of not linking a disk that was needed to  
bring up an LVM, I have a one-pack Linux rescue system that all my  
Linux guests can link to.  Its DASD driver is set up to try and
bring online all the common DASD ranges we use.  My rescue process is  
to link the one-pack disk from my dead Linux and IPL it instead of  
the usual IPL device.  LVM still creates some fun, but I build the  
one-pack system without any dependency on LVM so that there's no  
conflict with LVM filesystems of the system-to-be-rescued.


This approach works for us since we have a few guests that have  
literally hundreds of DASDs, and doing filesystem recovery by linking  
all those disks to another system would be... difficult.  It also  
makes the security folk sleep a little easier knowing that I don't  
have a back-door überguest who is allowed to link to everyone's disks.


Note that this is very similar to IPLing your dead Linux from the  
installation files.  Having a pre-built rescue system though allows  
you to have some certain things set up (like IP configuration, backup  
clients, etc) that might help you in your recovery task.


Cheers,
Vic Cross


Re: cygwin x11

2006-02-21 Thread Vic Cross

Bruce Gui wrote:


yes, the local system is Windows, and I use PuTTY to connect to remote
Linux via SSH. X forwarding is enabled.



X forwarding may be disabled on the host to which you are connecting...


but when I enter xterm , the remote host display:

Xlib: connection to mylocalIPaddr:0.0 refused by server
Xlib: No protocol specified
xterm Xt error: Can't open display: mylocalIPaddr:0.0

if i modify $DISPLAY to localhost:0.0, then "xterm Xt error: Can't open
display: localhost:0.0" appears when i issue xterm



If X11Forwarding is working correctly through SSH, you should need to
make no changes at all to DISPLAY.  This is part of what SSH does for
you.  You can verify this by:

$ echo $DISPLAY
localhost:10.0

The number may be greater than 10, but is definitely not 0...


I remember there should be a listening port 6010 on the remote host, and
DISPLAY=localhost:0.0, that's to say: the local cygwin is a X11-server,
the remote host is a X11-client, remote host connect its port 6010 to
local host's port 6000, so
the display on the remote host will be redirected to local host.



If there is not a listening port on the remote host that corresponds
with your SSH session, then the server is not setting up its end of the
X forwarding connection: either it is rejecting your SSH client's
request because of configuration (sshd_config does not say
"X11Forwarding yes", since the default is no) or some other problem
exists at the server.

X display ports start at TCP port 6000.  By default SSH starts
allocating display numbers for forwarding starting at 10, which is why
you see port 6010 tunnelled through to your SSH client if your DISPLAY
variable says :10.0.  Because you may not be the first person to tunnel
X via SSH on that host, you must always let SSH handle it for you (if
you are the second, for example, you will get display number 11 and TCP
6011 will be tunnelled to you).

If a TCP X display port is open against your sshd process but your
DISPLAY variable doesn't correspond, check your shell profile scripts
(/etc/profile, ~/.bash_profile, etc) for a command that sets DISPLAY.
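As a small sketch of that numbering (the DISPLAY value here is a made-up
example):

```shell
# Derive the forwarded TCP port from a DISPLAY value like "localhost:11.0":
# the port is 6000 plus the display number after the colon.
DISPLAY=localhost:11.0
num=${DISPLAY##*:}      # "11.0"
num=${num%%.*}          # display number: "11"
echo "display :$num tunnels over TCP port $((6000 + num))"
```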

Cheers,
Vic Cross



Re: Questions regarding zipl.conf

2006-02-12 Thread Vic Cross

Bernard Wu wrote:


The adresses are real addresses.  Also, I am using a mixture of both mini
disks and dedicates.  The LVM's ( 2500-2504 ) are dedicated and the others
are mini-disks.

DASD 2061 CP SYSTEM VMD001   2
 .
 .
DASD 2500 ATTACHED TO LNXCAPD  2500 R/W 0X2500
DASD 2501 ATTACHED TO LNXCAPD  2501 R/W 0X2501
DASD 2502 ATTACHED TO LNXCAPD  2502 R/W 0X2502
DASD 2503 ATTACHED TO LNXCAPD  2503 R/W 0X2503
DASD 2504 ATTACHED TO LNXCAPD  2504 R/W 0X2504



Linux does not need to know about the real device numbers -- it sees
only the virtual device numbers.  The fact that you've DEDICATEd the
DASDs with the same virtual device number as it's real device number is
probably helping to confuse you. ;)

In the directory for your LNXCAPD guest, the minidisk you have defined
on real DASD 2061 will have a virtual device number.  This is the number
that must appear in the "dasd=" line.  It does not need to be the same
number as the real device number of the host DASD (and should not really
be: after all, the minidisk can reside on any suitable DASD).  As Mike
and Carsten both indicated, the order of numbers in the "dasd=" line
will determine which DASD shows up as dasda, dasdb, dasdc, etc.

<rant type=minor>
If the installer had added the "dasd=" parameter line it used to the
zipl.conf it created, you might have had some help toward knowing what
was needed.
A missing "dasd=" parameter in zipl.conf has caught me out on a
couple of occasions.  After install the system boots fine (the installer
having built the initrd/boottext correctly), but the first time you run
zipl (say, for a kernel update) stops the machine from booting.  When
the "dasd=" parameter is not provided the driver senses all DASD, and when
you use the de-facto standard of Linux disks starting at 200 your CMS
disks (191 and friends) pop in ahead and b0rk up your DASD naming...
</rant>

If you have moved part of your system (/usr, /var, etc) into the LVM,
you will need to run mkinitrd to ensure that the LVM activation commands
are added to the initrd.  This will make sure that all required
filesystems are available for the boot to complete.
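To make that concrete, here is a hypothetical zipl.conf stanza with the
"dasd=" parameter included (the device numbers and file paths are
invented examples, not Bernard's actual configuration):

```
[linux]
    target = /boot/zipl
    image = /boot/image
    ramdisk = /boot/initrd
    parameters = "dasd=0200,0201,0202 root=/dev/dasda1"
```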

Cheers,
Vic Cross



Re: Is SPident broken ?

2006-02-06 Thread Vic Cross

G'day Bernie,

On 07/02/2006, at 6:53am, Bernard Wu wrote:


With SLES9 +SP3, I get :

CONCLUSION: System is NOT up-to-date!
  foundSLES-9-s390x
  expected SLES-9-s390x-SP3

The same minimal install produces 2 different results.

Is SPident -vvv broken ?


There was discussion here about this shortly after SP3 came out.
Basically, the SPident package shipped on the SP3 CDs was in fact
broken.  An updated version was shipped via Online Update when the
problem was found.

Cheers,
Vic



Re: Move an LVM

2005-12-15 Thread Vic Cross
On Thu, Dec 15, 2005 at 05:07:12PM -0500, Mark Post wrote:
 It should be doable, but going from an LVM-1 system to an LVM-2 system has
 sometimes caused issues for people in the Intel world at least.  Make sure
 you have a good backup.

Good tip.  Although I've done this a few times, it's not (to my memory)
been between LVM-1 and LVM-2.

 A lot of the LVM information is written as metadata on the physical volumes
 (PVs) themselves.

This is true; the rest is configuration information in your filesystem
(usually under /etc/lvm/).  The vgexport/vgimport process takes care of
this for you.  Files under /etc/lvm keep track of which VGs and LVs are
expected to be present when LVM starts -- if you simply remove the
DASDs, you're likely to get errors when you next run a vgscan/vgchange.

vgexport also changes some of the metadata on the volumes so that if
somebody happens to do a vgscan and/or vgchange before you've detached
the DASD, you won't get the VG unexpectedly reappearing on your system.
These changes are undone by the vgimport on the destination system.
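Sketched end-to-end, the move looks something like this (VG, LV, and mount point names are made up for illustration; LVM-1's vgimport also wants the PV names listed, so check your man pages):

```shell
# --- on the source system ---
umount /mnt/data
vgchange -a n datavg     # deactivate the volume group
vgexport datavg          # flag the PVs as exported in the on-disk metadata
# detach the DASDs here and attach them to the destination

# --- on the destination system ---
vgscan                   # let LVM notice the newly-arrived PVs
vgimport datavg          # clear the export flag, adopt the VG
vgchange -a y datavg     # activate it
mount /dev/datavg/datalv /mnt/data
```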

Cheers,
Vic Cross



Re: VM Mdisk cache and Linux disk caching

2005-12-15 Thread Vic Cross
On Tue, Dec 13, 2005 at 11:19:23AM +0100, Carsten Otte wrote:

 I second your thoughts regarding block vs track caching,
 but I doubt that a scenario exists where MDC for non-shared
 mdisks outperforms reasonable distribution of the available
 storage to the linux images. Would you care to show such
 measurement?

It's likely that there is not one. :)  The key point you've stated (and
what I believe you've missed in Barton's message) is the reasonable
distribution of the available storage.  Basically, what is reasonable
for one set of guests may not be the most effective for another set.

Barton wrote:
  Another part of this consideration, with xx servers, they
  are not all likely to be active at one time. So we need a
  way to move the cache storage from the idle servers to the
  active ones. If linux owns the cache, this is not possible.
  We control the linux server cache size
  by minimizing the linux servers, so that there is less room for
  cache.

and further:
  And there is no reason to believe that allowing disks
  that are only used by one server to be mdc cached is a
  bad idea. How else can you give a server a couple hundred
  MB of cache dynamically when it needs it?

For active and inactive here we are talking about DASD activity, not
whether or not the guest is logged on to VM.  In the case where you are
tuning your storage sizes to cache in Linux, to give acceptable
performance you have to size for peak load, which results in *all* of your
guests being allocated storage to use as cache.  Let's say 10 guests
with 20MB each as cache.  You've made a decision to burn 200MB of
storage across the environment as cache -- but it's in small bites that
*none* of the guests can make total use of in a burst of activity.

Allocating cache centrally (a la MDC) makes that total amount of cache
available to *any* of the guests that wish to use it in any amount.  A
system that needs a stack of cache for a time can borrow it from other
guests that aren't needing it during that interval.

To me it's another example of trading-off between getting absolute maximum
performance and the greatest flexibility in the environment.

Cheers,
Vic Cross



Re: Q.: VIPA implementation

2005-10-23 Thread Vic Cross

Miguel Veliz wrote:


I read several threads on this and apparently there is a need for a
routing daemon to be running.
Why is there the need for a routing daemon?



A routing daemon is needed to tell the other parts of your network how
to reach your VIPA(s).  Ideally, for maximum availability, your VIPAs
should not be in the same subnet as any of your physical interfaces (and
your physical interfaces should not be in the same subnet, but this is
becoming less important).  Because your VIPAs are in a different subnet,
your neighbouring router cannot reach your VIPA directly any more -- it
needs an entry in its routing table to know how to reach the VIPAs.


What exactly should this daemon do? Re-route incoming packets to the
physical interface that is up when the other one is down, dynamically?



Because you have multiple physical interfaces through which traffic can
reach your VIPAs, and you want failures to be routed-around
automatically, the routing daemon tells neighbouring routers how to
reach your VIPAs.  If one of your physical interfaces (or one of the
neighbouring routers) fails, the network will learn that that path is no
longer available and your VIPA stays contactable automatically.

I believe this to be key in a VIPA implementation.  Without a dynamic
routing daemon, you would have to recover manually (by recoding static
routes, etc).  IMHO, you have lost much of the value VIPA offers.

A dynamic routing daemon is not involved in the actual transmission of
packets.  All it does is provide information to neighbouring routers
about the networks you can reach, and provide information to your own
kernel regarding the networks that other routers can reach.


What is IPA takeover used for?



IPA takeover is the capability for another host in your environment to
take on the VIPA addresses of a primary host should that primary host
become unavailable.  It lets you implement a kind-of hot-standby
system where multiple servers running an application are available, but
only one (the one where the VIPA is active) is receiving the traffic.
If that host fails, IPA takeover allows the IP address(es) to be moved
to one of the standby hosts.


Is IPA takeover a manual takeover procedure?



I'm not sure about this -- I would hope it was automatic.  If it is
manual, I can think of a way to achieve the same effect
automatically :)


Do I need to set IPA takeover along with VIPA



No, IPA takeover is not required for your VIPA to work.


And last, any idea why I am not able to echo 1 or 0 to the enable
file? I issue the command:
echo 1 > /sys/bus/ccwgroup/drivers/qeth/0.0.2210/ipa_takeover/enable
There is no error shown or logged, but the set never happens. I do a cat
of enable and it comes back 0.



This probably means that some other configuration is required (like
setting the IP addresses that you are standby for).  I suspect you will
need to plug IP addresses into the add4 or add6 pseudofiles in
ipa_takeover to set it up.  Otherwise, there are some driver parameters
that can only be changed while the device is offline (fake_ll is an
example).
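On the standby host the setup might look something like the following -- the device address is an example and I'm going from memory on the add4 syntax, so verify it against the Device Drivers book:

```shell
# Register the addresses to stand by for, then enable the function
echo 192.168.1.0/24 > /sys/bus/ccwgroup/drivers/qeth/0.0.2210/ipa_takeover/add4
echo 1 > /sys/bus/ccwgroup/drivers/qeth/0.0.2210/ipa_takeover/enable
cat /sys/bus/ccwgroup/drivers/qeth/0.0.2210/ipa_takeover/enable
```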

Remember that you only need to do this if you want the IPA takeover function
-- that is, you have another host that you want to service your VIPAs in
the event of the failure of your main host.  In that case, the
manipulation of the ipa_takeover nodes will happen on your backup hosts,
not on the main host.

I'd suggest referring to the Linux Device Drivers manual on
DeveloperWorks (taking my own advice I'm doing the same, since it's been
a while since I looked at some of this).

Cheers,
Vic Cross



Re: Samba - multiple connections from a Windows box

2005-08-10 Thread Vic Cross
On Mon, Aug 08, 2005 at 10:33:57AM -0400, Jon Brock wrote:
 We are indeed using different userids and passwords for the different shares

Have you looked at Samba ACL support?  ACLs (Access Control Lists) allow
you to create a more complex security environment than can be created
using simple Unix user/group/world security.  Using a single userid
from the Windows box then, you can control which resources on the server
each userid has access to.

It might work like this: each of your users belongs to a group that
identifies their department, or security clearance, or whatever category
you wish to apply.  In your filesystem, you add an ACL to your files or
directories for each of the groups that is allowed to access the file,
specifying what level of access they are allowed.
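On the filesystem side the mechanics might look like this (the group and path are invented, and the filesystem must be mounted with ACL support):

```shell
# Give the "finance" group read access to a share subtree without
# touching the files' Unix owner/group permissions
setfacl -R -m g:finance:rX /srv/samba/reports
getfacl /srv/samba/reports    # show the resulting ACL entries
```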

While I don't have a complete picture of your whole security
requirement, I think that ACL support should be a help to you in this
case...

Cheers,
Vic Cross



Re: IP weirdness

2005-08-06 Thread Vic Cross
On Saturday 06 August 2005 04:38, Neale Ferguson wrote:
NF I have a guest LAN that connects a number of guests.
snip
NF All but one of the guests can talk to each other, except one
NF (I'll call X) which can only talk to the default gateway. If I ping the
NF others from X I get no response. The same goes when I go from the any
NF other of the guests except R to X.
snip
NF We are z/VM 4.4 0404. Guest X is SLES9 SP2, Guest R is SLES7, other
NF guests are Debian.

There are a few VMLAN fixes out since 0404RSU.  From recent experience I'd
suggest you get right up to date and try again (the qeth driver on the guests
other than X might need an update as well, although it strikes me as odd that
the SLES 7 system works talking to the SLES 9 SP2 given that its qeth is
probably older than the Debians'...).

The z/VM team maintains an excellent resource that talks about (among other
things) z/VM maintenance levels in support of virtual networking:

http://www.vm.ibm.com/virtualnetwork

Look for the "Virtual Switch and Guest LAN CP Maintenance levels" link.

NF The other weirdness I see is that all the guests have a subnet mask of
NF 255.255.255.192 but Q LAN DETAILS reports everyone as 255.255.255.0.

No need to be concerned about this.  At earlier levels the display shows the
natural network mask (i.e. as if you weren't subnetting) instead of the
actual subnet mask in use.  On z/VM 5.1 the subnet mask display is removed
(replaced by the MAC address, like the multicast display).

Cheers,
Vic Cross



Re: Hipersockets and VLANs

2005-06-30 Thread Vic Cross
On Tue, Jun 28, 2005 at 04:48:17PM -0400, David Kreuter wrote:

 Is this possible:
 10.1.1.x host in Real Switch; Real Switch trunk port via OSA to Vswitch
 in LPAR A with linux host 10.1.1.19; same linux host has a hipersocket on
 1.2.3.4 with VLAN enabled connecting to a linux host in LPAR B at 1.2.3.5
 with VLAN enabled. Linux host on LPAR A with VLAN enabled configures
 10.1.1.29; linux host on LPAR B with VLAN enabled configures 10.1.1.39.
 So now a packet can journey from 10.1.1.39 to real host 10.1.1.19 over
 hipersocket over LPARa's OSA??

I read this wrong first up.  I'm pretty sure now that you just want to get
from A to B via the Hipersockets rather than the OSAs.  That's not too
difficult actually; it's a SMONR[1], and after that a SMOR[2]. :)

Whenever you're dealing with multi-homed hosts, you have the problem of
getting the kernel to choose the right interface for the job (this is a
problem not unique to Linux, or mainframes).  You know Linux on LPAR A
by 10.1.1.19; from Linux on LPAR B, if you use that address you go over the
OSA -- not what you prefer.  You need to use 1.2.3.4 as a destination, but
that's not the address that's stored in DNS for Linux on LPAR A.

Your proposal of trying to get the Hipersockets to look like part of the
LAN is fraught with danger.  Don't look at doing that.  You would need to
look at the Linux Ethernet bridging code, and quite apart from whether it
is actually possible to bridge an Ethernet to a Hipersockets, there is massive
potential to cause real problems on the LAN environment.

The better approach is just to use the right destination IP address.  The
easiest way to do that would be to add each of the Linux guests to each other's
/etc/hosts file, using the Hipersockets IP address.  If you've got a number
of hosts attached to the Hipersockets though, this might get messy; a
different approach would be to define a second hostname in DNS for each system.
Clients on the LAN say "ssh lparA-linux"; you (from Linux on LPAR B) say
"ssh lparA-linux-hipers" (for example).
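For example, /etc/hosts on the LPAR B guest might gain entries like these (the second hostname is my invention):

```
# addresses from the scenario above
10.1.1.19   lparA-linux            # LAN address, same as DNS
1.2.3.4     lparA-linux-hipers     # Hipersockets address
```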

For the really adventurous, you could set up your own DNS in the mainframe
Linux environment.  That way you could have your own IP address mappings.
This only works well if your Linux systems are in their own DNS zone (because
you don't want to have to keep your own copy of everything else in the same
DNS zone).  There are other DNS tricks that can be done though; BIND 9 has
ways to provide different information depending on where the request came
from, which would be a creative way of addressing the issue.

VIPA can help the name resolution problem too.  By using a virtual address
as a destination, the best way to reach that address from any given host is
determined by routing.  This works best when a dynamic routing protocol is
used, so that everyone learns the best path to the virtual address from
wherever they are.  Unfortunately the amount of cycles that our typical
routing daemons consume makes them an unattractive proposition for large
installations.

 I know as an alternate I can do this by connecting both using a shared OSA
 port on LPAR A and LPAR B - but - I need all the justification I can muster
 for a hipersocket solution -

A shared OSA certainly would relieve you of the multihoming issues that using
a Hipersockets creates.  It obviously doesn't give you all of the Hipersockets
goodness though, and the issues of multihoming really are not so significant
that you should have to give up on using Hipersockets.

 Don't ask - network planner at client site does not want to introduce new
 subnets or networks - vswitching into z/VM LPARS is fine -

Then just be vewwy vewwy qwiet...  :)  You could use an arbitrary IP subnet
(like an RFC1918 private range -- if your company is using 10/8, then pick
something from 172.16/12 or 192.168/16) and don't try and advertise it or use
it as a backup routing path or anything else.  The systems attached to the
Hipersockets will be the only systems that ever need to know about it.

Of course I'd never *really* advocate doing such a thing, but it could be a
way to get an official network from the planners quick-smart ;)
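Configuring the Hipersockets interface on such a throwaway subnet is nothing special (device name and addresses are examples only):

```shell
# Only the Hipersockets-attached systems ever see this subnet;
# don't advertise it and don't route through it
ifconfig hsi0 192.168.99.5 netmask 255.255.255.0 up
```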

Cheers,
Vic Cross

[1] Simple Matter Of Name Resolution
[2] Simple Matter Of Routing



Re: Question for those C types out there

2005-06-14 Thread Vic Cross
On Mon, Jun 13, 2005 at 02:21:56PM -0500, Tom Duerbusch wrote:
 Yep, I do have the 'config.log'...thanks

 I took one look at it and saw the #define stuff and decided that was
 not a log file but I thought it may have been a temp file for some c
 code.

Tom,

That's part of how configure scripts work.  Some capabilities can be tested
by checking to see if a file is present or the like.  Other capabilities can
only be tested by compiling a bit of code and seeing what happens when you run
it (or seeing what error you get from the compiler when you compile it).

The code you see in the config.log is the small code snippets that are
compiled and (usually) run as part of the configure process.  So you weren't
seeing things!  :)  The config.log file is very thorough, so just to pick up
the message output you could redirect (or use "tee"), or use "script", as
already suggested.
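A quick illustration of the tee approach (the configure message here is fabricated for the demonstration):

```shell
# tee copies its stdin to a file while still passing it through, so
# you see the configure messages on the terminal and keep a copy
printf 'checking for gcc... yes\n' | tee config.out
cat config.out
```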

Cheers,
Vic Cross



Re: Sharing OSA GbE Ports

2005-06-01 Thread Vic Cross
On Tue, May 31, 2005 at 08:06:43AM -0700, Mark T. Regan, K8MTR wrote:
 We are looking at setting up a Linux complex on z900 that will have both 
 Internet connections and
 internal only connections. Can a OSA port be shared between Linux hosts that 
 are in unsecured and
 secured environments without compromising security integrity?

In a word, no...  How do you plan to have a single physical piece of cable
connecting to both a secure network and an unsecured network (please, don't
say by overlaying the IP subnets... :) )?

If your answer includes VLAN then that's different, and it becomes possible
since your Internet VLAN and your internal VLAN are logically separated.

 I.e. would someone be able to come in through the Internet side of the
 shared port and some how cross-over to the secured host that
 is sharing the same port, but in a different subnet?

If you are using VLAN, access between the VLANs is a question of IP routing.
If there is no system that has a connection to both VLANs, then you're okay.

If there is a system in both VLANs, it becomes a potential path between the
networks.  This has nothing to do with sharing an OSA, mind -- it's simple IP
routing (even if the multi-homed system has IP forwarding turned off -- from
the Internet, someone could log in to that system and then have connectivity
to internal hosts...  From there, the ability to connect directly from the
Internet to internal hosts is only an SSH-dynamic-port-forward away...).
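To make that last point concrete, the cross-over path needs nothing more exotic than this (hostname invented):

```shell
# Log in to the dual-homed box from the Internet side and open a
# dynamic (SOCKS) forward; connections through localhost:1080 then
# emerge on the internal VLAN, whatever the IP forwarding setting
ssh -D 1080 someuser@dual-homed-host
```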

 We are hoping that we don't have to dedicate a port to the Internet
 connected Linux host. Otherwise we may have to purchase additional OSA
 cards.

To do this safely, your solution must include VLAN.  Overlaying IP subnets
would be nothing more than security-by-obscurity.

Hope this helps...  Cheers,
Vic Cross



Re: How to share TCPIP and VSwitch from level 1 to level 2

2005-05-09 Thread Vic Cross
On Mon, May 09, 2005 at 04:14:12PM -0700, Ranga Nathan wrote:
 Is there a way to run just one TCPIP stack and one VSWITCH for both level
 1 and level 2 z/VM?

If what you want to achieve is to have the 1st and 2nd level guests in the
same IP subnet, you could do it a few ways...

1) At the first level directory, add one NICDEF statement for each of the
  guests in your second level z/VM to the directory entry for the 2nd level
  z/VM.  The NICDEFs will attach the NICs to your VSWITCH.  Inside the 2nd level
  system the virtual NICs will appear as a lot of OSAs -- DEDICATE the triplet
  for each of them to one of your 2nd level guests.
2) At the first level directory, add one NICDEF statement with DEVICES as large
  as you can make it to the directory entry for your second level z/VM.  The
  NICDEF will attach the NIC to your VSWITCH.  Inside the 2nd level system you
  would see a single OSA, but you DEDICATE a triplet to each of your 2nd level
  guests thereby sharing the virtual OSA between all your 2nd level guests.

Those two both use the VSWITCH defined at the 1st level, but there are a couple
of ways to get the end result by using a VSWITCH inside 2nd-level:

3) At the first level directory, add one NICDEF statement to the directory
  entry for your second level z/VM.  Inside the 2nd level system, create a
  VSWITCH that connects to the virtual OSA.  Connect all your 2nd level guests
  to the 2nd level VSWITCH.
4) At the first level directory, DEDICATE a triplet from the real OSA through
  to the 2nd level system.  Inside the 2nd level system, create a VSWITCH that
  connects to the real OSA.  Connect all your 2nd level guests to the 2nd
  level VSWITCH.
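As a rough sketch, the directory statements for 1) might look like this (device numbers and the VSWITCH name are examples; check the NICDEF operands against the CP Planning and Administration book for your level):

```
USER VMSECOND ...
  NICDEF 0600 TYPE QDIO LAN SYSTEM VSWITCH1
  NICDEF 0610 TYPE QDIO LAN SYSTEM VSWITCH1
  NICDEF 0620 TYPE QDIO LAN SYSTEM VSWITCH1
```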

Note that 1) is pretty much guaranteed to work, but is the most resource-
intensive (system and sys-prog) method.  2), 3) and 4) I've never done, but
should work just fine.

Of course if your secondary objective is to not run a TCPIP stack at all in 2nd
level, disregard 3) and 4) (the 2nd-level VSWITCH obviously requires it).  In
this case, I'm sure you already realise that you'll only be able to reach
consoles at 2nd level by DIALling from 1st level... :)

Cheers,
Vic Cross



Re: xinetd - how to figure out from which interface a connection is coming

2005-05-02 Thread Vic Cross
On Sun, May 01, 2005 at 04:24:02PM -0300, Ulisses Penna wrote:
 I realy need the local address. The xinetd is listening to all
 possible local interfaces: eth0:1 and eth0:2 and eth0:3 and so on.

It's a bit more work on the xinetd side, but you can set up xinetd to bind on
each interface separately, then modify the invocation of your shell script so
that the interface name is passed to it as part of the command line.  A bit of
an ugly hack, but it would certainly do the job for you.

Check out the "bind" parameter (or its synonym "interface") in the man page
for xinetd.conf.
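Roughly, you'd keep one copy of the service definition per address -- everything below is invented for illustration, so see the xinetd.conf man page for the attributes your real service needs:

```
# /etc/xinetd.d/myservice-eth0-1
service myservice
{
    socket_type = stream
    wait        = no
    user        = nobody
    bind        = 192.168.1.11            # the eth0:1 address
    server      = /usr/local/bin/handler.sh
    server_args = eth0:1                  # tell the script its interface
}
```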

 Is there a clue/tip at the /proc/${PID} or something that can tell me?
 Or other location at /proc? Or some environment variable? Do I have
 to write a C app to find out the local IP?

Perhaps xinetd might set an environment variable that you can use?  Would be
easier than copying a xinetd configuration dozens (hundreds?) of times...

Cheers,
Vic Cross



Re: src-vipa

2005-04-19 Thread Vic Cross
On Tue, Apr 12, 2005 at 09:20:21AM -0400, [EMAIL PROTECTED] wrote:
 I'm trying to use src_vipa and use it system-wide.   I've placed
 src_vipa.so in ld.so.preload.   Seems to be working OK.   BUT if I do a
 ls, I end up with a session that seems to be hung.   Can't load a new
 bash subshell either.When I use Cntrl-C, I get the bash shell prompt
 (as if I was a single user startup).   Any way to fix this?   We're
 running SLES 8.

How did you determine that it seemed to be working okay?

I can think of two possibilities for the problem you're seeing:

1. The system cannot locate src_vipa.so in LIBPATH -- better to specify the
   full pathname to the preloaded library.  Having been caught by something
   like this before, the only way to fix it (if you've rebooted and can't get
   a working command prompt) is to boot off an install or rescue system,
   mount up the disks, and fix the problem.

2. Your system is set up to get auth data from LDAP, and the LDAP server
   doesn't recognise your system when it uses the VIPA address for commun-
   ication.  (When account data is held in LDAP, your system has to do LDAP
   lookups to resolve all the UIDs and GIDs in the ls output to
   names...)  If this is the case, find out what configuration would be
   stopping the LDAP server from talking to you.  Routing might be a problem
   here -- can hosts in the network route traffic to your VIPA?
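If it's the first case, the fix is simply to use the absolute path (shown here as an example -- and note the redirect replaces the file's contents):

```shell
echo '/usr/lib/src_vipa.so' > /etc/ld.so.preload
ls /    # sanity-check immediately; if this hangs, undo from a rescue system
```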

Hope this helps...

Cheers,
Vic Cross



Re: Guest LAN broken on SLES9 SP1...?

2005-04-17 Thread Vic Cross
On Wed, Apr 13, 2005 at 11:51:46AM -0400, Dennis Musselwhite wrote:
 VM63261 is a z/VM 4.3.0 APAR and the most recent service affecting Guest LAN
 on z/VM 4.3.0 would be VM63655 (PTF UM31332) and all prereqs.  If that does
 not fix the problem we will need more diagnostic information.

I ran the exact configuration (same Linux minidisks, same Guest LAN config,
same SYSTEM CONFIG, same machine/LPAR config) under z/VM 5.1 and everything
worked as expected.  So it does imply there is a z/VM maintenance dependency
for recent versions of the Linux qeth driver (for grins one day I might
apply maintenance to the previous system just to know for sure, but I have
things working now, and too few hours in the day...).

Sorry to bother you all to advertise my lack of maintenance discipline :(  I
won't feel so bad if you all take the hint, though! :)

Cheers,
Vic



Guest LAN broken on SLES9 SP1...?

2005-04-13 Thread Vic Cross
G'day all,

I'm stumped.  I just applied SP1 to a working SLES9 system, and after the
IPL my Guest LAN connection now appears to be non-functional.  I've checked
all the obvious things but so far I'm none the wiser.  Connectivity to
another system on the same Guest LAN is fine.

I cannot ping VM TCPIP from Linux, or Linux from VM (VM TCPIP is my router).
Obviously there's no SSH or other traffic flow either.

I hit the archives because the problem sounded familiar to me, but I only
see problems with the dodgy SLES8 update from a couple of months back.  I've
tried turning on fake_ll, which didn't change anything.  I've tried applying
the most recent SLES9 kernel update (beyond what SP1 brought in), which also
didn't change anything.  I don't want to have to go back to the pre-SP1
kernel, because the updated kernel was the reason I put on SP1 :-(

Other things I've checked include firewall rules (there are none), MTU size
(matches, and won't affect small pings anyway), routing in z/VM TCP/IP.

This z/VM system doesn't get much service attention (Q VMLAN says
VM63261) -- could it be some maintenance I need?

Any ideas?  (Hopefully) Relevant displays below.

Cheers,
Vic -- 11:20pm here now so responses in the morning; hope you all have a
better today than I had! :)


CP Q NIC DET
Adapter 0F00  Type: QDIO  Name: LXGLAN  Devices: 3
  Port 0 MAC: 02-00-00-00-00-04  LAN: SYSTEM LXGLAN   MFS: 8192
  RX Packets: 83370  Discarded: 0  Errors: 0
  TX Packets: 36146  Discarded: 0  Errors: 0
  RX Bytes: 64694381 TX Bytes: 2427470
  Connection Name: HALLOLE   State: Session Established
  Device: 0F00  Unit: 000   Role: CTL-READ
  Device: 0F01  Unit: 001   Role: CTL-WRITE
  Device: 0F02  Unit: 002   Role: DATA
  VLAN: ANY  Assigned by user
Unicast IP Addresses:
  172.18.8.31  Mask: 255.255.255.0
  FE80::200:0:500:4/64
Multicast IP Addresses:
  224.0.0.1MAC: 01-00-5E-00-00-01
  224.0.0.251  MAC: 01-00-5E-00-00-FB
  239.255.255.253  MAC: 01-00-5E-7F-FF-FD
  FF02::1  MAC: 33-33-00-00-00-01
  FF02::1:FF00:4   MAC: 33-33-FF-00-00-04

ip route
172.18.8.0/24 dev eth2  proto kernel  scope link  src 172.18.8.31
127.0.0.0/8 dev lo  scope link
default via 172.18.8.254 dev eth2

ip addr
1: lo: <LOOPBACK,UP> mtu 16436 qdisc noqueue
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 brd 127.255.255.255 scope host lo
inet6 ::1/128 scope host
   valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop qlen 1000
link/ether 02:00:64:4d:00:80 brd ff:ff:ff:ff:ff:ff
3: sit0: <NOARP> mtu 1480 qdisc noqueue
link/sit 0.0.0.0 brd 0.0.0.0
4: eth1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop qlen 1000
link/ether 02:00:64:8d:00:80 brd ff:ff:ff:ff:ff:ff
5: eth2: <BROADCAST,MULTICAST,UP> mtu 1492 qdisc pfifo_fast qlen 1000
link/ether 02:00:00:00:00:04 brd ff:ff:ff:ff:ff:ff
inet 172.18.8.31/24 brd 172.18.8.255 scope global eth2
inet6 fe80::200:0:500:4/64 scope link
   valid_lft forever preferred_lft forever

uname -a
Linux blxs901 2.6.5-7.151-s390 #1 SMP Fri Mar 18 11:31:21 UTC 2005 s390 s390 s390 GNU/Linux



Re: Per engine pricing..

2005-04-11 Thread Vic Cross
On Mon, Apr 11, 2005 at 09:15:00AM -0500, Tom Duerbusch wrote:
 Now I don't know many department managers that would spend X dollars
 out of their budget to save Y dollars in someone elses budget.  You
 might use the savings to justify the expenditure up the line, but the
 savings are soft dollars.  Never measured.  Never seen.

So you just need to go a bit further 'up the line' to find the point at
which the two departments have a common manager, and sell it to him/her. :)

At some level in any organisation there's someone who would have visibility
of both budgets -- if that's the Managing Director (CEO, President, whatever)
then it might be a bit harder but it doesn't make the point invalid...

The nice thing is that cutting utility costs doesn't (usually) do anyone
out of a job -- it makes for an easier cost-saving proposition than the
budget cuts that might mean job losses.  How many CEOs would knock back a
proposal that: a) reduced overheads, and b) didn't involve giving anyone the
sack?

Cheers,
Vic Cross



Re: appropriate value of LOAD AVERAGE

2005-01-31 Thread Vic Cross
On Thu, Jan 27, 2005 at 08:33:28AM -0800, Fargusson.Alan wrote:
 An uptime of 1 to 2 indicates that you have something running in a loop.  You 
 should use the Linux top command, or the ps command, to find it.

If you mean that in a negative sense (a 'crashed' process in a tight loop,
for example), that's a bit of a generalisation.  Sure, it might be a dud
process, but it could equally be your production database or web server
handling a lot of work at this minute.

If this is observed on a machine like one of Rob's or Adam's, you'd be right
in thinking that something was wrong somewhere.  Likewise, if all the
intervals show almost the same number, you can see that it's not a spike of
CPU usage.

When I'm compiling programs on my dual-CPU systems, I regularly see load
averages of 3 to 4.  I've seen 6 and higher at times.  A fellow I work
with uses the uptime output from a *very* busy machine he administers as
part of his .signature: numbers well over 400 for 1-, 5-, and 15-minute
intervals.

I guess I just wanted to say that load average equal to or greater than the
number of CPUs is not necessarily a bad thing.  As a sysadmin, the 'normal'
number you'd expect to see will vary from system to system, and you get to
know it -- when something abnormal appears, you just know. :)
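On any Linux system you can put the numbers side by side like this (the /proc/cpuinfo format varies a little by architecture):

```shell
# 1-, 5- and 15-minute load averages, then the CPU count to judge them
# against; load >= CPU count just means no idle CPU, not a fault
cat /proc/loadavg
grep -c '^processor' /proc/cpuinfo
```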

Cheers,
Vic Cross



Re: CTCMPC Driver

2005-01-25 Thread Vic Cross
On Thu, Jan 20, 2005 at 10:04:40AM -0800, Wolfe, Gordon W wrote:
 I cannot find the ctcmpc.o module anywhere on my SLES8-SP3 (31-bit) system.  
 Nor can I find it on any of the installation or service rpms I have 
 downloaded from the SuSE site.  Can anyone tell me where it might be?

Gordon,

As Mark indicated it is a fairly recent addition to the kernel patches.

The release notes for Communications Server for Linux on zSeries (CSLz?)
state that the protocol supported by the CTCMPC driver is not the MPC+
protocol we've kind-of taken for granted with VTAM (it's the earlier, pre-
plus version).  So I'm not sure what kind of performance to expect.

I'd suggest using Enterprise Extender over a Hipersockets as your path back
to VTAM.  It is quite easy to set up -- yes, even on z/OS, if you haven't
already started using it yet :) -- and you get to take advantage of the speed
of Hipersockets for the path between Comms Server on Linux and VTAM (and the
link is fully APPN/HPR-capable as well).  Most importantly, no special
device drivers are needed.  I run this way, linking a CSLz instance to a z/OS
1.4 system over a Guest LAN (just to say I could ;), but also to demonstrate
that the type of IP connection doesn't really matter).

EE also helps simplify things if you want to link to systems in other CECs
-- even other sites.  Just add another station definition with the correct
IP address, and let the routing network handle it.

Drop me a line off-list, if you like, if you've got questions about EE.

Cheers,
Vic Cross



Re: VIPA and hot standby

2004-12-08 Thread Vic Cross
On Wed, Dec 08, 2004 at 01:06:31AM -0500, Alan Altmark wrote:
 Do the open source solutions deal with STOMITH?  SAfLi will perform FORCE
 or will use LPAR management APIs to kill off the offending machine.

STO[MN]ITH is important in the clustering techniques that involve all the
machines actively listening for traffic and independently making decisions
about who shall respond (like in DHCP, for instance, where clusters of
DHCP servers will all respond to a request for a lease, and the client
just picks the first (or otherwise best) reply it gets...  The servers have
delaying techniques that ensure a lightly-patronised server will get in
ahead of a heavily-patronised one, but I digress...)

In clustering techniques where only one server can receive requests (like
what you'd have just using Keepalived, for instance), STO[MN]ITH might not be
so important -- no traffic is sent to the bogus host, which can be just as
good as it being shot.  You're obviously at the mercy of what method you use
to determine that a host needs to be shot -- but that's true of any failover
system.

Note I say "can be" as good -- obviously there might be other good reasons
for wanting the lame duck out of the pond.  DoS of the environment through
excess CPU consumption or network jabbering are examples.

Just trying to illustrate that sometimes you don't need *all* the bells and
whistles.

Back to the original question: yes, a quick search of the Linux-VS site does
show some information about STO[MN]ITH capabilities.  There is also a lot of
kernel heartbeat support that in PC-land interfaces with hardware (and
software) monitors to trigger a reset.  I think I even recall seeing something
in Keepalived.  I don't imagine it being too difficult to interface that stuff
to the z/VM SMAPI, or even the new feature that came in with z/VM 5.1 that
does this very thing.

Cheers,
Vic Cross
  at home, speaking for myself only

PS:
STOMITH: Shoot The Other Machine In The Head
STONITH: Shoot The Other   Node  In The Head
Yes, I know: you say tomato, I say tomato... :)
Just in case anyone saw STOMITH and wondered if that was anything like
STONITH -- they're the same.  Well, except that N might be a little more
appropriate in a z/VM scenario, as we probably don't want to shoot the whole
machine... :)

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390


Re: VIPA and hot standby

2004-12-07 Thread Vic Cross
On 08/12/2004, at 10:21am, Tom Shilson wrote:
Would you care to mention the names of a few of those packages?
Keepalived is one.  http://www.keepalived.org.  I wrote this up in a
Redpaper a little while back (REDP-3657, Virtual Router Redundancy
Protocol on VM Guest LANs).
Although my paper talked about using Keepalived for router redundancy
(simply using the VRRP component), it is also useable for the intended
purpose.  VRRP provides an application IP address which will transfer
from one machine to the other as required.  What you do with that IP
address is up to you: I simply used it for redundant IP gateway
function (the traditional purpose of VRRP) but you could easily host
applications behind them.
Alan S, you might get value out of the status checking features of
Keepalived that let you point the daemon at a static web page (for
example) and make failover decisions on the actual status of the
application.
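
For the curious, a minimal keepalived.conf VRRP stanza might look like the
following (the interface name, virtual router ID and address are invented for
illustration, and the application health-check pieces are configured
separately):

```
# Sketch of a Keepalived VRRP instance -- names and addresses are hypothetical
vrrp_instance VI_1 {
    state BACKUP            # let priority decide who is master
    interface eth0          # interface on the Guest LAN / VSWITCH
    virtual_router_id 51
    priority 100            # higher priority wins the election
    virtual_ipaddress {
        192.168.0.10        # the service IP that floats between guests
    }
}
```

The address under virtual_ipaddress is the one your clients use; Keepalived
moves it to whichever guest currently holds the master role.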
As Mark mentioned, there are quite a few of these packages.  Keepalived
(which is a sub-project of Linux Virtual Server, btw) is one that from
experience works on Guest LANs (so should work fine on VSWITCHes too).
Cheers,
Vic Cross
--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390


Re: Hipersockets and z/VM access

2004-11-25 Thread Vic Cross
Ranga Nathan ([EMAIL PROTECTED]) wrote:

 Changing the MTU to 8192 made TCP/IP force it to 16384 and everything
 started working!

The MTU size must be set appropriately with respect to the definition of the
Hipersockets CHPID in HCD.  This definition in effect determines the maximum
'block size' that can be sent on that CHPID, and there are corresponding
maximum MTU sizes that must be used.  Note that the largest MTU is less than
the hardware block size!

You'll need to check what was defined in HCD for that Hipersockets, and then
verify the largest MTU you are allowed to use -- and if it's a Hipersockets
Guest LAN on z/VM, check the DEFINE LAN as that hardware setting is mirrored
there.
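
For reference, the documented correspondence between the HCD maximum frame
size and the largest usable MTU can be sketched as a lookup table (these are
the values I believe current documentation gives -- verify them against your
own CHPID definition):

```python
# Largest usable MTU (bytes) for each HiperSockets CHPID maximum frame size.
# Note the MTU is always smaller than the hardware frame ('block') size.
MAX_MTU = {
    16 * 1024: 8192,
    24 * 1024: 16384,
    40 * 1024: 32768,
    64 * 1024: 57344,
}
```

So a CHPID defined with a 24KB frame size explains an MTU of 16384 being the
workable setting.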

 It is like dealing with teenagers!

I would describe everything about working with computers that way!  ;)

Cheers,
Vic Cross

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390


Re: Hipersockets - z/OS side

2004-11-24 Thread Vic Cross
On Wed, Nov 24, 2004 at 05:15:16PM -0500, Peter E. Abresch Jr.   - at Pepco 
wrote:
 Yes, QDIO and hipersockets (iQDIO) require VTAM which is part of z/OS's
 Communication Server.

To expand on this slightly...

For several releases of Comms Server (since about OS/390 2.5, ISTR) VTAM
has done all communications I/O on behalf of TCP/IP[1].  For TCP/IP device
types that were in existence at the time, the change was done so that
TCP/IP device definitions in VTAM were dynamically handled by VTAM under
the covers and transparently to sysprogs.

For the new device types, since it was a clean slate, perhaps it was
easier to have the sysprog do the VTAM work than code the automagics.  :)

Cheers,
Vic Cross

[1] With the introduction of Enterprise Extender (HPR over IP) at around
OS/390 2.8, there was a neat circular dependency created: TCP/IP had to
start after VTAM for TCP/IP devices and links to initialise properly, but
TCP/IP had to be up before VTAM for the VTAM EE devices to start properly
at VTAM startup...  It's all fixed now, though (TCP/IP was altered so that
it could start before VTAM, and delay its device starts until VTAM was up).

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390


Re: Updated Red Paper REDP3719?

2004-11-08 Thread Vic Cross
On 09/11/2004, at 7:26am, Chan, Emil wrote:
Is there an updated version (for zVM 5.1) of IBM Red Paper REDP3719 -
Linux on IBM eServer zSeries and S/390: VSWITCH and VLAN Features of
z/VM 4.4.
I sent a note to Emil off-list on this, but at this time there is not
an update to this paper.
Cheers,
Vic Cross
(at home, speaking for myself)
--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390


Re: VLAN Connectivity w/ RHEL 3

2004-11-07 Thread Vic Cross
On Fri, Nov 05, 2004 at 01:21:36PM -0600, Mark Wheeler wrote:
 Has anyone successfully connected to a VSWITCH from a RHEL 3 guest? In our
 first go-around the real switch port we're connected to was configured as a
 host port, and we were able to get through. When it was reconfigured as a
 trunk port (where we want to be), I can't seem to make the connection. I'm
 running kernel 2.4.21-9.

Since it all worked when the switch port was 'host', that proves your VSWITCH
attachment to the Linux guest is okay.  Perhaps the trunk port was not added
to all the VLANs it needs to belong to, or the routing instance in the switch
does not belong to the right VLAN.

Try changing things incrementally.  Instead of changing from no-VLAN-at-all
to fully-VLAN, try GRANTing your guest just to a single VLAN with no VLAN
config in your Linux guest.  This will mean the VSWITCH will do your VLAN
tagging for the VLAN you want (the one you GRANT the guest to) on the guest's
behalf, which will test the switch configuration.  If that works okay, then
progress to configuring a single VLAN virtual interface in Linux for the VLAN
you're GRANTed to -- this should function the same.  When that works, add
additional VLAN virtual interfaces in Linux.
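
From a suitably privileged CP user, the single-VLAN GRANT step might look
like this (userid and VSWITCH name taken from the Q VSWITCH output quoted
below; check the exact SET VSWITCH syntax for your z/VM level):

```
CP SET VSWITCH VSWF9 GRANT L277747R VLAN 4066
```

With the guest granted to just one VLAN, the VSWITCH tags and untags frames
on the guest's behalf, so plain untagged config in Linux should work.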

  q vswitch all det
 VSWITCH SYSTEM VSWF9  Type: VSWITCH  Active: 1 MAXCONN: INFINITE
   PERSISTENT  RESTRICTED  PRIROUTER  MFS: 8192 ACCOUNTING: OFF
   State: Ready
   CONTROLLER: VSWCTL2   IPTIMEOUT: 5 QUEUESTORAGE: 8
   PORTNAME: OSAF9   RDEV: F900 VDEV: FFF0
   RX Packets: 0  Discarded: 0  Errors: 0
   TX Packets: 58 Discarded: 0  Errors: 0
   RX Bytes: 0TX Bytes: 4872
 Authorized userids:
   L277747R VLAN:  4066   4067
   SYSTEM   VLAN:  ANY
 VSWITCH Connection:
   Device: FFF2  Unit: 002   Role: DATA
   VLAN: ANY  Assigned by user
 Adapter Owner: L277747R NIC: 2D00  Name: L277747R
   Device: 2D02  Unit: 002   Role: DATA
   VLAN: ANY  Assigned by user

 /etc/chandev.conf
   noauto
   qeth0,0x1d00,0x1d01,0x1d02,0,0

 qeth1,0x2d00,0x2d01,0x2d02,0,0;add_parms,0x10,0x2d00,0x2d02,portname:L277747R
 /etc/sysconfig/network
   NETWORKING=yes
   HOSTNAME=l277747r
   VLAN=yes
 /etc/sysconfig/network-scripts/ifcfg-eth1.4066
   DEVICE=eth1.4066
   BOOTPROTO=static
   IPADDR=169.10.238.131
   NETMASK=255.255.255.192
   ONBOOT=yes
 /etc/sysconfig/network-scripts/ifcfg-eth1.4067
   DEVICE=eth1.4067
   BOOTPROTO=static
   IPADDR=169.10.238.194
   NETMASK=255.255.255.192
   ONBOOT=yes
 /proc/net/vlan/config
   VLAN Dev name| VLAN ID
   Name-Type: VLAN_NAME_TYPE_RAW_PLUS_VID_NO_PAD
   eth1.4066  | 4066  | eth1
   eth1.4067  | 4067  | eth1

I don't see what your ifcfg-eth1 looks like.  Possibly the problem is that
you still have an IP address and/or route table entries on the non-VLAN
primary interface, and your traffic is being discarded (since you're GRANTed
to more than one VLAN ID, VSWITCH will not tag-and-send untagged frames since
it cannot guess which VLAN ID to tag them with).
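
For completeness, an ifcfg-eth1 for this setup would carry no IP address at
all -- something like the following sketch (not taken from the poster's
system):

```
DEVICE=eth1
ONBOOT=yes
BOOTPROTO=none
# no IPADDR/NETMASK here -- the untagged primary interface carries no traffic;
# addresses live only on the eth1.4066 and eth1.4067 VLAN interfaces
```

That keeps the route table free of entries that would send untagged frames
into the VSWITCH.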

Also, I see a qeth0 in your chandev.conf: make sure that traffic is not being
routed out that interface instead of through the VSWITCH.  If you're still
having trouble, we'll need to see the output of your favourite route table
display command (netstat -r, route, ip route ls).

Cheers,
Vic Cross

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390


Re: SLES9 Yast2 mail and MTA config

2004-10-28 Thread Vic Cross
On Thu, Oct 28, 2004 at 05:34:23PM -0500, Adam Thornton wrote:
 On Thu, 2004-10-28 at 17:21, David Boyes wrote:
  Small scale developer workstations or dedicated application
  workstations that do little or no application DNS traffic.

 Yeah, but if they do so little DNS traffic, then why not, you know, just
 let them hit the *real* DNS server?

Pretty sure that nscd caches more than just DNS.  If you're using NIS or
LDAP, for example, nscd is supposed to prevent you from driving your
information service into the ground whenever someone does an ls -l.

Cheers,
Vic Cross

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390


Re: z/VM LPAR Configuration Question

2004-10-24 Thread Vic Cross
On Sun, Oct 24, 2004 at 10:53:02AM -0400, Peter E. Abresch Jr.   - at Pepco wrote:
 Do I define the VM LPAR as Linux only?

Short answer: yes.

Longer answer: Defining an LPAR as Linux Only is what tells the system that
you want that workload to be dispatched on IFLs -- what's the L in IFL stand
for again? ;)  The fact that z/VM is not Linux is not important here -- you
are running z/VM to support Linux workload.

IIRC, the archives have longer discussions about what kinds of z/VM workload
is permitted on IFLs, and other such topics for night-time reading. :)

Cheers,
Vic Cross

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390


Re: SLES9 installation problem - again

2004-10-22 Thread Vic Cross
On 22/10/2004, at 11:48 pm, Alan Altmark wrote:
David, the 1000base-T feature can operate in QDIO or non-QDIO mode.
The
Gigabit Ethernet feature operates only in QDIO mode.
Okay then... (we're all learning on this one  :)
Back to the IOCDS then: since we are really trying to work in LCS mode,
is OSE the correct CHPID type?  Should it be OSA?
Cheers,
Vic Cross
PS: On Gigabit/1000BaseT -- it's disappointing that the industry
chooses to use the generic term "Gigabit Ethernet" to refer to a
specific implementation of Ethernet that operates at gigabit speed.
Sigh...  No wonder everyone believes networking to be so confusing ;)
--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390


Re: SLES9 installation problem - again

2004-10-21 Thread Vic Cross
On Thu, Oct 21, 2004 at 09:10:37AM +0200, ??? ??  ??? wrote:
 My system has an OSA Express 1000Base-T which is configured using these IOCP 
 statements:
  CHPID PATH=(CSS(0),02),SHARED,*
PARTITION=((LINTST,PROD,TEST),(=)),TYPE=OSE,*
PCHID=141
  CNTLUNIT CUNUMBR=1200,PATH=((CSS(0),02)),UNIT=OSA
  IODEVICE ADDRESS=(1200,253),CUNUMBR=(1200),UNIT=OSA
  IODEVICE ADDRESS=(12FE,1),CUNUMBR=(1200),UNIT=OSAD

Technically, your CNTLUNIT and IODEVICE are incorrect[1].  Device type OSA
and OSAD are for older OSAs like OSA-2.  Use OSE or OSD instead -- unless
you *really* want your OSA-Express to run in LCS mode, but I doubt that works.

 I went back to the main menu and chose option 2 (Ethernet OSA). (This worked in 
 SLES8). I entered my first device address (1200) and waited. The message that came 
 up said:
 Lcs: loading LCS driver ($ Revision: 1.72.2.4 $/$ Revision 1.15.2.2 $)

 And it's been like that for quite a while.

 What am I doing wrong?

If your card is really an OSA-Express, do not use Option 2.  It's loading
the LCS module (as you can see) which is not correct for OSA-Express.

 Suggestions I received:
 1. Try option 3 - this did not work.

It should -- this might be your incorrect hardware definition causing a
problem.

 2. Issue the dmesg command to look for more information - How do I interrupt the 
 configuration script so I can issue commands.

You can type "^c" (without the quotes) to simulate a Ctrl-C, which should
break you out of the script and give you a prompt.

Hope this helps,
Vic Cross


[1] I say *technically* incorrect because I have a system where I have an
OSA-Express defined as OSA and it's okay.  However this is a z/OS system, and
the device type entries are only in the MVS definition not in the actual
IOCDS.  Why is it defined that way?  Long story.  :)

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390


Re: Formatting and Partitioning on SLES 8

2004-10-20 Thread Vic Cross
On Wed, Oct 20, 2004 at 04:44:33PM -0500, John Kaba wrote:
 SuSE Instsys zlinux:/root # dasdfmt -f /dev/dasda1 -b 4096 -d cdl
 ^^^
I don't think this is what you want.

If you've done dasdfmt and fdasd already on /dev/dasda, your next step is to
make a filesystem on the partition /dev/dasda1.  If you have not yet run
dasdfmt, you mean to format the drive (/dev/dasda) not the partition.

# dasdfmt -f /dev/dasda -b 4096
# fdasd -a /dev/dasda
# mke2fs -j /dev/dasda1

The first formats the device (CDL is the default nowadays, no need to specify
it), the second creates a single partition that uses all space on the device,
the third creates an ext3 filesystem on the partition (if you want a different
filesystem type, substitute the appropriate command).

Cheers,
Vic Cross

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390


Re: 3270 console in Slack390 9.1

2004-09-28 Thread Vic Cross
On Tue, 28 Sep 2004, Richard Pinion wrote:
Now how did you know I was using Hercules.
Plenty of us do; it probably wasn't a difficult guess... ;)
In Hercules one starts a TN3270 session using the IP address of the PC
that is running Hercules and specifies port 3270.
That's the Hercules equivalent of plugging in a 3270 terminal.  Oh, and
switching it on, of course.
I'm doing this but I'm not getting anything.
Are you running a program against the device that represents your 3270?
If you want to log in via your 3270, check that a getty or equivalent
(mgetty, mingetty) is running for the device.  These are usually started
from inittab, but you could test just by starting it manually from the
command line of another (working) session.  You'll need to know the device
node name (the /dev/ name) of your 3270 and specify that on the getty
command.
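
As a sketch, an inittab entry for this generally takes the following shape
(the id field, device name and getty flavour here are hypothetical -- copy
the style of the respawn lines already in your /etc/inittab, and substitute
the /dev node your 3270 driver actually created):

```
# hypothetical /etc/inittab entry -- substitute your real 3270 device node
t1:2345:respawn:/sbin/mingetty tty3270
```

After editing inittab, "telinit q" makes init re-read it without a reboot.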
Cheers, HTH,
Vic Cross
--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390


Re: Question how to use define two OSA adapters and use them with VIPA under z/VM

2004-09-10 Thread Vic Cross
On Thu, 9 Sep 2004, ING. A. Neij wrote:

 I wonder how to define VIPA correctly.
 After defining VIPA for the TCPIP guest I can reach/ping  the z/VM LPAR at
 three IP addresses (see under for extract of profile tcpip).

Your profile looks okay, with one observation:

 HOME
   10.12.100.10 VIPA1  ; -- SOURCEVIPA for ETH0 and ETH1
   10.12.100.08 ETH0
   10.12.100.09 ETH1
 ; (End HOME Address information)

This obviously works for you, but might lead to problems due to ARP
caching in your hosts and routers.  VIPA addresses are normally allocated
in a separate subnet from the physical network interfaces, so that the
VIPA address itself is never directly associated with an ARP entry.  That
way if a network link fails, routing will take care of finding a new path
to the destination and you don't have to worry about old entries in ARP
caches.
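
As a sketch, the HOME list with the VIPA moved into its own (hypothetical)
subnet would look like this:

```
HOME
  10.12.200.10 VIPA1  ; VIPA in its own subnet, e.g. 10.12.200.0/24
  10.12.100.08 ETH0
  10.12.100.09 ETH1
; (End HOME Address information)
```

Remote hosts then always reach 10.12.200.10 via a route through one of the
physical addresses, so no ARP entry for the VIPA itself ever goes stale.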

 When I take a look at the devices I can see OSA device 1500 is used but
 OSA device 1300 is not used???

I'm going to assume that you want traffic to be load-shared across the two
ports...  This will not happen automatically.

The route table in a TCP/IP host controls which interface is used for
outgoing traffic.  Basically, the IP stack will look for the most specific
route that matches the destination of the packet to be transmitted.  When
the route table shows more than one entry for a given route (ETH0 and ETH1
in your case), even if that's the default route, it will simply use the
one that's higher in the table -- in your case, it seems that is ETH0.

Remember that VIPA is first-and-foremost a redundancy enabler for higher
availability.  It does not address load sharing or balancing.

 Where else do I have to define other/additional  OSA devices e.g. to use
 them with VIPA?

If you're at z/VM 4.3 or higher, check out your TCP/IP manual for
EQUALCOSTMULTIPATH in the ASSORTEDPARMS.  This allows traffic to be
balanced over equal cost routes (i.e. adding a little more smarts to the
decision path I described above).  This works for static routes as well as
OSPF or RIP, but if you're using ROUTED for dynamic routing it will get
turned off (so migrate to MPROUTE :).
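
In PROFILE TCPIP that looks something like this (statement names as I recall
them from the TCP/IP manual -- check the syntax for your z/VM level):

```
ASSORTEDPARMS
  EQUALCOSTMULTIPATH
ENDASSORTEDPARMS
```

With that in place, outbound traffic is spread across the equal-cost routes
over ETH0 and ETH1 instead of always taking the first matching entry.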

Note that this will only alter the behaviour of traffic that leaves your
TCP/IP stack.  It does not change how traffic is sent to you.  If you
stick with your current config you may end up with a fairly even
distribution anyway (due to some randomness in the timing of ARP responses
from your OSAs), but that's not to say that it's a reason to stay that way
;)

Cheers, hope this helps,
Vic Cross

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390


Re: Question how to use define two OSA adapters and use them with VIPA under z/VM

2004-09-10 Thread Vic Cross
On Fri, 10 Sep 2004, Harold Grovesteen wrote:
 These parameters provide load sharing for outbound traffic from z/VM to
 the network.  Work with your networking people to ensure that they are
 supporting equal cost routing to provide load sharing of traffic inbound
 to z/VM.  Use OSPF if you can for quickest recovery of a network path
 failure.  You and the routers adjacent to z/VM must use the same routing
 protocol for this to work.  Even if your network generally uses a
 different routing protocol (EIGRP for example), OSPF or RIP can be
 integrated on the network side to accommodate the z/VM requirements.

I started to say something similar last time, but didn't follow it through
:)

On this topic, I have to add one comment about the original poster's
configuration (refresher: that config has the VIPA as another address on
the same Ethernet segment as the OSAs).  When it comes to balancing
inbound traffic, no routing configuration will reliably ensure that
traffic is shared between the interfaces.  Reason: the neighbouring
routers do not actually route to reach the VIPA, since it is the same
subnet.  They will simply use ARP to reach it, and then you're just
getting the ARP response randomness I mentioned last time.

While it might be possible to force-advertise the VIPA via OSPF (or RIP)
using a host mask, which would in theory result in the two equal-cost
routes we need for multipathing, whether the neighbour routers actually
took any notice of those routes would be another question.

I don't mean to harp on that aspect of the configuration, but this was
something that occurred to me as important in this context.


Cheers,
Vic Cross

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390


Re: SLES9 I must be stupid

2004-09-05 Thread Vic Cross
On Sat, 4 Sep 2004, Gary A. Ernst wrote:

 The release notes indicate that you copy... detail snipped

The same release notes have an (easily missed, IMHO) statement that the
installer can only use a relative directory path for FTP installs.  That
means that your install tree MUST be underneath the home directory of the
userid you are using to access the FTP server.  This applies to anonymous
FTP as well.

I used a bind mount to do this (I could not use a symlink as my
installation files are on a different filesystem and vsftpd doesn't follow
symlinks outside the initial filesystem).

# mount -o bind /data/installtrees/suse/sles-9-s390/ /srv/ftp/suse

where the tree of install directories has been set up at
/data/installtrees/suse/sles-9-s390, and the FTP home directory (for
anonymous FTP) is /srv/ftp.  Now in YaST, the path to the installation is
just ftp://192.168.0.1/suse.

Of course you don't have to bind mount...  You can also set up the
installation tree under the FTP user's directory and loopback mount the CD
images there (which means that very little data really exists under the
FTP directory, just the spots to mount the CDs).  Be careful that you can
successfully browse via FTP into the loopback-mounted CDs though -- again,
vsftpd gets funny about crossing between filesystems.  For some reason it
doesn't mind bind mounts -- but that may be because I used a real FTP
userid and set up the bind mount under that user's home directory rather
than using anonymous.
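
If you want the bind mount to survive reboots, an /etc/fstab entry like this
(same paths as the example above) should do it:

```
# bind-mount the SLES install tree into the anonymous FTP area at boot
/data/installtrees/suse/sles-9-s390  /srv/ftp/suse  none  bind  0 0
```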

Hope this is helpful...

Cheers,
Vic Cross

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390


Re: Tcpdump on Linux-390.

2004-08-20 Thread Vic Cross
On Friday 20 August 2004 18:23, Rob van der Heij wrote:
Rv I know better than argue with you... I recall the early version had
Rv tweaked one of the formats since I could not use the same thing to
Rv decode a tcpdump stream from my PC anymore.

Good point.  If you use tcpdump-qeth -r to read a file you captured on a PC, I
suppose you will get garbage because the script will add an extra LLC header.
The real tcpdump program should have been used in this case.  Hopefully
SuSE's package did not make tcpdump a link to tcpdump-qeth...  :)

Last year I was using tcpdump-qeth to capture a trace (using the -w option)
from a Linux guest on a VSWITCH, and I was able to view the result with
standard ethereal on my laptop.  So I know it works! ;)  (For the
ultra-curious, on page 36 of the VSWITCH Redpaper you can see the result of
such a capture.  Notice the all-zeroes MAC addresses in the Ethernet header
-- that 'empty' Ethernet-II header is what tcpdump-qeth adds to the packet
flow to keep tcpdump and friends happy.)

Cheers,
Vic

PS: The same Redbook credits tcpdump-qeth to Holger Smolinski.  See, I knew it
was an IBMer!  Sorry, Holger ;)

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390


Re: Dynamic I/O Configuration and removing eth1

2004-08-20 Thread Vic Cross
On Friday 20 August 2004 09:07, Peter E. Abresch Jr.   - at Pepco wrote:
PE Thanks for responding. Here is what I get:
PE
PE linuxd01:/etc/sysconfig/network # lsmod
PE Module  Size  Used byNot tainted
PE qdio   37040   0
PE 8021q  15256   0  (unused)
PE nfsd   80392   4  (autoclean)
PE ipv6  329288  -1  (autoclean)
PE key41840   0  (autoclean) [ipv6]
PE lcs29248   1
PE dummy0  1108   1
PE ctc49824   3
PE fsm 2032   0  [ctc]
PE lvm-mod70408  11  (autoclean)
PE dasd_eckd_mod  56648   5
PE dasd_mod   49748   7  [dasd_eckd_mod]
PE reiserfs  254940   7
PE
PE What do I removed. I still have eth0 going so I cannot remove lcs.

Firstly, make sure you've removed the old LCS definition for eth1
from /etc/chandev.conf.  Then, two things to try:

echo reset_conf > /proc/chandev
echo read_conf > /proc/chandev
echo reprobe > /proc/chandev

This *might* cause the LCS driver to release its definition for eth1.

Otherwise, change the qeth1 at the start of the line for your new
interface in /etc/chandev.conf to something else -- I'd say qeth-1, to let
chandev pick the interface number -- and repeat the above three commands.  Your
interface will probably be eth2 until your next reboot, however, so this might
be more trouble than it is worth to do it dynamically...

Cheers,
Vic Cross

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390


Re: mp2003

2004-07-27 Thread Vic Cross
On Tue, 27 Jul 2004, Noll, Ralph wrote:

 Ok... I have put the debian in vm rdr.
 Ran rexx exec to copy to tape

 Ipl from the mp2k3 the tape
 I get a disabled wait 000a0

You are likely to need the tapemarks between the files...  David gave the
command to do this with his REXX version of the script.  Put one tapemark
between each file and two at the end, and see what difference it makes.

Cheers,
Vic Cross

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390


Re: Deleting files in a directory more than 5 days old

2004-07-27 Thread Vic Cross
On Thu, 22 Jul 2004, Post, Mark K wrote:

 If you want files that are older than 5 days, and haven't been accessed
 in that time, the -atime predicate does that.

Beware that some distro vendors and performance mavens are starting to
recommend that the noatime mount option be used to increase disk
performance.  Before using 'find -atime', check if your filesystems are
mounted this way and test in an inconspicuous area to see that the
result is as you expect (I'm pretty sure that noatime will break it).
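
Assuming atimes are being maintained, the check itself is straightforward --
a sketch (the directory is an assumption; substitute your own, and verify the
list before swapping -print for -delete):

```shell
# List files not accessed for more than 5 days. Check first that the
# filesystem is not mounted noatime, and review the output before
# replacing -print with -delete.
DIR=${DIR:-/var/tmp}
find "$DIR" -type f -atime +5 -print
```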

Cheers,
Vic Cross

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390


Re: mp2003

2004-07-26 Thread Vic Cross
Sorry Ralph, a bit late to this thread.  :)

On Fri, 23 Jul 2004, Noll, Ralph wrote:

 Can I or can I NOT bring up the marist version and
 Install Debian?

If you want to use the Marist build as a starter system for the Debian
install, the answer is yes.  Look for debootstrap -- it allows you to
install a Debian system into a new set of disks mounted at an arbitrary
mount point.  It does not matter what distro you currently have installed.
A quick Google came up with http://www.inittab.de/manuals/debootstrap.html
(this describes using a Knoppix CD and debootstrap to install Debian on a
PC, but it's an example of how distro-agnostic debootstrap is).

You cannot (should not) install over the top of your existing Marist, but
on your MP2k3 it would just be a matter of defining new emulated devices
for your new Debian system and mounting them up.

Catches are you need to obtain and build the debootstrap source on the
existing system, meet a couple of pre-requisites (including wget) and said
existing system needs to be able to get network access to a Debian APT
repository (but you were going to be configuring for one of those anyway,
right?).
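
The invocation itself is short -- a sketch only (target mount point and
mirror URL are assumptions; Debian 3.0's codename is "woody" and the 31-bit
mainframe architecture is "s390"). The command is echoed here; run the
echoed line as root once the new disks are formatted and mounted:

```shell
# Sketch: install Debian woody for s390 into a mounted target tree.
SUITE=woody
TARGET=/mnt/debian
MIRROR=http://archive.debian.org/debian
echo debootstrap --arch s390 "$SUITE" "$TARGET" "$MIRROR"
```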

Cheers,
Vic Cross

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390


Re: mp2003

2004-07-26 Thread Vic Cross
On Fri, 23 Jul 2004, Adam Thornton wrote:

 If you're running on an MP2k3, you could also get Matt Zimmerman's
 preconfigured DASD image (50 MB) from:
 http://people.debian.org/~mdz/hercules/Debian-3.0r1.3390

Sorry Adam, Matt's image uses the Hercules CCKD compressed format so it
won't work on other emulators.  Good thinking, but!  :)

Cheers,
Vic Cross

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390


Re: VM TCPIP guest router not on GLAN?

2004-07-26 Thread Vic Cross
G'day Daniel,

If there were no changes to PROFILE TCPIP, SYSTEM CONFIG or the directory
before the DR test, it would Just Work.  I suspect that a change to one of
these snuck in, and either VM TCPIP had the wrong IP address (at the Guest
LAN interface) or the TCPIP machine was not attached to the Guest LAN.
Depending on how your routing is set up, it might also have been a problem
with the route table on VM TCPIP.

On Mon, 26 Jul 2004, Daniel Jarboe wrote:

 VM under VM, and the first time we tried to bring it up the OSA on 1st
 level VM was defined odd-even-odd.  But we shutdown, got it redefined
 even-odd-even, and started again (this time WARM).

Your difficulties talking to the OSA would -- should ;) -- not have
impacted the Guest LAN.

Cheers,
Vic Cross

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to [EMAIL PROTECTED] with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390


Re: VM TCPIP guest router not on GLAN?

2004-07-26 Thread Vic Cross
On Mon, 26 Jul 2004, Daniel Jarboe wrote:

 In this case would the ping to the gateway ip address from
 the outside have succeeded?

It may well have -- at least it would on most Linux systems.  Linux, for
example, will respond to a packet addressed to any of its configured IP
addresses from any active interface.  In practical terms, if you have an
internal network interface and you ping its address from the outside, the
outside interface will respond.  As long as the internal address is
configured, the external interface will respond to the pings -- even if
the cable is unplugged on the internal interface.  If VM TCPIP behaves the
same way, then you'll be thinking that all is well on the inside...

 For future reference... how would we reattach it :)?

 They tried a:
 DEFINE NIC 700 QDIO DEV 3
 COUPLE 700 SYSTEM VMGLAN
 from TCPMAINT, but I think it should have been done for TCPIP
 instead.  How would they have done that?

Exactly right -- the above would have defined and coupled a new NIC to
TCPMAINT, not TCPIP.

I had to do this once, but do you think I can remember how?  :)
A couple of options come to mind:
* use OBEY to send the commands to TCPIP
* one of the options on the IFCONFIG command

You may also be able to use the message interface, but I'm not sure on
that...
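For the OBEY option, a hedged sketch of what the commands might look like (NETSTAT's CP subcommand needs the issuing user in TCPIP's OBEY list -- check the TCP/IP planning book for your level; the device numbers here are just the ones from the quoted attempt):

```
NETSTAT CP DEFINE NIC 700 QDIO DEV 3
NETSTAT CP COUPLE 700 SYSTEM VMGLAN
```

Those run the CP commands under the TCPIP virtual machine itself, which is what couples the NIC to the right guest.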


Cheers,
Vic



Re: SuSE 8 - package installation or build from targz?

2004-07-19 Thread Vic Cross
On Mon, 12 Jul 2004, Post, Mark K wrote:

 I don't know what traditional method you're talking about that did
 dependency checking.

Ranga, are you referring simply to autoconf?  Sure, the packager of
Product A should be making sure that the configure script ends up checking
for all of the prerequisites Product A has (Product B, Library C, a sane
gcc, etc).  This is not true dependency checking, however: because it
occurs only at the time of Product A's build, there is nothing stopping
you from subsequently updating Product B or Library C to a version that
will not work for Product A, or removing Product B or Library C
completely.

These are the kinds of issues that package managers like RPM try to avoid.
Mixing RPMs and source-builds will create the kind of woe that Mark is
referring to (you think you've got dependency problems *now*).

A common complaint among those who experiment with mixing RPMs and
source-builds goes like this: "Please help me, I am trying to install
foojit-1.2.3-s390.rpm, and it says that libpants.so.1 is not installed,
but it is, because I built the pants library from source and installed it,
why is RPM lying?"  :)

If you really want to build things from source on an RPM-based distro
(usually this would be because you want a version later than that which
your distributor uses), build an RPM instead of installing directly from
the source.  In locating or writing a .spec file (the file that determines
how the RPM is built), try to get one from the distro you're working with,
for a version of the product as close to the version you're building as
possible.  I've had a bit of luck building things for S/390 just by using
the SRPM from development versions (Red Hat's rawhide, for instance), or a
spec file from an Intel version of the package (again with Red Hat, before
RHEL 3 came along and packages from RHL 7.2 were getting woefully out of
date, I built later versions of things like Zebra by using SRPMs from RHL
8 or 9 or RHEL 2.1).

Give it a go, RPM is not really very scary.  By doing this you get the
version of the package you want, built locally with dependencies that you
manage, and still get RPM's dependency checking (as long as you define the
right dependencies in your spec file and don't just remove everything so
that the package will install, which defeats the purpose).  It comes at a
cost though;  I doubt SUSE or Red Hat will support you for packages you
built yourself.
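As a sketch of that workflow (the package name is invented, and on older RPM versions the rebuild verb lived in rpm itself rather than rpmbuild):

```
# rebuild a source RPM into a binary RPM for your architecture
rpmbuild --rebuild foojit-1.2.3-1.src.rpm

# then install the result (the output path varies by distro and version;
# SuSE used /usr/src/packages, Red Hat /usr/src/redhat)
rpm -ivh /usr/src/packages/RPMS/s390/foojit-1.2.3-1.s390.rpm
```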

Which leads to the other alternative: Mark's suggestion to try another
distro.  There's a bit of variety around now that Slackware and Gentoo
have joined Debian in the mix (since Tao is built from RHEL3's SRPMs, it
won't relieve RPM-grief).


Cheers,
Vic Cross



Re: Novell Linux Technical Resource Kit

2004-06-30 Thread Vic Cross
On Tue, 29 Jun 2004, Adam Thornton wrote:

 Has anyone ever had ACPI do anything useful for them?

One of my boxen here has an Asus P4P800 motherboard, carrying an Intel P4
with HT.  Unless I enabled ACPI in the kernel (2.6.4-ish; it was the same
with late 2.4), the second (logical) CPU was not recognised.  Might have
been a peculiarity of the mobo or chipset, but that was useful to me ;)

Cheers,
Vic Cross



Re: logical volume question

2004-06-30 Thread Vic Cross
On Tue, 29 Jun 2004, Andy Engels wrote:

 I'm dropping in some software and all of a sudden I'm out of space.  Is
 there a way to verify that /home is really the mounted logical volume file
 system?

As usual with UNIX/Linux, there's more than one way to see what your
mounted filesystems are.  Two common examples (these from an i386 system):

$ mount
/dev/root on / type ext3 (rw)
none on /dev type devfs (rw)
none on /proc type proc (rw)
/dev/hda2 on /usr type ext3 (rw)
none on /dev/shm type tmpfs (rw)
/dev/vg0/newhome on /home type ext3 (rw)
/dev/hdc1 on /mnt/hdc1 type ext3 (rw)
/mnt/hdc1/ISOs on /home/samba/ISOs type none (rw,bind)
none on /proc/bus/usb type usbfs (rw)

$ df
Filesystem  1K-blocks  Used Available Use% Mounted on
/dev/root 1921156   1305408518156  72% /
/dev/hda2 9614148   3002532   6123240  33% /usr
none   516288 0516288   0% /dev/shm
/dev/vg0/newhome 41284928  26131060  13056716  67% /home
/dev/hdc176922968  48704152  24311284  67% /mnt/hdc1
/mnt/hdc1/ISOs   76922968  48704152  24311284  67% /home/samba/ISOs

mount just tells you what's mounted where; df gives you utilisation data
as well.  You can see in these examples that my /home is on an LV, so you
should see something very similar on your system if /home is mounted
correctly.
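If you want to script that check, here's a small sketch (the function name is mine; it just parses the "device on mountpoint type fs (opts)" lines that mount prints):

```shell
# Print the device mounted at a given path, reading mount-style
# output ("device on mountpoint type fs (opts)") from stdin.
check_mount() {
  awk -v m="$1" '$2 == "on" && $3 == m { print $1 }'
}
```

Use it as: mount | check_mount /home -- empty output means nothing is mounted there.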

If it's not, the problem might be that /etc/fstab was not updated after
the LV was created and formatted.  You'll need to fix that, but not before
moving the existing /home into the LV (without overwriting what may
already be on the LV, if it was at some stage successfully mounted).  Mark
has some hints on moving filesystems on linuxvm.org that might be useful
to you for this.
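A hedged outline of such a move, reusing the LV name from my example above (keep users off /home while copying, and look at what's on the LV before you touch it):

```
mount /dev/vg0/newhome /mnt      # inspect what's already on the LV first
cp -a /home/. /mnt/              # copy the existing /home across
umount /mnt
mount /dev/vg0/newhome /home     # and add the matching /etc/fstab entry
```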

Cheers,
Vic Cross



Re: Using crypto cards with Linux

2004-06-24 Thread Vic Cross
On Wed, 23 Jun 2004, Wolfe, Gordon W wrote:

 The interesting thing is that these items show up on the other LPARS
 running VM, but not for the IFLs.  the other LPARs are set to ESA/390
 whereas the IFL domains are set to Linux Only.

There is a cryptic note in one of the Redbooks or manuals (sorry I don't
have the reference handy, but I'm almost positive it's in the Redbook)
that states that it is not possible to define crypto cards to an LPAR
which is set as "Linux Only".  The reference indicated that the
restriction applied only to a certain z900 with a particular microcode
level, but perhaps not...  I first saw this reference nearly twelve months
ago, so a lot may have changed.

 I thought perhaps maybe crypto doesn't work on IFLs, but previous
 messages on these lists indicate that it does

I was about to say yes, it definitely does, until I realised that the
processor I was thinking of was using all-CPs at the time that crypto was
working...

If the "Linux Only" restriction is indeed true, then it would imply that
it is not possible to use crypto on IFLs.  However, I'd be surprised if
updating the microcode on the machine didn't fix it right up.  If you're
already up to date, well...

 My CE and I are obviously missing something.  Anyone have any idea what
 it might be?

Just for grins, try defining the LPAR as ESA/390 and see what you get
(pretty good chance it will work).  Then you (and your CE) will have some
ammo when you place the service call.  ;)

Other than that, it should all Just Work.  Load the z90crypt driver, and
run the test programs given in the Redbook -- you should get good results.

Cheers,
Vic Cross



Re: First time Linux under S/390

2004-06-16 Thread Vic Cross
On Wed, 16 Jun 2004, Paul Burke wrote:

 I got it working but not in the manner I would have preferred. The MAN
 device connecting to the M3000 H30 operates as a Lan Channel Station with
 multiple virtual channels/connections. So I configured the Linux machine to
 be a part of the same subnet and gave it ownership of a virtual
 channel/connection. The benefit here is that I now know I have a working
 linux instance and am not wondering whether Linux, VM TCPIP, gateway
 statements, PC routing statements or the price of fish in Bratislava is the
 problem!

I know, I have trouble keeping up with the fish market as well ;)

You will obviously have a limited number of virtual channels that you can
assign to Linux guests, so depending on how many guests you want to run
you may strike a problem with this method.

While VM TCPIP (or a Linux guest) can provide the software router
function that you spoke of, there is no real necessity to do this if your
environment is simple enough.  One alternative (which another poster
previously mentioned) is Proxy ARP, which (basically)  makes the Linux
guest appear as if it is in your Ethernet segment so no routing is needed
to reach it behind the VM TCPIP stack.  Note, however, that Proxy ARP only
works for guests at the end of CTC or IUCV links, so if you have a z/VM
Guest LAN in your future you need to consider the software router
approach now.

To switch to Proxy ARP, return your VM TCPIP and Linux guest to the
previous CTC configuration, add the ASSORTEDPARMS PROXYARP statement to
your TCPIP PROFILE, and change the IP address of the Linux end of the CTC
to an address that is within the 192.168.168.0/25 subnet of your Ethernet
(you could use the IP address that you've given the Linux guest now for
using the MAN).
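The PROFILE TCPIP side of that is only a short fragment.  As a sketch (your existing ASSORTEDPARMS options, if any, go in the same block):

```
ASSORTEDPARMS
  PROXYARP
ENDASSORTEDPARMS
```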

As I said, if you are looking at the possibility of using z/VM Guest LAN
(either the so-called QDIO or the virtual HiperSockets) you will need to
look at routing.  You would set up your Guest LAN using the
192.168.168.128/25 subnet range you're already using, or you could get
adventurous and break that range further into /26 or /27 subnets which
would allow you to use multiple Guest LANs for different purposes.

Hopefully this helps a little toward the to-route-or-not-to-route
question, but I think Carlos hit the answer on the original connectivity
problem.  In my previous note I mentioned that there was work you would
need to do if you had an OSA, and it seems like the Bustech box has a
similar 'restriction'.

These devices employ a pre-routing function.  For packets that arrive on
the LAN interface, they need to know which connection (LPAR, guest, etc)
to send each packet to.  The decision is made using the packet's
destination IP address.  The device keeps a table which maps IP
destination to connection, and any packets for unknown destinations are
dropped (the OSA has a 'wildcard entry', which I mention later).  So,
your device needs to know about any IP addresses that live either in *or
past* the VM TCPIP stack in order for it to send the packet to VM TCPIP
for forwarding.  This is required regardless of whether you use Proxy ARP
or software routing.

 Whilst not a real requirement as yet I would like to have the option of
 operating VM TCPIP  as a gateway to Linux instances.

If the Bustech box has an equivalent to the OSA's "primary router"
function -- this is the 'wildcard entry'; nominate one connection that
receives traffic for any IP address that I otherwise don't know how to
handle -- I'd suggest setting that for your VM TCPIP connection.  This
will mean that any traffic arriving at the Bustech for which there is no
specific address entry will be sent to VM TCPIP.  The alternative is to
have to make a config change to this box every time you add a Linux guest
OR define the entire subnet of Linux addresses in advance; the latter was
not possible with the OSA-2, because the OAT had a (quite small) limit on
the number of addresses that could be defined there.

Hoping this helps!

Cheers,
Vic Cross



Re: First time Linux under S/390

2004-06-14 Thread Vic Cross
, that is stopping the
traffic flowing.  If that's clear, check that IP forwarding is enabled in
the VM IP stack (ASSORTEDPARMS NOFWD must not appear in PROFILE TCPIP, and
you should see IP forwarding is enabled in the TCPIP service machine's
spool output).

Cheers,
Vic Cross



Re: z/VM Linux Network recommendations

2004-06-14 Thread Vic Cross
On Wed, 9 Jun 2004, Lionel Dyck wrote:

 we are once again embarking on a linux on zseries pilot and our mainframe
 network folks are arguing about things...

Lionel,

I join the VSWITCH chorus!

Just be aware that using VSWITCH makes you totally dependent upon the
network guys for availability and failover.  What you gain from this lack
of control, of course, is a reduction in resource consumption from being
able to rid the environment of those pesky router guests, and the warm
feeling that comes from knowing that if connectivity to your guests goes
away it's Someone Else's Problem ;)

Remind them that availability requires a two-way street -- not only does
the request traffic have to reliably reach your guests, but the return
traffic has to be able to reliably leave the guests and get back to the
LAN.  In this scenario that means in addition to dynamic routing for the
network to learn the path to the Linux guests, VRRP or HSRP must be used
to present a single router image to your Linux guests (since they no
longer point to a nice stable z/VM TCPIP stack or Linux router guest
within your mainframe anymore, but a router out in the LAN...).

There has always been a need for us Penguin Farmers to work closely with
the router jockeys -- with VSWITCH, we have to work closer than ever.  It
seems like your network engineers are fairly together (I like the sound of
no static routing), so I expect you will have no trouble in this regard.

Another point -- don't get VLAN and VSWITCH confused.  VLAN is not
required for VSWITCH, meaning that if all your guests can appear to be on
the same network you do not have to configure any VLAN stuff at all (even
at the LAN switch ports).

Cheers, and best of luck,
Vic Cross



Re: Attempting to get hipersockets configured in linux

2004-05-27 Thread Vic Cross
On Thu, 27 May 2004, James Melin wrote:

 in /etc/rc.config (which was blank)

Here's a bug in your process.  SLES8 doesn't use /etc/rc.config -- but
even if it did and yours was blank, then you'd have the wrong file, I
think.  I believe that the location and function of the SuSEconfig file
was one of many things that changed between SLES7 and SLES8.  I don't have
a SLES8 system nearby right now, though...  On my SuSE 9 Athlon box there
is /etc/sysconfig/suseconfig, and it has the same look and feel as the old
rc.config, but it has no network configuration info in it.

 and lastly added a qeth1 definition to /etc/chandev.conf to reflect the
 devices assigned.

 qeth1,0x0ed9,0x0eda,0x0edb,0,0,0

I was going to advise you to add the additional parameters to
chandev.conf, as I've never configured a qeth device without them, like
so: add_parms,0x10,0x0ed9,0x0edb,portname:blah.  You're not setting a
portname though (which is fair enough now that it's no longer required) so
YMMV -- and indeed did, because the output of the driver loading that you
posted indicated that it loaded successfully and found the HiperSockets.

 The system comes up and I do an ifconfig to see what interfaces came up and
 hsi1 is not there.

I'd say that if you did ifconfig -a at this point you would see your
hsi1.

 so I then try to bring it up manually:

 pepin:~ # ifconfig hsi1 up
 pepin:~ # ifconfig
 eth0  Link encap:Ethernet  HWaddr 02:00:00:00:00:01
   inet addr:137.70.100.184  Mask:255.255.254.0
   inet6 addr: fe80::200:0:400:1/10 Scope:Link
   UP RUNNING MULTICAST  MTU:1492  Metric:1
   RX packets:968 errors:0 dropped:0 overruns:0 frame:0
   TX packets:361 errors:0 dropped:0 overruns:0 carrier:0
   collisions:0 txqueuelen:100
   RX bytes:124851 (121.9 Kb)  TX bytes:46248 (45.1 Kb)
   Interrupt:3

 hsi1  Link encap:Ethernet  HWaddr 00:00:00:00:00:00
   inet6 addr: fe80::200:ff:fe00:0/10 Scope:Link
   UP RUNNING NOARP MULTICAST  MTU:8192  Metric:1
   RX packets:0 errors:0 dropped:0 overruns:0 frame:0
   TX packets:1 errors:0 dropped:0 overruns:0 carrier:0
   collisions:0 txqueuelen:100
   RX bytes:0 (0.0 b)  TX bytes:88 (88.0 b)

 loLink encap:Local Loopback
  {standard crapola cut for brevity}

This is because your interface configuration is missing, because of the
rc.config issue mentioned at the top.  You need an ifcfg-hsi1 file
somewhere under the /etc/ directory (/etc/sysconfig/network, if it's like
my SuSE 9.0 box) for your interface to be correctly configured.  You could
copy the ifcfg-eth0 file in that directory and edit as required, or you
could use YaST to set up hsi1 this time and have a look at what files it
changes for future reference.
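For illustration only, such a file might look like the following (the address values are invented, and the exact option names can vary between SuSE releases):

```
# /etc/sysconfig/network/ifcfg-hsi1
BOOTPROTO='static'
IPADDR='10.1.1.2'
NETMASK='255.255.255.0'
STARTMODE='onboot'
```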

Hope this helps...

Cheers,
Vic Cross



Re: Initial LPAR install zSeries

2004-05-26 Thread Vic Cross
On Wed, 26 May 2004, Alan Altmark wrote:

 From Steve Nichols, HMC Architect, IBM:
 --
 Yes, it is possible to install Linux from the CD/DVD drive in the HMC. To
 do so, you use the "Single Object Operations" task on the HMC targetting
 the system (CPC) that you want to install on. When that window comes up,
 use the "Load from CD-ROM or Server" task targetting the specific Image
 (LPAR partition) to load Linux. There will probably be a README file on
 the Linux CD that gives more details on what to do from this point for
 your particular distribution.
 --

Alan, Steve is only partially correct with this.  Yes, you can use the
"Load from CD-ROM or Server" function on the HMC to start the installation
system in the LPAR.  Once started in this way, just the same as if a tape
IPL had been performed, that installation system has to find its way to
the rest of the files on the CD -- which means network or local disc,
since the HMC's CD/DVD drive is not an LPAR-addressable device.

Start the installation process, yes.  Install, well, no.  Sorry to be
a pedant... ;)

Cheers,
Vic Cross



Re: End of 3745 need???

2004-05-26 Thread Vic Cross
On Wed, 26 May 2004, Bodra wrote:

 From IBM iSource -- U.S. Announcements
  _ 204104 IBM Communications Server for Linux V6.2 opens the door to
   independent protocol networking (25.7KB)
   http://www.ibm.com/isource/cgi-bin/goto?it=usa_annredon=204-104
  _ 204106 IBM Communications Server for Linux on zSeries opens the door
   to independent protocol networking (26.5KB)
   http://www.ibm.com/isource/cgi-bin/goto?it=usa_annredon=204-106

 Can anyone explain me if this means the RIP for old 3745?

Carlos, whether you can get rid of your 3745 and replace it with Linux --
either on zSeries or not -- will have everything to do with what you use
your 3745 for now.  There are still functions that can only be performed
in NCP (SNI is a great example) that would require a move to a different
network design to allow the 3745 to be de-commissioned.

Comms Server for Linux code is based on the AIX code, so it doesn't
provide any additional functions above what's provided on any of the
non-MVS versions of Comms Server.  It may have additional restrictions
when it comes to hardware support compared to a pSeries box (the pSeries
may have more support for 'legacy' data links), but Linux probably would
have the advantage for newer hardware (does pSeries do 802.11[abg]?).

If you haven't already, check out the IBM Communication Controller
Migration Guide Redbook (SG24-6298).  This will let you check through the
functions that you're doing with 3745s now to find out how those functions
can (or cannot) be performed in other equipment.  Only now, whenever you
read "Comms Server for AIX" or "Comms Server for Windows", you can
substitute "Comms Server for Linux".  ;)


Cheers,
Vic Cross



Re: Crypto on z800

2004-05-26 Thread Vic Cross
On Wed, 26 May 2004, Ann Smith wrote:

 IBM configured our PCICA card to our VM LPAR that runs linux but we see
 the following errors:
 600 May 26 08:03:24 hostl1 kernel: z90crypt: Hardware error
 601 May 26 08:03:24 hostl1 kernel: z90crypt: Type 82 Message Header:
 00821000
 602 May 26 08:03:24 hostl1 kernel: z90crypt: Device 14 returned
 hardware error 13
 603 May 26 08:03:43 hostl1 kernel: z90crypt: Error in
 get_crypto_request_buffer: 16

Not sure if I ever saw exactly this error, but I can suggest to make sure
that you have the CRYPTO APVIRT statement specified in the Linux guest's
directory entry.  You do not need the CRYPTO parameter on the CPU
statement, this is just for MVS guests to give them access to the CCF (I
think).

You should be able to issue #CP Q CRYPTO (Q VIRT CRYPTO) in your Linux
guest and see the virtual queue to the PCICA.  The same command issued
by your suitably-privileged non-G user (a real Q CRYPTO) should show you a
little bit more info.

There are dependencies in LPAR activation and definition as well:

 * The crypto can only be accessed by one LPAR at a time.  You can define
a given crypto in the activation profile of more than one LPAR, but the
first LPAR activated will get that crypto (to the exclusion of all others
that may have it in their profile).

 * PCICAs and PCICCs have CP/IFL affinity, apparently: even if you get the
above right, if the crypto's affinity is to a CP/IFL that is not defined
or available to the LPAR, the LPAR will not be able to use the crypto[1].

Cheers,
Vic Cross


[1] I may be wrong on this -- this was a piece of advice we encountered
along our way while diagnosing problems with crypto on a z800, but we
later found that the PCICA in question failed diagnostics...



Re: P/390 3215

2004-05-22 Thread Vic Cross
On Sat, 22 May 2004, Alex J Burke wrote:

 Does anybody know of a fix for nothing appearing on the 3215 console when
 debian is booted? Maybe it is just the copy I have, but if networking isnt
 working AND I cant see anything makes it kinda hard to maintain.

Is this a system you previously had working, or an installation boot?  If 
it's a previously working system, I'd guess that you ran zilo/zipl and 
removed critical parameters from the boot data.

There are at least two critical kernel parameters needed to boot on a
P/390.  The first tells Linux to avoid the compulsory HSA at the end of
physical storage: if you have a 128MB card, you need "mem=124M" on your
parmline (without the quotes).  The other overrides the default automatic
detection of the console device (which doesn't work on a P/390 without VM).
Add "condev=0009" (replacing 0009 with your real 3215 console device, and
remembering it's a decimal value, so use 0x001F or 0031 for a console at
001F).
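The shell will do the hex-to-decimal conversion for you if you don't trust your arithmetic (a trivial sketch):

```shell
# condev= wants the device number in decimal; printf converts from hex
printf '%d\n' 0x001F
```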

There's some notes w.r.t. the old SuSE 7 GA at 
http://linuxvm.org/penguinvm/p390/

 Generally, what do people think is the best linux to run on a P/390? I am
 really talking free ones, so I guess thats the old Red Hat 7.2, Debian or
 the SuSE 31bit BETA that you can still download.

If you can get Debian to boot and install, I'd go with that.  It will be
the most current free one you can get other than TAO and Gentoo (which
I've not played with, but I think both need an existing driver system to
install).

 Lastly, are the Red Hat and SuSE linux versions aboe to show somehing on the
 3215 console?

Sure, just the same as they do running under VM and using the virtual
3215.  On a P/390 they just need to be told where to find it, that's all.  
;)

Cheers,
Vic Cross



Re: lcs devices under gentoo for s390

2004-05-18 Thread Vic Cross
On Mon, 17 May 2004, Post, Mark K wrote:

 From what I recall in the 2.4 kernels, sit0 was the interface for IPv6.  I
 wonder if I'm remembering wrong, or if this has changed.

Mark, you're correct, sit0 is the IPv6-in-IPv4 tunnel interface.

Richard, we really need to see if modules are loading, or trying to load,
etc.  I've not done a 2.6 on s390 so I can't help directly, but I do know
that the sit0 will not help you here (sorry)...

If you have had another Linux running in this system, it will definitely
help to get the values for the definition of the interface from that.
Sure, the config is not the same as 2.4, but at least you will know the
right numbers to put in places -- very important for getting the EMIO
stuff to work.

When I get some spare time I'm going to give this Gentoo build a try,
after that I might be more help...  :)

Cheers,
Vic Cross



Re: lcs devices under gentoo for s390

2004-05-18 Thread Vic Cross
On Tue, 18 May 2004, Richard Pinion wrote:

 During startup I do receive the message that the lcs module has been
 loaded.

That's good news -- what does the output of ifconfig -a show?  If the
lcs module is indeed successfully loaded and talking to network hardware,
you should see something in addition to your sit0...

 I've defined my ethernet adapter in /etc/ccwgroup.conf.  The
 Gentoo S/390 doc describes this as the file that replaced
 /etc/chandev.conf with the 2.6 kernel.

I know that you would have checked the syntax of this already, but it
probably wouldn't hurt to check it one more time ;)

What about /proc/subchannels and the like?  I guess they will have
equivalents in the new kernel...  Is there a /proc (or /sys) pseudofile
that is the equivalent of /proc/chandev?  These will be other helpful
places to look.

Cheers,
Vic Cross



Re: ifl

2004-05-18 Thread Vic Cross
On Mon, 17 May 2004, Michael Morgan wrote:

 I believe that the only difference between an Integrated Facility for
 Linux (IFL) processor and a standard processor is that the IFL has a
 microcode fix to ensure that MVS systems can't use it.

A more subtle difference (or more obvious, depending on your perspective)
is that IFLs always run at full speed on machines like MP3000 and z800
that support hardware capped CPs.  I guess this still applies to z890
too...

So if you have a 7060-H30, or a 2066-0A1, if you get an IFL it will run at
normal speed and not at the speed of the CP.

Cheers,
Vic Cross



Re: FW: VM TCPIP (A Little Off Topic)

2004-05-17 Thread Vic Cross
On Mon, 17 May 2004 [EMAIL PROTECTED] wrote:

 I think my current struggle is in understanding the steps
 needed to implement a new service machine running TCPIP and keeping them
 seperated.

Steve, I discussed this in the VSWITCH and VLAN Redpaper which you can
find at:

  http://www.redbooks.ibm.com/abstracts/redp3719.html

I'd be keen to hear what you think of it, so please let me know or use the
feedback e-mail address on the Redbooks site.

Cheers,
Vic Cross



Re: New to Linux: where is located YaST on an installation system

2004-04-29 Thread Vic Cross
G'day,

On Thu, 29 Apr 2004, Janek Jakubek wrote:

 I booted the first time SLES-8 in an lpar from CD1.
 I can sign on as root via telnet ssh, however
 I do not seem to be able to run YaST.
 When I enter 'yast' command I'm getting:

 -bash: yast: command not found

I have had this problem when the final stages of the installation did not
complete properly.

In the last couple of days, Rob van der Heij posted about a command that
needs to be run (or sometimes does not run automatically) at the end of a
SLES setup -- perhaps this needs to be run to correctly finish the system
setup and give you access to YaST.

BTW, "yast" on the command line rather than "Yast", "YaST", "YAST" or
other combinations will work fine.

Cheers,
Vic Cross



Re: VSWITCH Connections

2004-04-15 Thread Vic Cross
On Fri, 16 Apr 2004 03:18 am, David Booher wrote:
DB Hello,
DB
DB Great thread!  I'm also going to implement VSWITCH tonight, but I
 understood the PROFILE TCPIP change to be something like this:
DB
DB VSWITCH CONTROLLER ON 0104 0106
DB
DB where a range of addresses (triplet) is used.  Am I wrong?

Not wrong.  You can do this, if you really feel the need to restrict the
controller to a particular set of RDEV.

I'd suggest that VSWITCH CONTROLLER is all you need; that way, you won't get
any nasty surprises if a controller fails and the RDEV range you've specified
on a backup controller doesn't suit...  Ideally, you want any TCPIP service
machine you've defined as a controller to be able to back up any other one,
and it adds complexity if you restrict each controller to an RDEV range.

Cheers,
Vic Cross



Re: Contraction for Linux on zSeries

2004-04-13 Thread Vic Cross
On Tue, 13 Apr 2004, Mark D Pace wrote:

 LInux on Zseries hARDware

 LIZARD

in reply to Scully, William who suggested:

 LinZ, pronounced Lindsey?

Well, down here LinZ would be pronounced Lin-ZED, which gets closer to
Mark's idea...  ;)  It would probably be further contracted to Linz
(i.e. Lins).

I know that others on the list have expressed a dislike for zLinux (or
IBM-ised to z/Linux) -- why was that?  zLinux would get my vote...

Cheers,
Vic Cross



Re: FTEs per Server

2004-04-13 Thread Vic Cross
On Tue, 13 Apr 2004, Adam Thornton wrote:

 Mains voltage across the keyboard is more traditional.

Y'know, Adam, I sort-of picked you as a BOFH-type...

;))

Cheers,
Vic Cross



Re: vsftpd troubles

2004-04-06 Thread Vic Cross
On Tue, 6 Apr 2004, Phil Hodgson wrote:

> Also the output from tcpdump (tcpdump-3.7.1-158) is unlike that which I
> am used to from sles7's tcpdump (not to mention my i386 SuSE 9). It does
> not seem to be interpreting the output and only gives a hex dump like
> this 12:05:03.631567 40:0:ff:6:52:b5 45:0:0:28:0:0 0af9 40:
>  898a 0af9 899e 803d 0015 cc51 523f 
>   5004  e7e2 
>
> when I am expecting something more akin to
> 12:11:51.915179 11.111.111.111.domain > lnx.NNN.NN.NNN.33299: 16857 NXDomain 0/1/0 
> (109) (DF)
> Anyone know how to persuade it to give a more readable output?

Use the tcpdump-qeth wrapper, instead of invoking tcpdump directly.  In
SLES8 it should be provided, although you may have to install another
package to get it...
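
That is, something along these lines (hedged -- the wrapper name is as I
remember it from SLES8, and the exact package and options vary by service
level):

```
# Plain tcpdump can't decode the qeth framing, hence the raw hex dump;
# the wrapper compensates for the qeth header so the usual protocol
# decoding works:
tcpdump-qeth -i eth0
```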

Cheers,
Vic Cross



Re: Sharing root mdisk with SLES 8.0

2004-04-01 Thread Vic Cross
On Thu, 1 Apr 2004, Phil Payne wrote:

> Wouldn't one set of cache entries versus dozens or hundreds make a
> difference in a large environment?

I suppose, but for the root filesystem there's generally too much
system-unique stuff in there.  Keeping that stuff unique while making the
filesystem shareable doesn't warrant the extra work -- as Mark says, /
needs no more than about 150MB on most systems.

You can't share everything.  Filesystems like /usr or /opt can give much
greater gains for (arguably) less effort, since they can run out to 2GB or
more.

Cheers,
Vic Cross



Re: Sharing root mdisk with SLES 8.0

2004-04-01 Thread Vic Cross
On Thu, 1 Apr 2004, Post, Mark K wrote:

> 150MB?  I said about 15MB.  I think I can fit a whole system into 150MB.  :)

Yep, I saw that in the ohnoseconds after sending my reply.  My point got
an order of magnitude stronger, though!  ;)

Cheers,
Vic



Re: Zebra, OSPFD, and VIPA Problems

2004-03-24 Thread Vic Cross
G'day Peter,

On Wed, 24 Mar 2004, Peter E. Abresch Jr.   - at Pepco wrote:

> when I use ifconfig eth1 up, the eth1 interface comes up but OSPFD never
> detects that it is available. Eth1 routes are never injected back into the
> routing table. When eth0 goes down, everyone drops even though eth1 is up
> and fully functional. The only method to get OSPFD to recognize an
> interface once it has gone down is to restart OSPFD, which obviously is
> disruptive. Does anyone have any recommendations or ideas? Thanks.

I have found that the zebra daemon can sometimes get itself in a knot when
it is not the one setting the IP address of the interfaces in question.

What I had to do for reliable OSPF operation (this is on a Red Hat 7.2
s390 system, BTW) is to remove all IP address details from the relevant
/etc/sysconfig/network-scripts/ifcfg-* files and use the 'ip address'
syntax in the zebra.conf file so that zebra configures the IP address.
Since there does not seem to be a way to set physical characteristics like
MTU size in zebra.conf, I left those in ifcfg-*.
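
The zebra.conf side of that looks something like this (the interface name
and address here are invented for illustration):

```
! zebra.conf fragment -- zebra now owns the address; the matching
! ifcfg-eth0 file keeps only the physical details such as MTU
interface eth0
 ip address 10.1.1.10/24
```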

On the same theme, I've also found that using the Linux commands (like
ifconfig, route, ip, etc.) on a box running zebra can be unreliable.
Zebra really likes to take control and ownership of the routing function,
and using the Linux commands to operate on interfaces and routes outside
of zebra's control can upset it.  It becomes necessary to really think of
the box as a router now, and use the router commands from the routing
daemon -- yes, this includes the famous Cisco IOS 'shutdown' command to
deactivate an interface, and its inverse 'no shutdown' to activate it.
Use your normal method to get a zebra prompt (telnet to the command port,
or the preferred 'vtysh' program), switch to enable mode, and away you
go[1].
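
So bouncing an interface from the routing daemon, rather than with
ifconfig, looks like this (IOS-style syntax; the prompt strings will vary
with your hostname setting):

```
zebra# configure terminal
zebra(config)# interface eth1
zebra(config-if)# shutdown
zebra(config-if)# no shutdown
zebra(config-if)# end
```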

In your case, since you're using zebra for VIPA (so I'm assuming this is a
server system and not a network router image), it might be hard to think
of it that way...  It could be argued that either set of commands should
work; I don't know if Quagga (the routing daemon formerly known as Zebra)
will offer any relief on this.

Cheers,
Vic Cross

[1] I remember someone writing about how to set up a userid that runs
vtysh as its shell, allowing a router admin to get access to zebra without
having to have their own shell account...  Oh yeah, that's right, it was
me, in the SHARE presentation from last year that I never got around to
sending to Mark...  D'oh...  RSN...  :)



Re: Linux, OSAs, and VIPAs....Oh my!

2004-03-20 Thread Vic Cross
G'day Peter,

On Fri, 19 Mar 2004, Peter E. Abresch Jr.   - at Pepco wrote:

> I have 2 z/OS LPARs and 1 SuSe Linux LPAR. I have 2 OSA/e shared between
> all three of these LPARs. If I shutdown both OSA interfaces on the Linux
> LPAR using ifconfig ethX down, then the VIPA becomes unresponsive on the
> z/OS LPAR. If I start the OSA interfaces back up using ifconfig ethX up,
> the VIPA becomes responsive. The VIPA that becomes unresponsive is also on
> the LPAR which is the default managing LPAR for the OSAs.

I thought of two reasons this might happen.  The first, which I
discounted, was that the Linux LPAR was setting PRIROUTER on the OSA-E
ports and your VIPA traffic was actually reaching z/OS via Linux.  I
discounted this because you don't need to set PRIROUTER on z/OS anymore
(with OSA-E): the VIPA address is one of the addresses registered in the
OSA when the OSA-E interface is activated.

The second thought: maybe your Linux systems are advertising (via dynamic
routing) the route to your VIPA.  I see in another message that you're
having trouble with Zebra starting, so it looks like you're using some
kind of dynamic routing...  Verify how the rest of the network learns
about the path to the addresses in your LPARs.  If you're running OMPROUTE
(or OROUTED) in z/OS make sure it's configured correctly -- perhaps Zebra
on your Linux system is advertising the z/OS routes for you; these routes
would disappear when the Linux system's network connection is removed.

Hope this helps...

Cheers,
Vic Cross



Re: Zebra not starting at Boot Time

2004-03-20 Thread Vic Cross
G'day Peter,

On Fri, 19 Mar 2004, Peter E. Abresch Jr.   - at Pepco wrote:

> notice/etc/init.d/rc3.d/S09zebra start
> Starting routing daemon (Zebra)startproc:  signal catched /usr/sbin/zebra:
> Segmentation fault

Is Zebra being started late enough in the startup process?  I'd check that
all of the other network tasks/modules are loaded before the system tries
to start zebra.  For example, SUSE systems use IPv6, so I'd guess that they
would have built IPv6 support into zebra, but if the IPv6 support is not
loaded yet then strangeness might occur...?
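
One quick sanity check (the S-numbers set the start order within the
runlevel; S09zebra is from your message, the rest is a typical SLES8 layout
and an assumption on my part):

```
# See where zebra falls relative to the network setup scripts:
ls /etc/init.d/rc3.d/
# If S09zebra sorts before the network scripts, add $network to the
# Required-Start line of zebra's INIT INFO header and re-run insserv
# so the start links are renumbered.
```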

> OSPFD starts fine. After I log on, I can issue the start for Zebra and
> everything seems fine.

?!?!?  That's odd, I've seen strangeness when the sub-daemons are started
before zebra.

> Does anyone have any suggestions? Thanks.

When you start zebra after logging on, do you use the init script?

Cheers,
Vic Cross



Re: Changing Ethernet adaptors

2004-03-10 Thread Vic Cross
Mike,

On Wed, 10 Mar 2004, Geiger, Mike wrote:

> Yes, LCS.  Also SLES8.  modprobe lcs and dmesg give me the same no lcs
> capable devices found messages.  This is running in an LPAR which I
> also use for a second z/OS 1.4 image.  The adaptor does work without
> issue with z/OS so I am confident that it is properly defined and
> assigned to the LPAR... at least for z/OS.

When you say it works okay in z/OS, do you mean that it works as a TCP/IP
interface and not VTAM?  Recall that the device manager in the EMIO config
is different for SNA devices than for TCP/IP, so make sure that you're
using the right device manager.

Sorry if this is too obvious, it's just that the default ADCD definition
for E40 is as an SNA device; I just want to make sure you were on top of
that.

Cheers,
Vic Cross


