I may have found the problem.
On VM, we have four TCP/IP stacks -- the main TCPIP for VM/CMS and three secondary
stacks (TCPIPT, TCPIPP and TCPIPD) which handle Linux. -T is for test with CTCs (so
we can cycle it without affecting production work), -P is for production with CTCs,
and -D is fo
No, there have been some updates since then:
~ > uname -a
Linux 2.4.7-timer-SMP #1 SMP Tue May 21 12:58:16 GMT 2002 s390 unknown
There is a non-timer-patched version from the same date.
Mark Post
-Original Message-
From: Adam Thornton [mailto:[EMAIL PROTECTED]
Sent: Friday, July 11, 200
On the SLES8 system,
ps -ax | grep port
  364 ?        S      0:00 /sbin/portmap

rpcinfo -p
   program vers proto   port
    100000    2   tcp    111  portmapper
    100000    2   udp    111  portmapper
    100024    1   udp   1024  status
    100024    1   tcp   1024  status
    132 udp
Gordon,
I'm assuming you have network access between the two systems. This can
be proven by pinging one server from the other.
You might consider using the 'showmount' command from the client, which
will tell you what filesystems the server is exporting. This is a good way to
start debugging failed NFS m
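The two checks above can be sketched as a short script; 'sles7srv' is a placeholder hostname for the SLES7 server, so substitute your own:

```shell
# Placeholder hostname for the NFS server; substitute the real one.
server=sles7srv

# Step 1: basic reachability between client and server.
if ping -c 1 "$server" >/dev/null 2>&1; then
    net_status="network ok"
else
    net_status="no route to $server"
fi
echo "$net_status"

# Step 2: ask the server's mountd what it is exporting.
if showmount -e "$server" 2>/dev/null; then
    export_status="exports listed"
else
    export_status="showmount failed (portmapper/mountd not reachable?)"
fi
echo "$export_status"
```

If ping succeeds but showmount fails, the problem is in the RPC layer (portmapper/mountd) rather than basic IP connectivity.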
*** Reply to note of Fri, 11 Jul 2003 14:16:52 -0400 (EDT)
*** by [EMAIL PROTECTED]
rpcinfo -p should tell you if portmapper is working.
Sal
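A small scripted version of that check (querying localhost; adjust the host as needed):

```shell
# Query the local portmapper's registration table and look for the
# portmapper entry itself (program 100000 on port 111).
if rpcinfo -p localhost 2>/dev/null | grep -q portmapper; then
    rpc_state="registered"
else
    rpc_state="not reachable"
fi
echo "portmapper: $rpc_state"
```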
"Post, Mark K" <[EMAIL PROTECTED]> writes:
>Gordon,
>
>None of those are the portmapper:
># ps ax | grep port
>   98 ?        S      0:00 /sbin/rpc.portmap
>
Gordon,
None of those are the portmapper:
# ps ax | grep port
   98 ?        S      0:00 /sbin/rpc.portmap
Mark Post
-Original Message-
From: Wolfe, Gordon W [mailto:[EMAIL PROTECTED]
Sent: Friday, July 11, 2003 2:12 PM
To: [EMAIL PROTECTED]
Subject: Re: Guest Lans on SLES8
Thanks Mar
Thanks Mark,
It appears to be running on both the SLES8 and SLES7 servers. ps -ax | grep rpc shows
the same on both systems:
  163 ?        SW     0:00 /sbin/rpc.statd
  275 ?        SW     0:00 [rpciod]
  278 ?        S      0:00 /usr/sbin/rpc.mountd
Gordon,
You need to be running portmapper on the local system, as well as the remote
system.
Mark Post
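A quick way to test for a running portmapper on either box; the '[p]ortmap' bracket trick keeps grep from matching its own command line:

```shell
# Look for /sbin/portmap (or rpc.portmap) in the process list.
if ps ax 2>/dev/null | grep -q '[p]ortmap'; then
    portmap_state="running"
else
    portmap_state="not running"
fi
echo "portmapper is $portmap_state"
```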
-Original Message-
From: Wolfe, Gordon W [mailto:[EMAIL PROTECTED]
Sent: Friday, July 11, 2003 1:57 PM
To: [EMAIL PROTECTED]
Subject: Guest Lans on SLES8
As I posted to this list yes
As I posted to this list yesterday, I have been able to convert TCPIP communications
on SLES8 under VM from VCTC over to Guest Lans.
I have Telnet, ssh, samba, ftp and apache all working happily with guest lans.
I can't seem to get nfs to mount a location on another (SLES7 with VCTC) server. Th
On Thu, 2003-07-10 at 21:21, Ted Manos wrote:
> I do not believe that the problem is SFS, or that SFS is hung. SFS
> continues to function perfectly normally when accessed from CMS. I also
> don't *think* that it is the VMNFS server, as that appears to continue to
> function normally for any/all
On the Linux side, mount the SFS (BFS) directory with rsize=1024,
wsize=1024... see if it hangs.
Also, we need your routing config... how many hops???
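A hypothetical invocation combining the small rsize/wsize suggestion with the NFSv2 fallback mentioned elsewhere in this thread; the server name and paths are placeholders, and the command is only echoed here since actually mounting requires root:

```shell
# Small transfer sizes plus NFSv2, per the suggestions in this thread.
opts="nfsvers=2,rsize=1024,wsize=1024"
# Placeholder server and paths; substitute your own.
cmd="mount -t nfs -o $opts sles7srv:/u/gordon /mnt/sfs"
echo "$cmd"
```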
John Summerfield <[EMAIL PROTECTED]> wrote:
Softbank Uway Selects Linux and IBM eServer Mainframe for Online University
Admissions Processing.
Korea's e-Document Leader Replaces 45 HP and Sun Servers
Korea's Softbank Uway, a leader in online university applications,
announced that it has selected Linux running on the IBM eServer z990
main
On Fri, 11 Jul 2003, Ashley Chaloner wrote:
> (Also, a particular annoyance is that processes in uninterruptible
> sleep are counted in the load average so there is a high load average
> without any load on the processor.)
load average counts active processes, and if it's actively waiting on a
dev
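One way to see this directly is to count D-state (uninterruptible sleep) processes, since each of them adds 1 to the load average without using any CPU:

```shell
# Count processes whose state starts with 'D' (uninterruptible sleep).
# grep -c prints 0 (and exits nonzero) when nothing matches, which is fine.
dcount=$(ps -eo stat= 2>/dev/null | grep -c '^D')
echo "processes in uninterruptible sleep: $dcount"
```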
The option to buy stock is interesting.
http://zdnet.com.com/2100-1104_2-1024633.html
On Thu, 10 Jul 2003, Ted Manos wrote:
> (cross-posted to [EMAIL PROTECTED] and [EMAIL PROTECTED])
>
>
> Hello all (particularly Alan, Romney and crew!),
>
>
> We have been doing testing with a new development version of SAS V9 for
> Linux390 for a couple months now, and had not run into any major
Ted,
I have had a similar situation mounting an NFS volume (from a Sun) to
either a RH-7.2 or a RH-RawHide VM.
The processes in question seem to be sleeping in function "down" or
"wait_on_inode". So they look like they're in uninterruptible sleep,
so they don't get scheduled, so they never receiv
Try 'mount -o nfsvers=2 '
WBR, Sergey