ZFS + NFS problems

2008-03-20 Thread Ruud Althuizen
Hello People,

I have a webserver with a ZFS pool for storing all the user data, so every
user has their own filesystem with a quota set. When exporting the pool I
ran into some problems with NFS, though.

On other machines I can either mount each user-specific share, which results
in an 80-line fstab per machine, or mount a higher-level directory. The
latter option, however, leaves all the directories and files under it mapped
to root.

Is there a solution to that last problem, or will I need to use something
else? Since the shares also get mounted by Linux machines, a server-side
solution is preferred.
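If it helps, one server-side direction (a sketch only -- the pool, dataset, and option names below are placeholders, not from your setup) is to put the export options on the parent dataset so that each new user filesystem inherits them, which at least removes the /etc/exports bookkeeping. Note that each child remains a separate NFS filesystem, so an NFSv3 client mounting only the parent will still see the children as empty root-owned directories; an automounter on the clients is the usual way around that.

```
# zfs create tank/users
# zfs set sharenfs="-maproot=nobody -network 10.0.0.0 -mask 255.255.255.0" tank/users
# zfs create tank/users/alice    # inherits sharenfs from tank/users
# zfs set quota=5G tank/users/alice
```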

-- 
Greetings,
Ruud Althuizen




Re: NFS Problems/Questions

2007-07-14 Thread Jason Morgan
On Sat, Jun 30, 2007 at 07:33:19PM -0400, Jason Morgan wrote:
 On Sat, Jun 23, 2007 at 07:42:24PM -0400, Jason Morgan wrote:
  On Sat, Jun 23, 2007 at 12:46:27PM -0700, Michael Smith wrote:
   [quoted text trimmed]

Re: NFS Problems/Questions

2007-06-30 Thread Jason Morgan
On Sat, Jun 23, 2007 at 07:42:24PM -0400, Jason Morgan wrote:
 On Sat, Jun 23, 2007 at 12:46:27PM -0700, Michael Smith wrote:
  [quoted text trimmed]

NFS Problems/Questions

2007-06-23 Thread Jason Morgan
I've been having some trouble with NFS performance for some time and
now that class is out, I've had a bit of time to investigate but I'm
stuck. Below are the details of my investigation. Hopefully, someone
here can give me some advice.

The basic problem is that my NFS performance is very slow. Right now,
I am connecting two workstations to an NFS server, which exports my home
directory, among other things. They are connected over a gigabit network
(right now with the MTU set to 7000, which is supported by all the
hardware -- changing it to 1500 has no effect on performance, which is
strange). Each system is running 6.2-RELEASE or -STABLE, and each uses
the following network card:

# ifconfig sk0
sk0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> mtu 7000
        options=b<RXCSUM,TXCSUM,VLAN_MTU>
        inet 10.0.0.2 netmask 0xffffff00 broadcast 10.0.0.255
        ether 00:17:9a:bb:05:87
        media: Ethernet autoselect (1000baseTX <full-duplex,flag0,flag1>)
        status: active

# dmesg | grep sk
skc0: <D-Link DGE-530T Gigabit Ethernet> port 0xec00-0xecff mem
  0xfdff8000-0xfdffbfff irq 18 at device 10.0 on pci0
skc0: DGE-530T Gigabit Ethernet Adapter rev. (0x9)
sk0: <Marvell Semiconductor, Inc. Yukon> on skc0
sk0: Ethernet address: 00:17:9a:XX:XX:XX

## Server /etc/rc.conf settings

rpcbind_enable="YES"
rpc_lockd_enable="YES"
rpc_statd_enable="YES"
nfs_server_enable="YES"
nfs_server_flags="-u -t -n 12"
nfs_bufpackets="32"
mountd_flags="-r"


## Client /etc/rc.conf settings

nfs_client_enable="YES"
nfs_bufpackets="32"
nfsiod_enable="YES"
nfsiod_flags="-n 6"
rpc_lockd_enable="YES"
rpc_statd_enable="YES"
rpcbind_enable="YES"

## /etc/exports

/usr -alldirs,maproot=root client1 client2


For performance benchmarking, I am using dd. Locally from the server,
this is a representative result when writing a 1GB file:

## Local write test (for an upper-bound on what to expect).

# dd if=/dev/zero of=./nfs.dat bs=1024k count=1000
1000+0 records in
1000+0 records out
1048576000 bytes transferred in 19.580184 secs (53552919 bytes/sec)

Connecting from a client (both clients get approximately the same
results).

## Remote connection (UDP), mounted in /etc/fstab with flags:
## rw,-U,-3,-r=32768,-w=32768

# dd if=/dev/zero of=./nfs.dat bs=1024k count=1000
1000+0 records in
1000+0 records out
1048576000 bytes transferred in 101.151139 secs (10366428 bytes/sec)

## Remote connection (TCP), mounted in /etc/fstab with flags:
## rw,-T,-3,-r=32768,-w=32768

# dd if=/dev/zero of=./nfs.dat bs=1024k count=1000
1000+0 records in
1000+0 records out
1048576000 bytes transferred in 59.668585 secs (17573334 bytes/sec)
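Written out as a full /etc/fstab entry on the client, the TCP mount above would look something like this (the server name and mount point are placeholders):

```
fileserver:/usr  /mnt/fileserver  nfs  rw,-T,-3,-r=32768,-w=32768  0  0
```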

As can be seen above, TCP is much faster than UDP. I have tried many
different mount settings and these are the best results I could
get. To test whether or not I am having network issues, I
transferred the same nfs.dat file over an HTTP connection and got
~32MB/sec -- almost 2x the speed of the TCP NFS connection. 32MB/sec
is about what I would expect given that my fastest local write speed is
~50MB/sec.
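As a sanity check, the throughput figures quoted above follow directly from the dd output (bytes divided by seconds):

```python
# Recompute the three dd results quoted above as MB/s.
def mb_per_s(nbytes: int, secs: float) -> float:
    return nbytes / secs / 1e6

local   = mb_per_s(1048576000, 19.580184)   # local write
nfs_udp = mb_per_s(1048576000, 101.151139)  # NFS over UDP
nfs_tcp = mb_per_s(1048576000, 59.668585)   # NFS over TCP
print(round(local, 1), round(nfs_udp, 1), round(nfs_tcp, 1))  # 53.6 10.4 17.6
```

So NFS over TCP reaches only about a third of the local write speed, while HTTP's ~32 MB/s sits roughly midway between the two.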

At this point I am stumped. I have tried increasing/changing the
number of nfsiod servers as well as nfs_bufpackets. No matter what
settings I change, the results are always the same. I see only two
errors. First, in /var/log/messages on the server, I have just begun
seeing:

Jun 22 21:13:47 crichton routed[666]: sendto(dc1, 224.0.0.2): Operation not permitted
Jun 22 21:13:47 crichton routed[666]: sendto(sk0, 224.0.0.2): Operation not permitted
Jun 22 21:13:50 crichton routed[666]: sendto(dc1, 224.0.0.2): Operation not permitted
Jun 22 21:13:50 crichton routed[666]: sendto(sk0, 224.0.0.2): Operation not permitted

This appeared after I added a route; however, I added the route after
many of the tests were done. I get the same results now as before the
new route. On one of the clients (the one running 6.2-RELEASE-p1), I
also get a nasty error:

nfs/tcp clnt: Error 60 reading socket, tearing down TCP connection

This cropped up last night after I tweaked some settings. They have
now been changed back, but I still get this error. The other client is
unaffected.

I appreciate any help people can provide on tracking down the
issues. Sorry about the long email -- just trying to be thorough. Of
course, I've searched the Internet and can't find any clear assistance
on these issues.

Cheers,
~Jason
___
freebsd-questions@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to [EMAIL PROTECTED]


Re: NFS Problems/Questions

2007-06-23 Thread Michael Smith

Hello Jason:

On Jun 23, 2007, at 9:34 AM, Jason Morgan wrote:


[quoted text trimmed]

We use the following settings on a mail cluster that's pushing about  
50 MB/sec sustained.


10.211.1.213:/m0/mail  /m0  nfs  rw,tcp,intr,noatime,nfsv3,-w=65536,-r=65536


# NFS Server
rpcbind_enable="YES"
rpc_lockd_enable="YES"
rpc_statd_enable="YES"
nfs_server_enable="YES"
nfs_server_flags="-u -t -n 16 -h 10.211.1.213"
mountd_flags="-r"

I would imagine the larger read/write values above would be fine for  
you as well, given you have Gigabit links.  The 'noatime' setting may  
be problematic depending on your application.  You might want to  
Google specifics on what 

Re: NFS Problems/Questions

2007-06-23 Thread Jason Morgan
On Sat, Jun 23, 2007 at 12:46:27PM -0700, Michael Smith wrote:
 [quoted text trimmed]

NFS Problems

2006-12-05 Thread Don O'Neil
I'm all of a sudden having this error pop up:

NFSPROC_NULL: RPC: Timed out

Both servers have talked with each other before, and I just rebooted them
both... What could be going on?
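For background (not from the original mail): NFSPROC_NULL is procedure 0 of the NFS RPC program, a no-op "ping" used to check that the server's nfsd answers at all, so "RPC: Timed out" means that ping got no reply. A sketch of what such a call looks like on the wire, following the ONC RPC call layout (RFC 5531); the xid value is arbitrary:

```python
import struct

# ONC RPC call header: xid, msg_type=CALL(0), rpcvers=2, program,
# version, procedure, followed by an AUTH_NONE credential and verifier
# (flavor=0, body length=0 for each).
def rpc_null_call(xid: int, prog: int, vers: int) -> bytes:
    header = struct.pack(">6I", xid, 0, 2, prog, vers, 0)  # proc 0 = NULL
    auth = struct.pack(">4I", 0, 0, 0, 0)                  # cred + verf
    return header + auth

msg = rpc_null_call(0x12345678, 100003, 3)  # 100003 = NFS, version 3
print(len(msg))  # 40-byte payload
```

A timeout means this request (or its reply) is being lost or refused somewhere between the two hosts, which is why running `rpcinfo -p <server>` from the client is a common first step after a reboot.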



Re: NFS problems!

2006-10-05 Thread Matthew King
Anders Troback [EMAIL PROTECTED] writes:

 [quoted text trimmed]

You're lucky. I can't even get X to load, and the whole VFS won't shut
down cleanly once I've tried.

I have yet to try other FreeBSD releases though.

Matthew

-- 
I must take issue with the term a mere child, for it has been my
invariable experience that the company of a mere child is infinitely
preferable to that of a mere adult.
   --  Fran Lebowitz



Re: NFS problems!

2006-10-04 Thread Lowell Gilbert
Anders Troback [EMAIL PROTECTED] writes:

 [quoted text trimmed]

There have been some discussions of locking problems; see the -net
list.  In this case, though, I would tend to go with a soft mount
anyway, which might reduce the symptoms considerably.
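A soft mount is set per-mount on the client; a hypothetical /etc/fstab line (the server name and path are examples, not from this thread):

```
fileserver:/home  /home  nfs  rw,soft,intr,nfsv3  0  0
```

With soft, a stalled RPC eventually returns an error instead of blocking the process forever, and intr additionally lets signals interrupt an operation that is already stuck.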


NFS problems!

2006-10-02 Thread Anders Troback
Hi,

I'm having some problems with NFS lately!

NFS server: FreeBSD 6.1-RELEASE
NFS client: FreeBSD 6.2-PRERELEASE (STABLE)

I'm using NFS to serve /home via amd, but sometimes programs hang and
not even kill -9 will work. I have to restart rpc.lockd, rpc.statd and
nfsd to get rid of the programs! If one program hangs, many will follow
and some will not start. Citrix Client Manager (wfcmgr), konqueror,
konsole and gftp are all examples of programs that don't start. Since
6.2-PRERELEASE, sometimes wfcmgr doesn't start even if there are no
programs hanging!

This problem first occurred in 6.1-STABLE but disappeared in
6.1-RELEASE, and since 6.1-RELEASE-p4 (not 100% sure if it was p4 or
p5) it's back!

Does anyone have any idea what's going on here?

Thanks!


Regards,
Anders Trobäck
Sweden
-- 


How many Microsoft employees does it take to screw in a light bulb?
None, they declare darkness a new standard.

Anders Trobäck
http://www.troback.com/
-


Re: Weird NFS problems

2005-05-31 Thread Skylar Thompson

Jon Dama wrote:


 Try switching to TCP NFS.

 A 100Mbit interface cannot keep up with a 1Gbit interface in a bridge
 configuration.  Therefore, in the long run, at full-bore you'd expect to
 drop 9 out of every 10 Ethernet frames.

 The MTU is 1500, therefore 1K works (it fits in one frame) and 2K doesn't
 (your NFS transactions are split across frames, one of which will almost
 certainly be dropped; since it's UDP, the loss of one frame invalidates
 the whole transaction).

 This is the same reason you can't use UDP with a block size greater than
 the MTU to run NFS over your DSL or some such arrangement.

 Incidentally, this has nothing to do with FreeBSD.  So if using TCP
 mounts solves your problem, don't expect Solaris NFS to magically make
 the UDP case work...
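A back-of-envelope sketch of that fragmentation argument (assuming, as a simplification, independent per-frame loss):

```python
# If each Ethernet frame is dropped independently with probability p,
# a UDP datagram split into k fragments only survives if every
# fragment does -- and NFS over UDP must then retry the whole transaction.
def datagram_success(p: float, k: int) -> float:
    return (1.0 - p) ** k

print(round(datagram_success(0.10, 1), 3))  # fits in one frame: 0.9
print(round(datagram_success(0.10, 6), 3))  # 8K block over 1500 MTU, ~6 frames: 0.531
```

At the 9-in-10 loss rate described above the multi-fragment case becomes hopeless, which is why only block sizes that fit a single frame worked.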
 



The thing is that UDP NFS has been working for us for years. A big part
of our work is performance analysis, so changing our network
architecture would invalidate a large part of our data.


--
-- Skylar Thompson ([EMAIL PROTECTED])
-- http://www.cs.earlham.edu/~skylar/





Re: Weird NFS problems

2005-05-31 Thread Jon Dama
Yes, but surely you weren't bridging gigabit and 100Mbit before?

Did you try my suggestion about binding the IP address of the NFS server
to the 100Mbit side?

-Jon

On Tue, 31 May 2005, Skylar Thompson wrote:

 [quoted text trimmed]




Re: Weird NFS problems

2005-05-31 Thread Skylar Thompson

Jon Dama wrote:

 Yes, but surely you weren't bridging gigabit and 100Mbit before?

 Did you try my suggestion about binding the IP address of the NFS server
 to the 100Mbit side?

Yeah. Unfortunately networking on the server fell apart when I did that. 
Traffic was still passed and I could get through to the server on the 
100Mb/s side, but not on the 1000Mb/s. It looked like the arp tables 
weren't being forwarded properly, but I couldn't convince FreeBSD to do 
proxy arp.


After doing some more poking around, it actually looks like it might be 
a misfeature in the Linux 2.4 kernel wrt ipfilter (which is running on 
the bridge). Apparently 2.4 fragments UDP packets in the reverse order 
from every other UNIX-like operating system, which throws off 
ipfilter's state tables. I'm going to do some testing to see if the 
difference between UDP and TCP NFS is negligible enough for us to disregard.


Thanks for the suggestions!

--
-- Skylar Thompson ([EMAIL PROTECTED])
-- http://www.cs.earlham.edu/~skylar/





Re: Weird NFS problems

2005-05-28 Thread Jon Dama
Oh, something else to try:

I checked through my notes and discovered that I had gotten UDP to work in
a similar configuration before.  What I did was bind the IP address to
fxp0 instead of em0.  By doing this, the kernel seems to send the data at
a pace suitable for the slow interface.
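[A sketch of what Jon's suggestion might look like in rc.conf. The address below is a made-up stand-in -- the thread only says the network is 234/24 -- so treat every value as hypothetical:]

```
# /etc/rc.conf (sketch; 192.168.234.1 is a hypothetical example address)
ifconfig_fxp0="inet 192.168.234.1 netmask 255.255.255.0"  # IP bound to the 100Mbit side
ifconfig_em0="up"                                         # bridged, no address of its own
```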



-Jon

On Fri, 27 May 2005, Don Lewis wrote:

 On 26 May, Skylar Thompson wrote:
  I'm having some problems with NFS serving on a FreeBSD 5.4-RELEASE
  machine. The FreeBSD machine is the NFS/NIS server for a group of four
  Linux clusters. The network architecture looks like this:
 
  234/24   234/24
  Cluster 1 ---|--- Cluster 3
 | ---
  em0|  File server | fxp0
 |  --
  Cluster 2 ---|--- Cluster 4
  234/24230/24
 
 
  em0 and fxp0 are bridged, and em0 has a 234/24 IP address while fxp0 is
  just in promiscuous mode. 234/24 is an 802.1q VLAN on the fxp0 side of
  the server, so packets are untagged at the switch just before fxp0, and
  are forwarded to em0 through the bridge.
 
  The problem manifests itself in large UDP NFS requests from Clusters 3
  and 4. The export can be mounted fine from both those clusters, and
  small transfers such as with ls work fine, but the moment any serious
  data transfer starts, the entire mount just hangs. Running ethereal on
  the file server shows a lot of fragmented packets, and RPC
  retransmissions on just a single request. Reducing the read and write
  NFS buffers on the Linux clients to 1kB from the default of 4kB solves
  the issue, but kills the transfer rate. The moment I go to 2kB, the
  problem reappears. Clusters 1 and 2 use the default of 4kB buffers, and
  have no problems communicating to em0.
 
  Poking through the list archives, I ran across this message
  (http://lists.freebsd.org/pipermail/freebsd-stable/2003-May/001007.html)
  that reveals a bug in the fxp(4) driver in 4-RELEASE that incorrectly
  detects the capabilities of the NIC. Is this still an issue in
  5-RELEASE, or am I looking at a different problem? Any ideas on how I
  can get the NFS buffers up to a reasonable level?

 That problem was fixed quite some time ago.

 Which transfer direction fails?
   Client writing to server
   Client reading from server
   Both?

 Do you see all the fragments in the retransmitted request?



Re: Weird NFS problems

2005-05-27 Thread Don Lewis
On 26 May, Skylar Thompson wrote:
 I'm having some problems with NFS serving on a FreeBSD 5.4-RELEASE 
 machine. The FreeBSD machine is the NFS/NIS server for a group of four 
 Linux clusters. The network architecture looks like this:
 
 234/24   234/24
 Cluster 1 ---|--- Cluster 3
| ---
 em0|  File server | fxp0
|  --
 Cluster 2 ---|--- Cluster 4
 234/24230/24
 
 
 em0 and fxp0 are bridged, and em0 has a 234/24 IP address while fxp0 is 
 just in promiscuous mode. 234/24 is an 802.1q VLAN on the fxp0 side of 
 the server, so packets are untagged at the switch just before fxp0, and 
 are forwarded to em0 through the bridge.
 
 The problem manifests itself in large UDP NFS requests from Clusters 3 
 and 4. The export can be mounted fine from both those clusters, and 
 small transfers such as with ls work fine, but the moment any serious 
 data transfer starts, the entire mount just hangs. Running ethereal on 
 the file server shows a lot of fragmented packets, and RPC 
 retransmissions on just a single request. Reducing the read and write 
 NFS buffers on the Linux clients to 1kB from the default of 4kB solves 
 the issue, but kills the transfer rate. The moment I go to 2kB, the 
 problem reappears. Clusters 1 and 2 use the default of 4kB buffers, and 
 have no problems communicating to em0.
 
 Poking through the list archives, I ran across this message 
 (http://lists.freebsd.org/pipermail/freebsd-stable/2003-May/001007.html) 
 that reveals a bug in the fxp(4) driver in 4-RELEASE that incorrectly 
 detects the capabilities of the NIC. Is this still an issue in 
 5-RELEASE, or am I looking at a different problem? Any ideas on how I 
 can get the NFS buffers up to a reasonable level?

That problem was fixed quite some time ago.

Which transfer direction fails?
Client writing to server
Client reading from server
Both?

Do you see all the fragments in the retransmitted request?



Re: Weird NFS problems

2005-05-27 Thread Jon Dama
Try switching to TCP NFS.

a 100MBit interface cannot keep up with a 1GBit interface in a bridge
configuration.  Therefore, in the long run, at full-bore you'd expect to
drop 9 out of every 10 ethernet frames.

MTU is 1500 therefore 1K works (it fits in one frame), 2K doesn't (your
NFS transactions are split across frames, one of which will almost
certainly be dropped, it's UDP so the loss of one frame invalidates the
whole transaction).

This is the same reason you can't use UDP with a block size greater than
MTU to use NFS over your DSL or some such arrangement.

Incidentally, this has nothing to do with FreeBSD.  So if using TCP
mounts solves your problem, don't expect Solaris NFS to magically make the
UDP case work...
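[Jon's arithmetic can be sketched numerically. This is an illustrative back-of-the-envelope model, not a measurement from Skylar's network: it assumes a 1500-byte MTU, ignores IP/UDP header overhead, and picks a 10% per-frame drop rate purely to show the trend.]

```python
import math

MTU = 1500  # bytes per Ethernet frame (header overhead ignored for simplicity)

def fragments(block_size):
    """Number of link-layer frames one UDP NFS transaction needs."""
    return math.ceil(block_size / MTU)

def rpc_success(block_size, frame_loss):
    """Probability that every fragment survives; losing any single
    fragment invalidates the whole UDP RPC, forcing a retransmit."""
    return (1 - frame_loss) ** fragments(block_size)

for bs in (1024, 2048, 4096, 32768):
    print(f"{bs:>6} bytes -> {fragments(bs):>2} frames, "
          f"success ~{rpc_success(bs, 0.10):.3f}")
```

[At 1kB each transaction fits in one frame, so a drop costs only that frame; at a 32kB block size every transaction spans 22 frames and almost never arrives intact, which is consistent with the hang Skylar describes.]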

-Jon

On Fri, 27 May 2005, Don Lewis wrote:

 On 26 May, Skylar Thompson wrote:
  I'm having some problems with NFS serving on a FreeBSD 5.4-RELEASE
  machine. The FreeBSD machine is the NFS/NIS server for a group of four
  Linux clusters. The network architecture looks like this:
 
  234/24   234/24
  Cluster 1 ---|--- Cluster 3
 | ---
  em0|  File server | fxp0
 |  --
  Cluster 2 ---|--- Cluster 4
  234/24230/24
 
 
  em0 and fxp0 are bridged, and em0 has a 234/24 IP address while fxp0 is
  just in promiscuous mode. 234/24 is an 802.1q VLAN on the fxp0 side of
  the server, so packets are untagged at the switch just before fxp0, and
  are forwarded to em0 through the bridge.
 
  The problem manifests itself in large UDP NFS requests from Clusters 3
  and 4. The export can be mounted fine from both those clusters, and
  small transfers such as with ls work fine, but the moment any serious
  data transfer starts, the entire mount just hangs. Running ethereal on
  the file server shows a lot of fragmented packets, and RPC
  retransmissions on just a single request. Reducing the read and write
  NFS buffers on the Linux clients to 1kB from the default of 4kB solves
  the issue, but kills the transfer rate. The moment I go to 2kB, the
  problem reappears. Clusters 1 and 2 use the default of 4kB buffers, and
  have no problems communicating to em0.
 
  Poking through the list archives, I ran across this message
  (http://lists.freebsd.org/pipermail/freebsd-stable/2003-May/001007.html)
  that reveals a bug in the fxp(4) driver in 4-RELEASE that incorrectly
  detects the capabilities of the NIC. Is this still an issue in
  5-RELEASE, or am I looking at a different problem? Any ideas on how I
  can get the NFS buffers up to a reasonable level?

 That problem was fixed quite some time ago.

 Which transfer direction fails?
   Client writing to server
   Client reading from server
   Both?

 Do you see all the fragments in the retransmitted request?



Weird NFS problems

2005-05-26 Thread Skylar Thompson
I'm having some problems with NFS serving on a FreeBSD 5.4-RELEASE 
machine. The FreeBSD machine is the NFS/NIS server for a group of four 
Linux clusters. The network architecture looks like this:


234/24                          234/24
Cluster 1 ---|                |--- Cluster 3
             |  ------------  |
          em0|  File server   |fxp0
             |  ------------  |
Cluster 2 ---|                |--- Cluster 4
234/24                          230/24


em0 and fxp0 are bridged, and em0 has a 234/24 IP address while fxp0 is 
just in promiscuous mode. 234/24 is an 802.1q VLAN on the fxp0 side of 
the server, so packets are untagged at the switch just before fxp0, and 
are forwarded to em0 through the bridge.


The problem manifests itself in large UDP NFS requests from Clusters 3 
and 4. The export can be mounted fine from both those clusters, and 
small transfers such as with ls work fine, but the moment any serious 
data transfer starts, the entire mount just hangs. Running ethereal on 
the file server shows a lot of fragmented packets, and RPC 
retransmissions on just a single request. Reducing the read and write 
NFS buffers on the Linux clients to 1kB from the default of 4kB solves 
the issue, but kills the transfer rate. The moment I go to 2kB, the 
problem reappears. Clusters 1 and 2 use the default of 4kB buffers, and 
have no problems communicating to em0.


Poking through the list archives, I ran across this message 
(http://lists.freebsd.org/pipermail/freebsd-stable/2003-May/001007.html) 
that reveals a bug in the fxp(4) driver in 4-RELEASE that incorrectly 
detects the capabilities of the NIC. Is this still an issue in 
5-RELEASE, or am I looking at a different problem? Any ideas on how I 
can get the NFS buffers up to a reasonable level?


--
-- Skylar Thompson ([EMAIL PROTECTED])
-- http://www.cs.earlham.edu/~skylar/





NFS problems

2004-11-05 Thread Karel Miklav
I have a FreeBSD 5.3RC1 and a Mandrake 10 machine connected over NFS. The server is 
Mandrake and FreeBSD is only client. Transfer rate for files is great, 
but scanning folders on the NFS mount is unbearably slow. CPU usage on 
both machines is close to 0%, on Mandrake I can see a nfsd daemon or two 
fired up from time to time, client is waiting in the loop. What could be 
wrong?

Regards,
Karel


Re: NFS problems

2004-11-05 Thread Jakob Breivik Grimstveit
Karel Miklav wrote:

 I have a FreeBSD 5.3RC1 and a Mandrake 10 machine connected over NFS. The server is 
 Mandrake and FreeBSD is only client. Transfer rate for files is great, 
 but scanning folders on the NFS mount is unbearably slow. CPU usage on 
 both machines is close to 0%, on Mandrake I can see a nfsd daemon or two 
 fired up from time to time, client is waiting in the loop.

Might it be that you are having issues with different MTU size?

-- 
Jakob Breivik Grimstveit, http://www.grimstveit.no/jakob, +47 48298152

Use Newsergalleriet! Now at http://www.newsergalleriet.no/
Need something on a CD?: http://www.grimstveit.no/jakob/burncd_no



NFS problems

2003-07-09 Thread Mark Woodson
I'm having trouble getting NFS exports to work properly on FreeBSD.  I'm 
running 4.8-STABLE as of July 3.

It's acting like the problem is in /etc/exports.

squelcher# less /etc/exports
/usr/src    -ro -maproot=0  cad1 squelcher lappy
/usr/obj    -ro -maproot=0  cad1 squelcher lappy

/usr/src will export and is mountable
/usr/obj is not

squelcher# ls -ls /usr
total 54
[edited for brevity]
 2 drwxr-xr-x   3 root  wheel   512 Jun  5 10:45 obj
 2 drwxr-xr-x  21 root  wheel   512 Jun  9 15:54 src

on squelcher (the nfs server) I see this in messages when I start/sighup 
mountd.

squelcher# tail  /var/log/messages
Jul  9 13:09:00 squelcher mountd[16110]: can't change attributes for /usr/obj
Jul  9 13:09:00 squelcher mountd[16110]: bad exports list line /usr/obj -ro 
-maproot
Jul  9 13:09:29 squelcher mountd[16110]: mount request denied from 10.1.2.60 
for /usr/obj

and Access denied messages on lappy (the nfs client).

The permissions look the same, the exports lines are the same, and I must be 
missing something but I can't figure out what it is.

-Mark



Re: NFS Problems...

2003-06-05 Thread supote
 There shouldn't BE a /home2 on HTTPD but I figured out what happened. I
 copied /etc/group /etc/passwd /etc/pwd.db and /etc/master.passwd from NFSD
 to HTTPD to sync users and passwords and forgot to edit the home dir with
 vipw. I just did a global search and replace of /home2 to /home and
 rebuilt the pwd.db and it mounted fine and apache now serves public_html
 from the users' shells.

 However, on reboot it doesn't mount from /etc/fstab, I have to mount it
 manually for some reason. So now I'm down to the one NFS problem.

 on NFSD: (/etc/exports)
 /home2   -maproot=0 -alldirs httpd

 on HTTPD: (/etc/fstab)
 NFSD:/home2  /home   nfs rw,bg   0   0

 manually mounting works
 mount NFSD:/home2 /home

 Works fine until I reboot. Shouldn't it mount by itself like it used to?
 What am I missing now? Why doesn't HTTPD mount NFSD:/home2 on /home when
 it reboots?

 TIA

Hi,
   In your fstab, what is the purpose of the bg option? I have never seen
this option before. Make sure such an option is usable.

Pote
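[For the record, bg is a documented mount_nfs option on FreeBSD: if the first mount attempt fails (for example because the network is not up yet at boot), mount keeps retrying in the background instead of blocking. So the option itself is a reasonable thing to have in jle's fstab line rather than the likely culprit:]

```
# /etc/fstab on HTTPD (line as given in the thread) -- "bg" retries a
# failed boot-time mount in the background rather than hanging the boot
NFSD:/home2  /home   nfs rw,bg   0   0
```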




Re: NFS Problems...

2003-06-05 Thread Joshua Oreman
On Thu, Jun 05, 2003 at 01:03:17AM +0200 or thereabouts, Bernd Walter seemed to write:
 On Wed, Jun 04, 2003 at 02:21:29PM -0700, jle wrote:
  I retired my old p200 fbsd 4.4-stable web server and built a newer box for
  it. I used to mount the /home2 dir from my nfs server (fbsd 5.1-current)
  to /home on the webserver and it used to work fine but now it doesn't
  mount  /home2 on /home on boot up. I can manually mount it but then it
  gets confused and thinks it's mounted on /home2 when it's not. Evidently
  something must have changed since 4.4-S because it worked until today.
  
  on NFSD: (/etc/exports)
  /home2   -maproot=0 -alldirs httpd
  
  on HTTPD: (/etc/fstab)
  NFSD:/home2  /home   nfs rw,bg   0   0
  
  manually mounting
  mount NFSD:/home2 /home
  
  [EMAIL PROTECTED]:~ -13:55:06- # cd ~dkdesign
  -su: cd: /home2/dkdesign: No such file or directory
 
 Not surprising, because you mounted on /home not /home2.
 
  [EMAIL PROTECTED]:~ -13:58:45- # cd /home/dkdesign/
  [EMAIL PROTECTED]:/home/dkdesign -14:02:21- # ls -al
  drwxr-xr-x   2 dkdesign  dkdesign  512 Mar 13 09:15 public_html/
 
 Yes - that's /home, only /home2 is failing...
 Works as designed.
 
  From /var/log/httpd-error.log:
  [Wed Jun  4 13:56:45 2003] [error] [client xxx.xxx.xxx.xxx] File does not
  exist: /home2/dkdesigns/public_html/
  
  I don't get it. Any help?
 
 ed /etc/fstab
 /home2
 s/home/home2/

This should be
s/home2/home/
or it'll have a line with home22

-- Josh

 w
 q
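[Josh's correction can be checked mechanically. Using the fstab line from earlier in the thread, Bernd's s/home/home2/ hits the first "home" (inside "home2") and produces "home22", while Josh's s/home2/home/ is at least a well-formed substitution -- whether rewriting the path is the right fix for the boot-time mount is a separate question. sed is used here in place of ed for a non-interactive demonstration:]

```shell
line='NFSD:/home2  /home   nfs rw,bg   0   0'

# Bernd's substitution: the first "home" on the line is inside "home2",
# so the remote path is mangled into NFSD:/home22
printf '%s\n' "$line" | sed 's/home/home2/'

# Josh's corrected substitution
printf '%s\n' "$line" | sed 's/home2/home/'
```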
 
 -- 
 B.Walter   BWCThttp://www.bwct.de
 [EMAIL PROTECTED]  [EMAIL PROTECTED]
 


NFS problems -- performance degraded after upgrade to 4.7

2002-12-31 Thread Daniel Schrock
Hello all,

I'm having some problems with nfs performance.  Here is a bit of background.
I have a fileserver running FreeBSD 4.7-STABLE.  This has a 3ware
Escalade with 4 x 80GB Maxtors in a RAID 5 config.  I am exporting
/usr/ports, among other things - 5 exports total to 2 clients.  The
clients are all running 4.7-Stable as well.  Under 4.6.2, I was able to
install ports with no problems on the clients.  Now, patches don't seem
to always work, files aren't always getting created, and 'Server
Ret-Failed' from nfsstat -s increments quite rapidly.

I hope there aren't any problems following this.

# relevant info section:




###Fileserver###

13:41:51:d_jab@aluminum (/dev/ttyp5): 15  ~

grep nfs /etc/rc.conf

nfs_reserved_port_only="YES"
nfs_server_enable="YES"
nfs_server_flags="-u -t -n 20 -h 192.168.23.200"

13:40:12:d_jab@aluminum (/dev/ttyp5): 12  ~

grep twed /etc/fstab

/dev/twed0s1b   none                 swap  sw                       0  0
/dev/twed0s1h   /home                ufs   rw,noatime,nosuid,nodev  2  2
/dev/twed0s1e   /usr                 ufs   rw,noatime               2  2
/dev/twed0s1d   /usr/local/mp3       ufs   rw,noatime,nosuid,nodev  2  2
/dev/twed0s1a   /usr/local/storage   ufs   rw,noatime               2  2
/dev/twed0s1g   /usr/ports           ufs   rw,noatime               2  2
/dev/twed0s1f   /usr/src             ufs   rw,noatime               2  2

13:40:23:d_jab@aluminum (/dev/ttyp5): 13  ~

cat /etc/exports

/usr/src            -maproot=root  -network 192.168.23 -mask 255.255.255.0
/usr/ports          -maproot=root  -network 192.168.23 -mask 255.255.255.0
/home               -maproot=root  -network 192.168.23 -mask 255.255.255.0
/usr/local/mp3      -maproot=root  -network 192.168.23 -mask 255.255.255.0
/usr/local/storage  -maproot=root  -network 192.168.23 -mask 255.255.255.0

13:40:58:d_jab@aluminum (/dev/ttyp5): 14  ~

nfsstat -s


Server Info:
  Getattr   SetattrLookup  Readlink  Read WriteCreate
 Remove
 5749 6717648013560191927114932 40335
  15174
   Rename  Link   Symlink Mkdir Rmdir   Readdir  RdirPlus
 Access
23627 9   192   909   73947 40300
 661793
MknodFsstatFsinfo  PathConfCommitGLeaseVacate
  Evict
0 5311810 0 53650 0 0
  0
Server Ret-Failed
   242570
Server Faults
0
Server Cache Stats:
   Inprog  Idem  Non-idemMisses
0 4 0   803
Server Lease Stats:
   Leases PeakL   GLeases
0 0 0
Server Write Gathering:
 WriteOps  WriteRPC   Opsaved
   11490611493226


## Client ##


13:43:50:d_jab@carbon (/dev/ttyp2): 6  ~

grep nfs /etc/rc.conf

nfs_reserved_port_only="YES"
nfs_client_enable="YES"
nfs_client_flags="-n 6"

12:59:55:d_jab@carbon (/dev/ttyp2): 4  ~

grep nfs /etc/fstab

al:/home                /home                nfs  rw,nfsv3,tcp,intr,bg,rdirplus,noatime,-r=32768,-w=32768  0  0
al:/usr/ports           /usr/ports           nfs  rw,nfsv3,tcp,intr,bg,rdirplus,noatime,-r=32768,-w=32768  0  0
al:/usr/src             /usr/src             nfs  rw,nfsv3,tcp,intr,bg,rdirplus,noatime,-r=32768,-w=32768  0  0
al:/usr/local/mp3       /usr/local/mp3       nfs  rw,nfsv3,tcp,intr,bg,rdirplus,noatime,-r=32768,-w=32768  0  0
al:/usr/local/storage   /usr/local/storage   nfs  rw,nfsv3,tcp,intr,bg,rdirplus,noatime,-r=32768,-w=32768  0  0

13:42:42:d_jab@carbon (/dev/ttyp2): 5  ~

nfsstat -c

Client Info:
Rpc Counts:
  Getattr   SetattrLookup  Readlink  Read WriteCreate
 Remove
  27410045059600363209637193806 55717
  26701
   Rename  Link   Symlink Mkdir Rmdir   Readdir  RdirPlus
 Access
2993927   195  1088  1084 0 43879
 774525
MknodFsstatFsinfo  PathConfCommitGLeaseVacate
  Evict
0 5665116 0 41432 0 0
  0
Rpc Info:
 TimedOut   Invalid X Replies   Retries  Requests
0 010   617   2131487
Cache Info:
Attr HitsMisses Lkup HitsMisses BioR HitsMisses BioW Hits
 Misses
  8475243579261   2735547595897   4225460198743320897
 193806
BioRLHitsMisses BioD HitsMisses DirE HitsMisses
  71863 69134 41254 5036714


#

Does anyone have any advice?
Let me know if you need more info.

Thanks
.daniel


To Unsubscribe: send mail to [EMAIL PROTECTED]
with unsubscribe freebsd-questions in the body of the message



Re: NFS Problems FreeBSD -- Solaris

2002-09-21 Thread Radko Keves

;), Thu, Sep 19, 2002 at 05:58:43PM +, Weston M. Price said that
hi all
i ran into a similar problem, but with IPv6
my box and the solaris box were communicating over IPv6, but nfs was not ;(
try setting the IPs in IPv4 format, not IPv6 or a hostname
for example, mount not kripel.studnet.sk but 193.87.12.67, and so on
 Hello,
   I am attempting to mount a few directories from my Solaris machine(s) to my 
 FreeBSD workstation. nfsd is clearly running on Solaris and sharing the 
 directories is not a problem. When I attempt to mount the directories on 
 FreeBSD I get the following error: 
 
 damascus:/usr/wmprice: RPCMNT: clnt_create: RPC: Program not registered
that's it
send me your /etc/exports if i'm wrong
replace the hostnames with their IPv4 addresses
 
 A simple ps -x | egrep shows that nfsiod is running
 
 ps -x | egrep nfsiod
 
 98  ??  I  0:00.00 nfsiod -n 4
 99  ??  I  0:00.00 nfsiod -n 4
 100  ??  I  0:00.00 nfsiod -n 4
 101  ??  I  0:00.00 nfsiod -n 4
 
 I have this configured to begin at startup. 
 
 So, what am I doing wrong? This would seem to me to be a pretty simple 
 procedure. Any help would be appreciated. 
 
 Weston
 

bye
-- 
17:08  up 3 days, 19:49, 16 users, load averages: 0,15 0,07 0,02
--
FreeBSD 5.0-CURRENT #15: root@kripel:/usr/src/sys/i386/compile/angel
--
powered by [EMAIL PROTECTED]




Re: NFS Problems FreeBSD -- Solaris

2002-09-19 Thread Dan Nelson

In the last episode (Sep 19), Weston M. Price said:
   I am attempting to mount a few directories from my Solaris
 machine(s) to my FreeBSD workstation. nfsd is clearly running on
 Solaris and sharing the directories is not a problem. When I
 attempt to mount the directories on FreeBSD I get the following
 error:
 
 damascus:/usr/wmprice: RPCMNT: clnt_create: RPC: Program not registered

Make sure rpcbind, mountd, and nfsd are running on the Solaris box.  If
they are, what does the output of rpcinfo -p damascus print?

-- 
Dan Nelson
[EMAIL PROTECTED]
