Terry Lambert wrote:
I am now going to predict that your ethernet is multihomed,
and that you have more than one IP address on the server, the
client, or both.
That is true, I de-multihomed the server but the problem persists.
The client is not multihomed. The server also has INET6 in the kernel
th
Pawel Worach wrote:
> Here is some more information.
> I realized that I had TCP and UDP "blackholing" enabled on the server so I
> disabled that, still no dice.
> disabled rpc.statd and rpc.lockd, still no dice.
[ ... ]
> So it looks like what I said before, only TCP seems to cause this.
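For anyone retracing these steps, the "blackholing" knobs referred to above are the stock FreeBSD sysctls; a sketch of the test sequence (daemon names are the standard rc ones, assumed from context):

```shell
# Disable TCP/UDP blackholing so the server again answers segments/datagrams
# to closed ports (these are the stock FreeBSD sysctl names).
sysctl net.inet.tcp.blackhole=0
sysctl net.inet.udp.blackhole=0

# Stop the NFS locking daemons for the duration of the test.
killall rpc.lockd rpc.statd 2>/dev/null || true
```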
I am now going to predict that your ethernet is multihomed, and that you
have more than one IP address on the server, the client, or both.
Jason Stone wrote:
> We actually had this discussion already over on -performance (and I get
> what you're saying), but the interesting question here is, why is 5.1
> behaving so differently from 4-stable on identical hardware under
> identical load.
Because an absolute ton of code was rewritten.
Pawel Worach wrote:
Robert Watson wrote:
On Wed, 27 Aug 2003, Pawel Worach wrote:
Ok, so let me see if I have the sequence of events straight:
Hope this is not as confusing as my previous mail :)
Here is some more information.
I realized that I had TCP and UDP "blackholing" enabled on the server so I
disabled that, still no dice.
On Thu, 28 Aug 2003, Alexander Leidinger wrote:
> There's no lockd running, only the statd on the server, so we already
> can rule out the lockd.
You probably want to shut down statd on the server as well. Since statd
is only used to recover locks on reboots, it is of no use without lockd,
and I
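To rule both daemons out persistently, the corresponding rc.conf knobs can be switched off on the server; a minimal sketch using the standard FreeBSD knob names:

```shell
# /etc/rc.conf on the server: keep both NFS lock daemons off.
# statd only recovers locks on behalf of lockd after a reboot,
# so without lockd it serves no purpose.
rpc_lockd_enable="NO"
rpc_statd_enable="NO"
```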
On Thu, 28 Aug 2003 08:54:07 -0400 (EDT)
Robert Watson <[EMAIL PROTECTED]> wrote:
> Ok, so let me see if I have the sequence of events straight:
>
> (1) Boot a 4.8-RELEASE/STABLE NFS server
> (2) Boot a 5.1-RELEASE/CURRENT NFS client
> (3) Mount a file system using TCP NFSv3
> (4) Reboot the client system, reboot, and remount
Robert Watson wrote:
On Wed, 27 Aug 2003, Pawel Worach wrote:
Ok, so let me see if I have the sequence of events straight:
(1) Boot a 4.8-RELEASE/STABLE NFS server
(2) Boot a 5.1-RELEASE/CURRENT NFS client
(3) Mount a file system using TCP NFSv3
(4) Reboot the client system, reboot, and remount
(
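Step (3) above corresponds to a mount like the following (a sketch; the server name and export path are placeholders):

```shell
# Mount an NFSv3 export over TCP: -T selects TCP, -3 forces NFSv3.
mount_nfs -T -3 server:/export /mnt

# Equivalent /etc/fstab entry:
# server:/export  /mnt  nfs  rw,tcp,nfsv3  0  0
```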
On Thu, 28 Aug 2003, Terry Lambert wrote:
> Pawel Worach wrote:
> [ ... subject ... ]
>
> > This only seems to happen for NFS over TCP.
>
> That's strange; most of the problems I've ever seen are from using UDP,
> large read/write sizes, and then dropping one packet out of a bunch of
> frags caused by the MTU being much smaller than the read/write size
> > I'm also seeing a similar problem - I have a cluster of high-volume
> > mailservers delivering mail over nfs to maildirs on a netapp. The cluster
> > was all 4-stable, but I decided to mix a couple of 5.1 boxes in to see how
> > they would do.
[ ... ]
On Wed, 27 Aug 2003, Pawel Worach wrote:
> I get the errors every time the nfs mounts are not unmounted "cleanly":
> if the client (which is a laptop, and I often forget to plug in the power,
> so the battery dies) dies and the server is rebooted, the client boots
> fine, i.e. no "nfs server not responding"
Jason Stone wrote:
> I'm also seeing a similar problem - I have a cluster of high-volume
> mailservers delivering mail over nfs to maildirs on a netapp. The cluster
> was all 4-stable, but I decided to mix a couple of 5.1 boxes in to see how
> they would do.
>
> The 5.1 boxes accepted and queued
Pawel Worach wrote:
[ ... subject ... ]
> This only seems to happen for NFS over TCP.
That's strange; most of the problems I've ever seen are from
using UDP, large read/write sizes, and then dropping one packet
out of a bunch of frags caused by the MTU being much smaller
than the read/write size (m
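The UDP failure mode described here, where one lost IP fragment costs the whole multi-fragment RPC, is conventionally worked around by shrinking the transfer size on the mount; a hedged sketch (paths are placeholders):

```shell
# Over UDP, keep rsize/wsize (-r/-w) small so a single lost fragment
# does not force retransmission of a large multi-fragment NFS RPC.
mount_nfs -3 -r 8192 -w 8192 server:/export /mnt
```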
Robert Watson wrote:
> I have a very similar configuration, but it sounds like I'm not bumping
> into the same problem. Are you using NFSv2 or v3, and how many file
> systems are you mounting? Are you generally using UFS1 or UFS2? Right
> now, I'm mounting a single UFS2 file system as the root,
On Wed, 27 Aug 2003 09:06:41 -0400 (EDT)
Robert Watson <[EMAIL PROTECTED]> wrote:
> I have a very similar configuration, but it sounds like I'm not bumping
> into the same problem. Are you using NFSv2 or v3, and how many file
> systems are you mounting? Are you generally using UFS1 or UFS2? Right
> In this configuration I see a lot of "nfs server ...: is not responding"
> and "nfs server ...: is alive again" when I copy large files (e.g. a CD
> image). All of them happen in the same second. I haven't looked at the
> state or priority of the cp process when this happens.
On Wed, Aug 27, 2003 at 09:06:41AM -0400, Robert Watson wrote:
> I have a very similar configuration, but it sounds like I'm not bumping
> into the same problem. Are you using NFSv2 or v3, and how many file
> systems are you mounting? Are you generally using UFS1 or UFS2? Right
> now, I'm mounting a single UFS2 file system as the root,
Robert Watson wrote:
I have a very similar configuration, but it sounds like I'm not bumping
into the same problem. Are you using NFSv2 or v3, and how many file
systems are you mounting? Are you generally using UFS1 or UFS2? Right
now, I'm mounting a single UFS2 file system as the root, and I
On Wed, 27 Aug 2003, Pawel Worach wrote:
> >In this configuration I see a lot of "nfs server ...: is not responding"
> >and "nfs server ...: is alive again" when I copy large files (e.g. a CD
> >image). All of them happen in the same second. I haven't looked at the
> >state or priority of the cp process when this happens.
Alexander Leidinger wrote:
In this configuration I see a lot of "nfs server ...: is not responding"
and "nfs server ...: is alive again" when I copy large files (e.g. a CD
image). All of them happen in the same second. I haven't looked at the
state or priority of the cp process when this happens.
On Sun, 24 Aug 2003 23:59:58 -0400
Bill Moran <[EMAIL PROTECTED]> wrote:
> Mike B wrote:
> > I'm running an nfs server from a FreeBSD 4.8 box and accessing it from a
> > 5.1 client machine. On small transfers I usually have no problems but
> > when I run a high bandwidth task (normalizing audio tracks) the
Mike B wrote:
I'm running an nfs server from a FreeBSD 4.8 box and accessing it from a
5.1 client machine. On small transfers I usually have no problems but
when I run a high bandwidth task (normalizing audio tracks) the
normalize process often gets stuck in the getblk state or nfsread
state.