Hi,
When pho@ was doing some NFS testing, he got the
following crash, which I can't figure out. (As far
as I can see, INP_WLOCK() is always held when
tp->t_state = TCPS_CLOSED and it is held from before
the test for TCPS_CLOSED in tcp_input() up until
the tcp_do_segment() call. As such, I don't
,
__func__));
Or, if there's someone more familiar with the locking in these paths,
they may be able to come up with a way to restructure the locks and
logic to close this window.
-Matt
On Thu, Apr 4, 2013 at 6:33 PM, Rick Macklem rmack...@uoguelph.ca
David Wolfskill wrote:
On Wed, Aug 21, 2013 at 07:12:38PM +0200, Andre Oppermann wrote:
On 13.08.2013 19:29, Julian Elischer wrote:
I have been tracking down a performance embarrassment on AMAZON
EC2 and have found it, I think.
Our OS cousins over at Linux land have implemented some
Martin Laabs wrote:
Hi,
I set up a nfsv4 server with kerberos but when starting the nfs
server on
the arm (RBI-B) board I get the following error message and the first
(managing part) of the nfs exits:
nfsd: can't register svc name
This error message is produced by the following code
, am hoping it makes it into 10.0,
but it may be too late.)
rick
---Mike
BC
From: Mike Tancsa m...@sentex.net
To: Rick Macklem rmack...@uoguelph.ca
Cc: FreeBSD Net n...@freebsd.org; David
George Neville-Neil wrote:
On Aug 29, 2013, at 7:49 , Adrian Chadd adr...@freebsd.org wrote:
Hi,
There's a lot of good stuff to review here, thanks!
Yes, the ixgbe RX lock needs to die in a fire. It's kinda pointless
to keep
locking things like that on a per-packet basis. We
Sam Fourman Jr. wrote:
And any time you increase latency, that will have a negative impact
on
NFS performance. NFS RPCs are usually small messages (except Write
requests
and Read replies) and the RTT for these (mostly small,
bidirectional)
messages can have a significant impact
Christopher Forgeron wrote:
Hello,
I can report this problem as well on 10.0-RELEASE.
I think it's the same as kern/183390?
I have two physically identical machines, one running 9.2-STABLE, and
one
on 10.0-RELEASE.
My 10.0 machine used to be running 9.0-STABLE for
actl=65542
Mar 20 16:37:23 SAN0 kernel: before pklen=65542 actl=65542
Mar 20 16:37:23 SAN0 kernel: after mbcnt=33 pklen=65542 actl=65
On Wed, Mar 19, 2014 at 11:29 PM, Rick Macklem rmack...@uoguelph.ca
wrote:
Christopher Forgeron wrote:
Hello,
I can report
Christopher Forgeron wrote:
On Thu, Mar 20, 2014 at 7:40 AM, Markus Gebert
markus.geb...@hostpoint.ch wrote:
Possible. We still see this on nfsclients only, but I’m not convinced
that nfs is the only trigger.
Since Christopher is getting a bunch of the before
Markus Gebert wrote:
On 20.03.2014, at 14:51, woll...@bimajority.org wrote:
In article 21290.60558.750106.630...@hergotha.csail.mit.edu, I
wrote:
Since we put this server into production, random network system
calls
have started failing with [EFBIG] or maybe sometimes [EIO].
for a long time, and it's only on this
10.0
build that I have issues.
I'll be following up tomorrow with info on either outcome.
Thanks for your help.. your rusty networking is still better than
mine. :-)
On Thu, Mar 20, 2014 at 11:13 PM, Rick Macklem rmack...@uoguelph.ca
wrote
Christopher Forgeron wrote:
(Pardon me, for some reason my gmail is sending on my cut-n-pastes if
I cr
down too fast)
First set of logs:
Mar 21 11:07:00 SAN0 kernel: before pklen=65542 actl=65542 csum=4116
Ok, so this isn't a TSO segment then, unless I don't understand how
the csum flags
Christopher Forgeron wrote:
Hello all,
I ran Jack's ixgbe MJUM9BYTES removal patch, and let iometer hammer
away at the NFS store overnight - But the problem is still there.
From what I read, I think the MJUM9BYTES removal is probably good
cleanup (as long as it doesn't trade
patching into 10.0, I'll move into the
base
10.0 code tomorrow.
On Fri, Mar 21, 2014 at 8:44 PM, Rick Macklem rmack...@uoguelph.ca
wrote:
Christopher Forgeron wrote:
Hello all,
I ran Jack's ixgbe MJUM9BYTES removal patch, and let iometer
hammer
away at the NFS
Christopher Forgeron wrote:
Status Update: Hopeful, but not done.
So the 9.2-STABLE ixgbe with Rick's TSO patch has been running all
night
while iometer hammered away at it. It's got over 8 hours of test time
on
it.
It's still running, the CPU queues are not clogged, and everything is
- I assume others assumed this.
On Fri, Mar 21, 2014 at 11:39 PM, Rick Macklem rmack...@uoguelph.ca
wrote:
Christopher Forgeron wrote:
It may be a little early, but I think that's it!
It's been running without error for nearly an hour - It's very rare
it
would go
of
9000 - I assume others assumed this.
Christopher Forgeron wrote:
On Sat, Mar 22, 2014 at 6:41 PM, Rick Macklem rmack...@uoguelph.ca
wrote:
Christopher Forgeron wrote:
#if defined(INET) || defined(INET6)
/* Initialize to max value. */
if (ifp->if_hw_tsomax == 0)
ifp->if_hw_tsomax = IP_MAXPACKET;
KASSERT
the new ip_len with ip->ip_len or other variables?
On Sun, Mar 23, 2014 at 12:25 PM, Christopher Forgeron
csforge...@gmail.com wrote:
On Sat, Mar 22, 2014 at 11:58 PM, Rick Macklem rmack...@uoguelph.ca
wrote:
Christopher Forgeron wrote:
Also should
Julian Elischer wrote:
On 3/23/14, 4:57 PM, Rick Macklem wrote:
Christopher Forgeron wrote:
On Sat, Mar 22, 2014 at 6:41 PM, Rick Macklem
rmack...@uoguelph.ca
wrote:
Christopher Forgeron wrote:
#if defined(INET) || defined(INET6)
/* Initialize to max value
Christopher Forgeron wrote:
I'm going to split this into different posts to focus on each topic.
This
is about setting IP_MAXPACKET to 65495
Update on Last Night's Run:
(Last night's run is a kernel with IP_MAXPACKET = 65495)
- Uptime on this run: 10:53AM up 13:21, 5 users, load
Markus Gebert wrote:
On 24.03.2014, at 16:21, Christopher Forgeron csforge...@gmail.com
wrote:
This is regarding the TSO patch that Rick suggested earlier. (With
many
thanks for his time and suggestion)
As I mentioned earlier, it did not fix the issue on a 10.0 system.
It did
Christopher Forgeron wrote:
This is regarding the TSO patch that Rick suggested earlier. (With
many thanks for his time and suggestion)
As I mentioned earlier, it did not fix the issue on a 10.0 system. It
did make it less of a problem on 9.2, but either way, I think it's
not
Julian Elischer wrote:
- Original Message -
I wrote (and snipped):
Other drivers (and ixgbe for the 82598 chip) can handle a packet that
is in more than 32 mbufs. (I think the 82598 handles 100, grep for
SCATTER
in *.h in sys/dev/ixgbe.)
the Xen backend cannot handle more than 32
Markus Gebert wrote:
On 25.03.2014, at 02:18, Rick Macklem rmack...@uoguelph.ca wrote:
Christopher Forgeron wrote:
This is regarding the TSO patch that Rick suggested earlier. (With
many thanks for his time and suggestion)
As I mentioned earlier, it did not fix
Markus Gebert wrote:
On 25.03.2014, at 22:46, Rick Macklem rmack...@uoguelph.ca wrote:
Markus Gebert wrote:
On 25.03.2014, at 02:18, Rick Macklem rmack...@uoguelph.ca
wrote:
Christopher Forgeron wrote:
This is regarding the TSO patch that Rick suggested earlier
Hi,
First off, I hope you don't mind that I cross-posted this,
but I wanted to make sure both the NFS/iSCSI and networking
types see it.
If you look in this mailing list thread:
http://docs.FreeBSD.org/cgi/mid.cgi?1850411724.1687820.1395621539316.JavaMail.root
you'll see that several people
Christopher Forgeron wrote:
Update:
I'm changing my mind, and I believe Rick's TSO patch is fixing
things
(sorry). In looking at my notes, it's possible I had lagg on for
those
tests. lagg does seem to negate the TSO patch in my case.
Ok, that's useful information. It implies that
Markus Gebert wrote:
On 26.03.2014, at 00:06, Christopher Forgeron csforge...@gmail.com
wrote:
Update:
I'm changing my mind, and I believe Rick's TSO patch is fixing
things
(sorry). In looking at my notes, it's possible I had lagg on for
those
tests. lagg does seem to negate
pyu...@gmail.com wrote:
On Tue, Mar 25, 2014 at 07:10:35PM -0400, Rick Macklem wrote:
Hi,
First off, I hope you don't mind that I cross-posted this,
but I wanted to make sure both the NFS/iSCSI and networking
types see it.
If you look in this mailing list thread:
http
Christopher Forgeron wrote:
On Tue, Mar 25, 2014 at 8:21 PM, Markus Gebert
markus.geb...@hostpoint.ch wrote:
Is 65517 correct? With Rick's patch, I get this:
dev.ix.0.hw_tsomax: 65518
Perhaps a difference between 9.2 and 10 for one of the macros? My
code is:
enough for that
driver, why wouldn't it be good enough for lagg?
Unless people think I'm crazy, I'll compile that in at line 356 in
if_lagg.c and give it a test run tomorrow.
This may need to go into vlan and carp as well, I'm not sure yet.
On Tue, Mar 25, 2014 at 8:16 PM, Rick
for amd64 systems with lots of memory would be ok?
On Wed, Mar 26, 2014 at 11:44 PM, Marcelo Araujo
araujobsdp...@gmail.com wrote:
Hello All,
2014-03-27 8:27 GMT+08:00 Rick Macklem rmack...@uoguelph.ca:
Well, bumping it from 32->35 is all it would take for NFS (can't
comment
Marcelo Araujo wrote:
Hello All,
2014-03-27 8:27 GMT+08:00 Rick Macklem rmack...@uoguelph.ca:
Well, bumping it from 32->35 is all it would take for NFS (can't
comment
w.r.t. iSCSI). ixgbe uses 100 for the 82598 chip and 32 for the
82599
(just so others aren't confused by the above
Christopher Forgeron wrote:
On Wed, Mar 26, 2014 at 9:35 PM, Rick Macklem rmack...@uoguelph.ca
wrote:
I've suggested in the other thread what you suggested in a recent
post...ie. to change the default, at least until the propagation
of driver set values is resolved
Yonghyeon Pyun wrote:
On Wed, Mar 26, 2014 at 08:27:48PM -0400, Rick Macklem wrote:
pyu...@gmail.com wrote:
On Tue, Mar 25, 2014 at 07:10:35PM -0400, Rick Macklem wrote:
Hi,
First off, I hope you don't mind that I cross-posted this,
but I wanted to make sure both the NFS
of weeks.
I will be able to commit the one line change that reduces the default
setting for if_hw_tsomax in a couple of weeks, so it should be in stable
in about 1 month.
rick
Regards
Simon
On 2014-03-28 6:44, Rick Macklem wrote:
Christopher Forgeron wrote:
On Wed, Mar 26, 2014 at 9
Frank Volf wrote:
Hello,
I'm experiencing watchdog timeouts with my Realtek interface card.
I'm using a fairly new system (Shuttle DS47), running FreeBSD
10-STABLE.
For this shuttle a patch has been recently committed to SVN to make
this
card work at all (revision 262391
John Nielsen wrote:
On Apr 17, 2014, at 2:38 PM, Andrea Venturoli m...@netfence.it wrote:
Three days ago I upgraded an amd64 8.3 box to the latest 8.4.
Since then the outside network is misbehaving: large mails are not
sent (although small ones are), svn operations will work for a
Marek Salwerowicz wrote:
Hi list,
I have two FreeBSD boxes (both based on SuperMicro X9DRD-7LN4F-JBOD
motherboard, with 32GB RAM, 1 CPU :Intel(R) Xeon(R) CPU E5-2640 v2)
storage1% uname -a
FreeBSD storage1 9.1-RELEASE-p10 FreeBSD 9.1-RELEASE-p10 #0: Sun Jan
12
20:11:23 UTC 2014
Marek Salwerowicz wrote:
On 2014-04-25 13:48, Rick Macklem wrote:
Well, you don't mention what command(s) you are using to transfer
the
data, but I would guess you have one serial data transfer for each
command.
(Put another way, if you are only running one command to transfer
Steven Hartland wrote:
- Original Message -
From: Marek Salwerowicz marek_...@wp.pl
To: Steven Hartland kill...@multiplay.co.uk; Gerrit Kühn
gerrit.ku...@aei.mpg.de
Cc: freebsd-net@freebsd.org
Sent: Friday, April 25, 2014 2:06 PM
Subject: Re: NFS over LAGG / lacp poor performance
John Howie wrote:
Hi all,
I apologize for the cross posting of this email, but I believe it
will be
of interest to people across all three groups. Please feel free to
forward
to additional groups if you feel they would benefit.
I have seen a few posts on and off over the years about
code just grabs the first server to reply at each stage (going
through each NIC in turn) and has affinity to it for the remainder
of that stage but not through the entire boot process.
Regards,
John
Sent from my iPhone
On Jun 1, 2014, at 19:01, Rick Macklem rmack...@uoguelph.ca
Beeblebrox wrote:
I'm using grub2 as the pxe bootloader rather than BTX's pxeboot.
I get Grub to load kernel and all necessary modules and boot. It goes
as far
as mountroot and stops. ? fails to show any pxe devices. If I
recall
correctly, when booting from BTX and upon hitting a
Beeblebrox wrote:
when trying to use Grub the client now reboots at the end of
kernel
loading
Sooo, I got curious and compiled a full debug kernel that included
{options
BOOTP, options BOOTP_NFSROOT} knobs. From the grub menu, without
changing
any grub.cfg lines (as previously posted)
Johan Kooijman wrote:
Hey all,
A while ago I started the thread 9.2 ixgbe tx queue hang on
freebsd-net.
A lot more people had the same as I had. Could anybody tell me if
this
issue is resolved in 9.3 RC? I haven't been able to find anything
about it.
Well, it depends on your definition
Beeblebrox wrote:
Hi Rick,
Btw, there is BOOTP_DEBUG stuff in bootp_subr.c. If your debug
kernel
didn't include that option, trying it might tell you where it
breaks?
I got around to trying the BOOTP_DEBUG knob, it breaks the kernel
build
right from start. bootp_subr.c does have
Russell L. Carter wrote:
Greetings!
It's been 14 years. OMG do I love poudriere and zfs.
But apropos of this post from last January:
http://lists.freebsd.org/pipermail/freebsd-net/2014-January/037547.html
I am going to capitalize READ to emphasize that all I am looking at
here is
Russell L. Carter wrote:
On 07/02/14 17:17, Michael Sierchio wrote:
sysctl net.inet.tcp.tso=0
Woot! 2.7MB/s -> 76MB/s
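For anyone landing on this thread via search, the fix reported above can be sketched as follows (em0 is just an example interface name; check your own `ifconfig` output):

```
# Disable TCP segmentation offload globally, at runtime:
sysctl net.inet.tcp.tso=0

# Make it persistent by adding this line to /etc/sysctl.conf:
net.inet.tcp.tso=0

# Or disable TSO on a single interface instead:
ifconfig em0 -tso
```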
Is this a FAQ somewhere? I spent a long time looking
today and didn't find it (I do recall seeing a mention
of TSO but not seeing how to do it or that it fixed
the
Russell L. Carter wrote:
On 07/02/14 19:09, Rick Macklem wrote:
Could you please post the dmesg stuff for the network interface,
so I can tell what driver is being used? I'll take a look at it,
in case it needs to be changed to use m_defrag().
em0: Intel(R) PRO/1000 Network
Russell L. Carter wrote:
On 07/03/14 17:51, Rick Macklem wrote:
Well, I took a quick look at the driver and it does use m_defrag(),
but
I think that the retry: label it does a goto after doing so might
be in
the wrong place.
The attached untested patch might fix
Hans Petter Selasky wrote:
Hi,
I'm asking for some input on the attached m_dup() patch, so that
existing functionality or dependencies are not broken. The background
for the change is to allow m_dup() to defrag long mbuf chains that
doesn't fit into a specific hardware's scatter gather
Hans Petter Selasky wrote:
Hi,
Would it be better if my patch used the PAGE_SIZE clusters instead
of
the 16K ones? Then it should not be affected by memory
defragmentation.
Thanks for shedding some light into this area?
Well, I ran into the threads stuck on btalloc when I used
John-Mark Gurney wrote:
Hans Petter Selasky wrote this message on Mon, Jul 07, 2014 at 10:12
+0200:
I'm asking for some input on the attached m_dup() patch, so that
existing functionality or dependencies are not broken. The
background
for the change is to allow m_dup() to defrag long
John Baldwin wrote:
On Thursday, July 03, 2014 8:51:01 pm Rick Macklem wrote:
Russell L. Carter wrote:
On 07/02/14 19:09, Rick Macklem wrote:
Could you please post the dmesg stuff for the network
interface,
so I can tell what driver is being used? I'll take a look
Yonghyeon Pyun wrote:
On Fri, Jul 11, 2014 at 09:54:23AM -0400, John Baldwin wrote:
On Thursday, July 10, 2014 6:31:43 pm Rick Macklem wrote:
John Baldwin wrote:
On Thursday, July 03, 2014 8:51:01 pm Rick Macklem wrote:
Russell L. Carter wrote:
On 07/02/14 19:09
Garrett Wollman wrote:
In article 201407151034.54681@freebsd.org, j...@freebsd.org
writes:
Hmm, I am surprised by the m_pullup() behavior that it doesn't just
notice that the first mbuf with a cluster has the desired data
already
and returns without doing anything.
The specification
Russell L. Carter wrote:
Hi,
Still a newbie here after more than a decade off. This is most
likely a stupid question but I have invested an inordinate amount of
effort to figure it out with no success. That is, I have not
resorted
to digging down into the source code to understand the
Russell L. Carter wrote:
On 07/28/14 05:55, Rick Macklem wrote:
Assuming /export is one file system on the server, put all
the exports in a single entry, something like:
V4: /export -sec=sys -network 10.0.10 -mask 255.255.255.0
/export/usr/src /export/usr/obj /export/usr/ports
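Rick's suggestion above, written out as a sketch of /etc/exports (paths and network are taken from the quoted entry; exports(5) is the authority on the exact syntax):

```
# NFSv4 root and its security/network restrictions (single entry):
V4: /export -sec=sys -network 10.0.10 -mask 255.255.255.0

# The file system paths still need a regular export line:
/export /export/usr/src /export/usr/obj /export/usr/ports \
    -sec=sys -network 10.0.10 -mask 255.255.255.0
```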
John-Mark Gurney wrote:
Rick Macklem wrote this message on Mon, Jul 28, 2014 at 18:47 -0400:
Russell L. Carter wrote:
On 07/28/14 05:55, Rick Macklem wrote:
Assuming /export is one file system on the server, put all
the exports in a single entry, something like:
V4: /export
Russell L. Carter:
On 07/29/14 11:21, John-Mark Gurney wrote:
Rick Macklem wrote this message on Mon, Jul 28, 2014 at 18:47
-0400:
Russell L. Carter wrote:
On 07/28/14 05:55, Rick Macklem wrote:
Assuming /export is one file system on the server, put all
the exports in a single
Russell L. Carter wrote:
On 07/29/14 13:48, Rick Macklem wrote:
Russell L. Carter:
The directories within a file system exports are only enforced by
the Mount protocol that NFSv3 uses to talk to mountd. (NFSv4 does
not
use the Mount protocol.) These are considered
Hans Petter Selasky wrote:
On 09/05/14 23:19, Eric Joyner wrote:
There are some concerns if we use this with devices that ixl
supports:
- The maximum fragment size is 16KB-1, which isn't a power of 2.
Hi Eric,
Multiplying by powers of two is faster than by non-powers of two. So
Hans Petter Selasky wrote:
Hi,
I've tested the attached patch with success and would like to have
some
feedback from other FreeBSD network developers. The problem is that
the
current TSO limitation only limits the number of bytes that can be
transferred in a TSO packet and not the number
Hans Petter Selasky wrote:
On 09/06/14 00:09, Rick Macklem wrote:
Hans Petter Selasky wrote:
On 09/05/14 23:19, Eric Joyner wrote:
There are some concerns if we use this with devices that ixl
supports:
- The maximum fragment size is 16KB-1, which isn't a power of 2.
Hi Eric
Hans Petter Selasky wrote:
On 09/06/14 00:09, Rick Macklem wrote:
Hans Petter Selasky wrote:
On 09/05/14 23:19, Eric Joyner wrote:
There are some concerns if we use this with devices that ixl
supports:
- The maximum fragment size is 16KB-1, which isn't a power of 2.
Hi Eric
perryh wrote:
Jason Hellenthal jh...@dataix.net wrote:
See siftr(4). This module writes to a file.
Is siftr(4) new since 8.1?
$ man siftr
No manual entry for siftr
$ cd /usr/ports
$ ls -d */*siftr*
ls: */*siftr*: No such file or directory
You can look at:
Eugene M. Zheganin wrote:
Hi.
I'm suffering from low network performance on one of my FreeBSDs.
I have an i386 8.2-RELEASE machine with an fxp(4) adapter. It's
connected through a bunch of Catalyst 2950s to another 8.2. While other
machines in this server room using the same sequence of
Giulio Ferro wrote:
I'm trying to setup a kerberized NFS system made of a server and a
client (both freebsd 9 amd64 stable)
I've tried to follow this howto:
http://code.google.com/p/macnfsv4/wiki/FreeBSD8KerberizedNFSSetup
But couldn't get much out of it.
First question : is this howto
Yuri Pankov wrote:
On Fri, Jan 27, 2012 at 06:58:47PM +0100, Giulio Ferro wrote:
I'm trying to setup a kerberized NFS system made of a server and a
client (both freebsd 9 amd64 stable)
I've tried to follow this howto:
http://code.google.com/p/macnfsv4/wiki/FreeBSD8KerberizedNFSSetup
Giulio Ferro wrote:
I forgot to mention that I compiled both servers with
option KGSSAPI and device crypto, and I enabled gssd
on both.
Is there anyone who was able to configure this setup?
I had a server at the nfsv4 testing event last June and it
worked ok. I haven't tried one since
Giulio Ferro wrote:
Thanks everybody again for your help with setting up a working
kerberized nfsv4 system.
I was able to user-mount a nfsv4 share with krb5 security, and I was
trying to do the same as root.
Unfortunately the patch I found here:
Gergely CZUCZY wrote:
Hello,
I'd like to ask a few question in order to get some hardware to work
we've got recently.
The hardwares are the following:
- 2x dualport Mellanox ConnectX-3 VPI cards, with 56Gbps ports
- 4 computing modules with a singleport Mellanox MT27500-family
Christopher D. Harrison wrote:
The following reply was made to PR kern/173479; it has been noted by
GNATS.
From: Christopher D. Harrison harri...@biostat.wisc.edu
To: bug-follo...@freebsd.org, j...@cse.yorku.ca
Cc:
Subject: Re: kern/173479: [nfs] chown and chgrp operations fail
between
old and old fashioned, so that's how I remember stuff;-)
On 11/26/12 21:19, Rick Macklem wrote:
Christopher D. Harrison wrote:
The following reply was made to PR kern/173479; it has been noted
by
GNATS.
From: Christopher D. Harrisonharri...@biostat.wisc.edu
To: bug-follo
Ronald F. Guilmette wrote:
In message
689563329.3076797.1361028594307.javamail.r...@erie.cs.uoguelph.ca,
Rick Macklem rmack...@uoguelph.ca wrote:
Ronald F. Guilmette wrote:
nfs_server_flags="-h 192.168.1.2"
Add -t to these flags. It appears that the default is UDP only.
YE! Thank
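The advice above amounts to an rc.conf entry along these lines (the -u flag enabling UDP as well is my addition here; see nfsd(8) for the full flag list):

```
# /etc/rc.conf (sketch): serve NFS over TCP (-t) and UDP (-u),
# bound to the given address (-h):
nfs_server_enable="YES"
nfs_server_flags="-t -u -h 192.168.1.2"
```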
Garrett Wollman wrote:
On Fri, 08 Mar 2013 08:54:14 +0100, Andre Oppermann
an...@freebsd.org said:
[stuff I wrote deleted]
You have an amd64 kernel running HEAD or 9.x?
Yes, these are 9.1 with some patches to reduce mutex contention on the
NFS server's replay cache.
The cached
Garrett Wollman wrote:
On Fri, 8 Mar 2013 19:47:13 -0500 (EST), Rick Macklem
rmack...@uoguelph.ca said:
The cached replies are copies of the mbuf list done via m_copym().
As such, the clusters in these replies won't be free'd (ref cnt -> 0)
until the cache is trimmed (nfsrv_trimcache
Garrett Wollman wrote:
On Fri, 8 Mar 2013 19:47:13 -0500 (EST), Rick Macklem
rmack...@uoguelph.ca said:
If reducing the size to 4K doesn't fix the problem, you might want
to
consider shrinking the tunable vfs.nfsd.tcphighwater and suffering
the increased CPU overhead (and some
Garrett Wollman wrote:
In article 20795.29370.194678.963...@hergotha.csail.mit.edu, I
wrote:
On Sat, 9 Mar 2013 11:50:30 -0500 (EST), Rick Macklem
rmack...@uoguelph.ca said:
I've thought about this. My concern is that the separate thread
might
not keep up with the trimming demand
Garrett Wollman wrote:
On Sat, 9 Mar 2013 11:27:32 -0500 (EST), Rick Macklem
rmack...@uoguelph.ca said:
around the highwater mark basically indicates this is working. If it
wasn't
throwing away replies where the receipt has been ack'd at the TCP
level, the cache would grow very large
Garrett Wollman wrote:
On Sat, 9 Mar 2013 11:50:30 -0500 (EST), Rick Macklem
rmack...@uoguelph.ca said:
I suspect this indicates that it isn't mutex contention, since the
threads would block waiting for the mutex for that case, I think?
No, because our mutexes are adaptive, so each
Andre Oppermann wrote:
On 10.03.2013 03:22, Rick Macklem wrote:
Garrett Wollman wrote:
Also, it occurs to me that this strategy is subject to livelock. To
put backpressure on the clients, it is far better to get them to
stop
sending (by advertising a small receive window) than to accept
Andre Oppermann wrote:
On 10.03.2013 07:04, Garrett Wollman wrote:
On Fri, 8 Mar 2013 12:13:28 -0800, Jack Vogel jfvo...@gmail.com
said:
Yes, in the past the code was in this form, it should work fine
Garrett,
just make sure
the 4K pool is large enough.
[Andre Oppermann's
Andre Oppermann wrote:
On 09.03.2013 01:47, Rick Macklem wrote:
Garrett Wollman wrote:
On Fri, 08 Mar 2013 08:54:14 +0100, Andre Oppermann
an...@freebsd.org said:
[stuff I wrote deleted]
You have an amd64 kernel running HEAD or 9.x?
Yes, these are 9.1 with some patches to reduce
Andre Oppermann wrote:
On 11.03.2013 17:05, Garrett Wollman wrote:
In article 513db550.5010...@freebsd.org, an...@freebsd.org writes:
Garrett's problem is receive side specific and NFS can't do much
about it.
Unless, of course, NFS is holding on to received mbufs for a longer
time.
Garrett Wollman wrote:
In article 513db550.5010...@freebsd.org, an...@freebsd.org writes:
Garrett's problem is receive side specific and NFS can't do much
about it.
Unless, of course, NFS is holding on to received mbufs for a longer
time.
The NFS server only holds onto receive mbufs until
Garrett Wollman wrote:
On Mon, 11 Mar 2013 21:25:45 -0400 (EDT), Rick Macklem
rmack...@uoguelph.ca said:
To be honest, I'd consider seeing a lot of non-empty receive queues
for TCP connections to the NFS server to be an indication that it is
near/at its load limit. (Sure, if you do
Garrett Wollman wrote:
On Tue, 12 Mar 2013 23:48:00 -0400 (EDT), Rick Macklem
rmack...@uoguelph.ca said:
Basically, this patch:
- allows setting of the tcp timeout via vfs.nfsd.tcpcachetimeo
(I'd suggest you go down to a few minutes instead of 12hrs)
- allows TCP caching
Garrett Wollman wrote:
On Tue, 12 Mar 2013 23:48:00 -0400 (EDT), Rick Macklem
rmack...@uoguelph.ca said:
I've attached a patch that has assorted changes.
So I've done some preliminary testing on a slightly modified form of
this patch, and it appears to have no major issues. However, I'm
I wrote:
Garrett Wollman wrote:
On Tue, 12 Mar 2013 23:48:00 -0400 (EDT), Rick Macklem
rmack...@uoguelph.ca said:
I've attached a patch that has assorted changes.
So I've done some preliminary testing on a slightly modified form of
this patch, and it appears to have no major issues
Ronald has reported having a problem with the FreeBSD NFS client using
8.2-prerelease. I've redirected it here, since it looks like there is
a TCP/IP issue that is causing it.
These are the links to the dumps:
http://klop.ws/~ronald/nfs-problem/procstat.nolockd
Hi,
I think you need the following patch (which will be in 8.2):
http://people.freebsd.org/~rmacklem/krpc.patch
If you try the patch, please let us know if it resolves your
problem.
rick
___
freebsd-net@freebsd.org mailing list
vfs.nfsrv.nfs_privport controls whether or not NFS enforces the
traditional RPC semantics that require that requests come from
privileged ports. By default this check is disabled. Hardening
guides typically suggest this be enabled, usually via the rc.conf knob
nfs_reserved_port_only=YES.
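As a concrete sketch (knob names taken from the paragraph above):

```
# Enforce the privileged-port check at runtime:
sysctl vfs.nfsrv.nfs_privport=1

# Or persistently, via the rc.conf knob the hardening guides mention:
nfs_reserved_port_only="YES"
```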
I just setup an nfs mount between two servers ...
ServerA, nfsd on 192.168.1.8
ServerB, nfs client on 192.168.1.7
I have a jail, ServerC, running on 192.168.1.7 ... most operations
appear
to work, but it looks like 'special files' of a sort aren't working,
for
when I try and startup
I just setup an nfs mount between two servers ...
ServerA, nfsd on 192.168.1.8
ServerB, nfs client on 192.168.1.7
I have a jail, ServerC, running on 192.168.1.7 ... most operations
appear
to work, but it looks like 'special files' of a sort aren't working,
for
when I try and