[snip]
So, hm, the thing that comes to mind is the flowid. What are the various
flowids for the flows? Are they all mapping to CPU 3 somehow?
-a
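For reference, the scenario Adrian is asking about can be sketched in a few lines. This is a simplification, not netisr code: the assumption here is that the CPU is chosen as flowid modulo the CPU count, which is a reduced model of FreeBSD's actual workstream selection.

```python
# Sketch: how flowids can all collapse onto one CPU.
# Assumption: CPU is picked as flowid % ncpus (a simplification
# of FreeBSD's netisr workstream selection).

def cpu_for_flowid(flowid, ncpus=4):
    """Pick a CPU for a flow by taking the flowid modulo the CPU count."""
    return flowid % ncpus

# If the NIC hands up flowids that are all congruent mod 4,
# every flow lands on the same CPU:
flowids = [3, 7, 11, 15]
cpus = {cpu_for_flowid(f) for f in flowids}
# every flow maps to CPU 3
```

If the hash the NIC computes happens to produce flowids with the same low bits for all active flows, a modulo-style mapping concentrates them on a single CPU, which would match the observed behavior.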
___
freebsd-net@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-net
To unsubscribe,
Hi.
Can someone explain to me where the 4 missing bytes go when capturing
traffic on a gif interface with tcpdump?
I expect the length of the first fragment (offset = 0) to equal the
MTU (1280 bytes), but it is clearly 1276 bytes.
Same thing happens to a gre tunnel.
# ifconfig gif0
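For what it's worth, the 4 bytes are consistent with IP fragmentation rules rather than with the tunnel encapsulation itself: fragment payloads must be a multiple of 8 bytes, so the usable payload per fragment is rounded down. A small arithmetic check, assuming an inner IPv4 packet with a 20-byte header and no options:

```python
# Why the first fragment is 1276 bytes instead of 1280 (the gif MTU):
# IP fragment offsets are in 8-byte units (RFC 791), so each
# non-final fragment's payload is rounded down to a multiple of 8.
# Assumes a 20-byte IPv4 header with no options.

MTU = 1280
IP_HDR = 20

payload = (MTU - IP_HDR) // 8 * 8   # 1260 rounded down to 1256
first_fragment_len = IP_HDR + payload
# 1256 + 20 = 1276, matching what tcpdump shows on gif0
```

The same rounding applies on a gre tunnel, which would explain why the same 4 bytes go missing there.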
Good day, gurus!
We have servers running FreeBSD. They do NAT, shaping and traffic
accounting for our (mainly) home customers.
NAT is done with pf nat, shaping with ipfw dummynet, and traffic
accounting with ng_netflow via ipfw ng_tee.
The problem is performance under (relatively) high traffic.
On
Hi,
since folks are playing with Midori's DCTCP patch, I wanted to make sure that
you were also aware of the patches that Aris did for PRR and NewCWV...
Lars
On 2014-2-4, at 10:38, Eggert, Lars l...@netapp.com wrote:
Hi,
below are two patches that implement RFC6937 (Proportional Rate
Hi,
I had a similar problem in the past and it turned out to be the amount of
rules in ipfw.
Using a reduced subset with tables actually reduced the load.
Sami
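The win Sami describes is algorithmic: a long linear rule list is scanned per packet, while a table is a single lookup. A toy comparison in pure Python, only to illustrate the complexity difference, not ipfw internals:

```python
# Toy model of why collapsing many per-address rules into one
# table helps: a linear ruleset is O(n) per packet, while a
# table lookup is O(1) on average. Not ipfw code, just the idea.

addrs = [f"10.0.{i // 256}.{i % 256}" for i in range(2000)]

# "Many rules": one rule per address, scanned linearly per packet.
def match_linear(src, rules):
    for rule_addr in rules:
        if src == rule_addr:
            return True
    return False

# "One rule with a table": a single set-membership test.
table = set(addrs)
def match_table(src, table):
    return src in table

# Both agree on the verdict; only the per-packet cost differs.
assert match_linear("10.0.3.9", addrs) == match_table("10.0.3.9", table)
```

With thousands of customer prefixes, replacing per-address rules with one `table(...)` rule shrinks the per-packet work dramatically, which fits the load reduction reported above.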
On Friday, 11 April 2014, Dennis Yusupoff d...@smartspb.net wrote:
Good day, gurus!
We have servers running FreeBSD. They do NAT,
On Fri, Apr 11, 2014 at 4:15 AM, Eggert, Lars l...@netapp.com wrote:
Hi,
since folks are playing with Midori's DCTCP patch, I wanted to make sure that
you were also aware of the patches that Aris did for PRR and NewCWV...
prr.patchnewcwv.patch
Lars,
There are no actual patches attached
On Fri, Apr 11, 2014 at 2:48 AM, Adrian Chadd adr...@freebsd.org wrote:
[snip]
So, hm, the thing that comes to mind is the flowid. What are the various
flowids for the flows? Are they all mapping to CPU 3 somehow?
The output of netstat -Q shows IP dispatch is set to default, which is
direct
disclaimer: I'm not looking at the code now... I want to go to bed :-)
When I wrote that code, the idea was that even a direct node execution
should become a queuing operation if there was already something else
on the queue, so in that model packets were not supposed to get
re-ordered.
Well, Ethernet drivers nowadays seem to be doing:
* always queue
* then pop the head item off the queue and transmit that.
-a
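The two transmit models being contrasted here can be sketched in a few lines. The names below are made up for illustration; this is the idea, not driver code, and locking is omitted:

```python
# Sketch of the two transmit models described above.
# Hypothetical names; locking and error handling omitted.

from collections import deque

class TxQueue:
    def __init__(self):
        self.q = deque()

    def send_direct_if_idle(self, pkt, transmit):
        """Julian's model: run the node directly only if nothing is
        already queued; otherwise append, so packets cannot be
        re-ordered past earlier ones."""
        if self.q:
            self.q.append(pkt)
        else:
            transmit(pkt)

    def send_queue_then_pop(self, pkt, transmit):
        """The driver model: always enqueue, then transmit whatever
        is at the head of the queue."""
        self.q.append(pkt)
        transmit(self.q.popleft())

sent = []
txq = TxQueue()
for p in ("a", "b", "c"):
    txq.send_queue_then_pop(p, sent.append)
# FIFO order is preserved: "a", "b", "c"
```

Both models preserve ordering when the queue drains; the difference is that the always-queue variant pays the enqueue/dequeue cost even in the idle case, while the direct variant skips the queue entirely when it is empty.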
On 11 April 2014 11:59, Julian Elischer jul...@freebsd.org wrote:
disclaimer: I'm not looking at the code now.. I want to go to bed: :-)
When I wrote that code, the
On Fri, Apr 11, 2014 at 4:16 PM, hiren panchasara
hiren.panchas...@gmail.com wrote:
On Fri, Apr 11, 2014 at 4:15 AM, Eggert, Lars l...@netapp.com wrote:
Hi,
since folks are playing with Midori's DCTCP patch, I wanted to make sure
that you were also aware of the patches that Aris did for PRR
On Fri, Apr 11, 2014 at 11:30 AM, Patrick Kelsey kel...@ieee.org wrote:
The output of netstat -Q shows IP dispatch is set to default, which is
direct (NETISR_DISPATCH_DIRECT). That means each IP packet will be
processed on the same CPU that the Ethernet processing for that packet was
On Fri, Apr 11, 2014 at 8:23 PM, hiren panchasara
hiren.panchas...@gmail.com wrote:
On Fri, Apr 11, 2014 at 11:30 AM, Patrick Kelsey kel...@ieee.org wrote:
The output of netstat -Q shows IP dispatch is set to default, which is
direct (NETISR_DISPATCH_DIRECT). That means each IP packet