On Thu, 2015-03-26 at 14:01 +0900, Toshiaki Makita wrote:
To allow drivers to handle the features check for multiple tags,
move the check to ndo_features_check().
As no drivers currently handle multiple tagged TSO, introduce
dflt_features_check() and call it if the driver does not have
On Thu, 2015-03-26 at 14:01 +0900, Toshiaki Makita wrote:
Separate the two checks for single vlan and multiple vlans in
netif_skb_features(). This allows us to move the check for multiple
vlans to another function later.
Signed-off-by: Toshiaki Makita makita.toshi...@lab.ntt.co.jp
---
On Wed, 2015-02-25 at 11:05 +0100, Sabrina Dubroca wrote:
There is a race condition between e1000_change_mtu's cleanups and
netpoll, when we change the MTU across jumbo size:
...
Fixes: edbbb3ca1077 ("e1000: implement jumbo receive with partial descriptors")
Signed-off-by: Sabrina Dubroca
On Thu, 2015-01-15 at 11:11 +0100, Thomas Jarosch wrote:
On Wednesday, 14. January 2015 09:20:52 Eric Dumazet wrote:
I would try to use lower data per txd. I am not sure 24KB is really
supported.
( check commit d821a4c4d11ad160925dab2bb009b8444beff484 for details)
diff --git
On Thu, 2015-01-15 at 16:48 +0100, Thomas Jarosch wrote:
On Thursday, 15. January 2015 07:25:32 Eric Dumazet wrote:
On Thu, 2015-01-15 at 15:58 +0100, Thomas Jarosch wrote:
A colleague mentioned to me he saw the Hardware Unit Hang message
every
few days even running on kernel 3.4
On Thu, 2015-01-15 at 15:58 +0100, Thomas Jarosch wrote:
A colleague mentioned to me he saw the Hardware Unit Hang message every
few days even running on kernel 3.4 (without your patch). Basically I'm
testing now if that's still the case with 3.19-rc4+ or not.
I'm all for fixing the root
-unit-hang
I have a test setup that can trigger the problem within seconds
and bisected it down to this commit (hi Eric!):
---
commit 69b08f62e17439ee3d436faf0b9a7ca6fffb78db
Author: Eric Dumazet eduma...@google.com
Date: Wed Sep
From: Eric Dumazet eduma...@google.com
Do not reuse skb if it was pfmemalloc tainted, otherwise
future frame might be dropped anyway.
Signed-off-by: Eric Dumazet eduma...@google.com
---
net/core/dev.c | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/net/core/dev.c b/net/core/dev.c
On Wed, 2014-10-22 at 17:50 +0400, Roman Gushchin wrote:
Incoming packet is dropped silently by sk_filter(), if the skb was
allocated from pfmemalloc reserves and the corresponding socket is
not marked with the SOCK_MEMALLOC flag.
Igb driver allocates pages for DMA with __skb_alloc_page(),
On Tue, 2014-05-13 at 11:33 +0800, xuyongjia...@gmail.com wrote:
From: Yongjian Xu xuyongjia...@gmail.com
skb->len is unsigned int, skb->len <= 0 should be (int)(skb->len) <= 0.
Why?
Did you hit a case where skb->len would be 0 or 'negative'?
This test makes no sense, just remove it.
On Thu, 2014-04-03 at 21:36 -0700, Ben Greear wrote:
I don't know any off-the-shelf software that supports setting TCP MSS,
but maybe iperf or similar can either do it now or can be modified
to make it support that feature.
If you don't have anything that reproduces this easily, let me know
On Wed, 2013-08-21 at 13:39 +0300, Eliezer Tamir wrote:
Instead of remembering the napi_id for all the sockets in an epoll,
we only track the first socket we see with any unique napi_id.
The rationale for this is that while there may be many thousands of
sockets tracked by a single epoll, we
On Wed, 2013-07-10 at 11:53 +0100, Sam Crawford wrote:
Thanks Eric! I've adapted this to the following:
ETH=eth1
EST="est 1sec 4sec"
BUCKETS=64
RATE=100Mbit
tc qd del dev $ETH root 2>/dev/null
tc qdisc add dev $ETH root handle 8000: $EST htb r2q 1000 default 8000
for i in $(
On Tue, 2013-07-09 at 14:57 +0100, Sam Crawford wrote:
Hi all,
This issue persists unfortunately. Attached is a log from an instrumented
TCP server (the sender), logging CWND values and the retransmits. This has
been run on two identical servers on the same switch - one at 100Mbit and
the
On Tue, 2013-07-09 at 15:53 +0100, Sam Crawford wrote:
I've tried dropping the qlen down (even to zero), but to no effect. Is
this still expected? I've got total control over both client and
server components, so can change pretty much anything to improve
matters.
As the bottleneck is not
On Tue, 2013-07-09 at 16:58 +0100, Sam Crawford wrote:
Thanks very much! One quick kernel upgrade later (to add fq_codel
support) and that has definitely helped. I'll run a larger set of
tests and report back.
One final question: I understand that this applies a 100Mbit aggregate
shaper
On Wed, 2013-06-19 at 13:04 +0300, Eliezer Tamir wrote:
One question: do we need in sock_poll() to test that sock->sk is not null?
(Thanks to Willem de Bruijn for pointing this out.)
Why could sock->sk be NULL in sock_poll()?
This is the thing I am not sure about [1]
Normally, sock_poll() should
On Tue, 2013-06-18 at 11:58 +0300, Eliezer Tamir wrote:
select/poll busy-poll support.
*/
-static inline u64 ll_end_time(struct sock *sk)
+static inline u64 ll_sk_end_time(struct sock *sk)
{
- u64 end_time = ACCESS_ONCE(sk->sk_ll_usec);
-
- /* we don't mind a ~2.5%
On Tue, 2013-06-18 at 11:58 +0300, Eliezer Tamir wrote:
select/poll busy-poll support.
Add a new poll flag POLL_LL. When this flag is set, sock poll will call
sk_poll_ll() if possible. sock_poll sets this flag in its return value
to indicate to select/poll when a socket that can busy poll is
On Tue, 2013-06-18 at 16:25 +0300, Eliezer Tamir wrote:
One other thing,
sock_poll() will only ll_poll if the flag was set _and_ the socket has a
non-zero value in sk->sk_ll_usec, so you still only poll on sockets
that were enabled for LLS, not on every socket.
But sockets are default enabled
On Thu, 2013-06-13 at 17:46 +0300, Eliezer Tamir wrote:
There is no reason for sysctl_net_ll_poll to be an unsigned long.
Change it into an unsigned int.
Fix the proc handler.
Signed-off-by: Eliezer Tamir eliezer.ta...@linux.intel.com
---
include/net/ll_poll.h |2 +-
On Fri, 2013-06-14 at 04:56 +0300, Eliezer Tamir wrote:
There is no reason for sysctl_net_ll_poll to be an unsigned long.
Change it into an unsigned int.
Fix the proc handler.
Signed-off-by: Eliezer Tamir eliezer.ta...@linux.intel.com
---
Acked-by: Eric Dumazet eduma...@google.com
On Fri, 2013-06-14 at 04:57 +0300, Eliezer Tamir wrote:
Use sched_clock() instead of get_cycles().
We can use sched_clock() because we don't care much about accuracy.
Remove the dependency on X86_TSC
Signed-off-by: Eliezer Tamir eliezer.ta...@linux.intel.com
---
-static inline bool
On Wed, 2013-06-12 at 14:20 +0300, Eliezer Tamir wrote:
adds a socket option for low latency polling.
This allows overriding the global sysctl value with a per-socket one.
Signed-off-by: Eliezer Tamir eliezer.ta...@linux.intel.com
---
It seems EXPORT_SYMBOL_GPL(sysctl_net_ll_poll) can now
On Wed, 2013-06-12 at 15:54 +0300, Eliezer Tamir wrote:
On 12/06/2013 15:44, Eric Dumazet wrote:
On Wed, 2013-06-12 at 14:20 +0300, Eliezer Tamir wrote:
adds a socket option for low latency polling.
This allows overriding the global sysctl value with a per-socket one.
Signed-off
On Tue, 2013-06-11 at 09:49 +0300, Eliezer Tamir wrote:
I would like to hear opinions on what needs to be added to make this
feature complete.
The list I have so far is:
1. add a socket option
Yes, please. I do not believe all sockets on the machine are candidates
for low latency. In fact
On Tue, 2013-06-11 at 17:24 +0300, Eliezer Tamir wrote:
adds a socket option for low latency polling.
This allows overriding the global sysctl value with a per-socket one.
Signed-off-by: Eliezer Tamir eliezer.ta...@linux.intel.com
---
-static inline cycles_t ll_end_time(void)
+static
On Tue, 2013-06-11 at 18:37 +0300, Eliezer Tamir wrote:
On 11/06/2013 17:45, Eric Dumazet wrote:
On Tue, 2013-06-11 at 17:24 +0300, Eliezer Tamir wrote:
adds a socket option for low latency polling.
This allows overriding the global sysctl value with a per-socket one.
Signed-off
On Mon, 2013-06-10 at 11:39 +0300, Eliezer Tamir wrote:
Adds a napi_id and a hashing mechanism to lookup a napi by id.
This will be used by subsequent patches to implement low latency
Ethernet device polling.
Based on a code sample by Eric Dumazet.
Signed-off-by: Eliezer Tamir eliezer.ta
to add busy-poll support to more protocols.
Signed-off-by: Alexander Duyck alexander.h.du...@intel.com
Signed-off-by: Jesse Brandeburg jesse.brandeb...@intel.com
Signed-off-by: Eliezer Tamir eliezer.ta...@linux.intel.com
---
Acked-by: Eric Dumazet eduma...@google.com
/ll_poll.h
Acked-by: Eric Dumazet eduma...@google.com
|4
net/ipv4/udp.c |6 +-
net/ipv6/udp.c |6 +-
3 files changed, 14 insertions(+), 2 deletions(-)
Acked-by: Eric Dumazet eduma...@google.com
netif_napi_add().
Signed-off-by: Alexander Duyck alexander.h.du...@intel.com
Signed-off-by: Jesse Brandeburg jesse.brandeb...@intel.com
Signed-off-by: Eliezer Tamir eliezer.ta...@linux.intel.com
---
Reviewed-by: Eric Dumazet eduma...@google.com
On Wed, 2013-06-05 at 13:34 +0300, Eliezer Tamir wrote:
Adds a napi_id and a hashing mechanism to lookup a napi by id.
This will be used by subsequent patches to implement low latency
Ethernet device polling.
Based on a code sample by Eric Dumazet.
Signed-off-by: Eliezer Tamir eliezer.ta
eliezer.ta...@linux.intel.com
---
Are you sure this version was tested by Willem ?
Acked-by: Eric Dumazet eduma...@google.com
...@linux.intel.com
---
Acked-by: Eric Dumazet eduma...@google.com
to add busy-poll support to more protocols.
Signed-off-by: Alexander Duyck alexander.h.du...@intel.com
Signed-off-by: Jesse Brandeburg jesse.brandeb...@intel.com
Tested-by: Willem de Bruijn will...@google.com
Signed-off-by: Eliezer Tamir eliezer.ta...@linux.intel.com
---
Acked-by: Eric Dumazet
On Wed, 2013-06-05 at 13:34 +0300, Eliezer Tamir wrote:
A very naive select/poll busy-poll support.
Add busy-polling to sock_poll().
When poll/select have nothing to report, call the low-level
sock_poll() again until we are out of time or we find something.
Right now we poll every socket
On Wed, 2013-06-05 at 16:41 +0300, Eliezer Tamir wrote:
On 05/06/2013 16:30, Eric Dumazet wrote:
On Wed, 2013-06-05 at 13:34 +0300, Eliezer Tamir wrote:
A very naive select/poll busy-poll support.
Add busy-polling to sock_poll().
When poll/select have nothing to report, call the low-level
On Wed, 2013-06-05 at 14:49 +0100, David Laight wrote:
I am a bit uneasy with this one, because an application polling on one
thousand file descriptors using select()/poll(), will call sk_poll_ll()
one thousand times.
Anything calling poll() on 1000 fds probably has performance
issues
On Wed, 2013-06-05 at 06:56 -0700, Eric Dumazet wrote:
This looks quite easy, by adding in include/uapi/asm-generic/poll.h
#define POLL_LL 0x8000
And do the sk_poll_ll() call only if flag is set.
I do not think we have to support select(), as its legacy interface, and
people wanting ll
On Wed, 2013-06-05 at 16:41 +0300, Eliezer Tamir wrote:
On 05/06/2013 16:30, Eric Dumazet wrote:
I am a bit uneasy with this one, because an application polling on one
thousand file descriptors using select()/poll(), will call sk_poll_ll()
one thousand times.
But we call sk_poll_ll
On Wed, 2013-06-05 at 13:34 +0300, Eliezer Tamir wrote:
This is probably too big to be inlined, and nonblock should be a bool
It would also make sense to give end_time as a parameter, so that the
polling() code could really give an end_time for the whole duration of
poll().
(You then should
On Wed, 2013-06-05 at 18:30 +0300, Eliezer Tamir wrote:
On 05/06/2013 18:21, Eric Dumazet wrote:
On Wed, 2013-06-05 at 13:34 +0300, Eliezer Tamir wrote:
This is probably too big to be inlined, and nonblock should be a bool
It would also make sense to give end_time as a parameter, so
On Mon, 2013-06-03 at 11:01 +0300, Eliezer Tamir wrote:
Adds a napi_id and a hashing mechanism to lookup a napi by id.
This will be used by subsequent patches to implement low latency
Ethernet device polling.
Based on a code sample by Eric Dumazet.
Signed-off-by: Eliezer Tamir eliezer.ta
...@linux.intel.com
---
Reviewed-by: Eric Dumazet eduma...@google.com
to add busy-poll support to more protocols.
Signed-off-by: Alexander Duyck alexander.h.du...@intel.com
Signed-off-by: Jesse Brandeburg jesse.brandeb...@intel.com
Tested-by: Willem de Bruijn will...@google.com
Signed-off-by: Eliezer Tamir eliezer.ta...@linux.intel.com
---
Reviewed-by: Eric
On Mon, 2013-06-03 at 11:02 +0300, Eliezer Tamir wrote:
A very naive select/poll busy-poll support.
Add busy-polling to sock_poll().
When poll/select have nothing to report, call the low-level
sock_poll() again until we are out of time or we find something.
Right now we poll every socket
On Wed, 2013-05-29 at 09:39 +0300, Eliezer Tamir wrote:
Adds a napi_id and a hashing mechanism to lookup a napi by id.
This will be used by subsequent patches to implement low latency
Ethernet device polling.
Based on a code sample by Eric Dumazet.
Signed-off-by: Eliezer Tamir eliezer.ta
|2 ++
net/ipv6/tcp_ipv6.c |2 ++
3 files changed, 9 insertions(+), 0 deletions(-)
Acked-by: Eric Dumazet eduma...@google.com
On Wed, 2013-05-29 at 14:09 +0100, David Laight wrote:
Adds a napi_id and a hashing mechanism to lookup a napi by id.
Is this one of the places where the 'id' can be selected
so that the 'hash' lookup never collides?
Very few devices will ever call napi_hash_add()
[ Real NIC RX queues,
On Wed, 2013-05-29 at 21:52 +0300, Or Gerlitz wrote:
Eliezer Tamir eliezer.ta...@linux.intel.com wrote:
Or Gerlitz wrote:
Unlike with TCP sockets, UDP sockets may receive packets from multiple
sources and hence the receiving context may be steered to be executed
on different cores
On Tue, 2013-05-28 at 11:03 +0300, Eliezer Tamir wrote:
With an atomic we don't need the RTNL in any of the napi_id functions.
One less thing to worry about when we try to remove the RTNL.
OK but we'll need something to protect the lists against concurrent
insert/deletes.
A spinlock or a
On Mon, 2013-05-27 at 10:44 +0300, Eliezer Tamir wrote:
diff --git a/include/net/sock.h b/include/net/sock.h
index 66772cf..c7c3ea6 100644
--- a/include/net/sock.h
+++ b/include/net/sock.h
@@ -281,6 +281,7 @@ struct cg_proto;
* @sk_error_report: callback to indicate errors (e.g.
On Mon, 2013-05-27 at 10:43 +0300, Eliezer Tamir wrote:
Hello Dave,
There are many small changes from the last time.
The two big changes are:
* Skb and sk now store a napi_id instead of a pointer.
* Very naive poll/select support. There is a dramatic improvement in both
latency and
On Mon, 2013-05-27 at 10:44 +0300, Eliezer Tamir wrote:
adds busy-poll support for TCP.
Really, this is a small changelog for such an addition :(
How poll()/epoll() is supported ?
Signed-off-by: Alexander Duyck alexander.h.du...@intel.com
Signed-off-by: Jesse Brandeburg
On Tue, 2013-05-21 at 10:28 +0300, Eliezer Tamir wrote:
On 20/05/2013 18:29, Eric Dumazet wrote:
On Mon, 2013-05-20 at 13:16 +0300, Eliezer Tamir wrote:
---
+static inline void skb_mark_ll(struct sk_buff *skb, struct napi_struct
*napi)
+{
+ skb->dev_ref = napi;
+}
+
+static
On Tue, 2013-05-21 at 16:15 +0300, Alex Rosenbaum wrote:
On 5/21/2013 3:29 PM, Eliezer Tamir wrote:
What benchmarks are you using to test poll/select/epoll?
for epoll/select latency tests we are using sockperf as performance
latency tool: https://code.google.com/p/sockperf/
It is a
On Tue, 2013-05-21 at 17:26 +0300, Eliezer Tamir wrote:
+/* should be called when destroying a napi struct */
+static inline void inc_ll_gen_id(void)
+{
+ ll_global_gen_id++;
+}
+
+static inline void skb_mark_ll(struct sk_buff *skb, struct napi_struct *napi)
+{
+ skb->dev_ref =
On Tue, 2013-05-21 at 19:02 +0200, Pekka Riikonen wrote:
Maybe even that's not needed. Couldn't skb->queue_mapping give the
correct NAPI instance in multiqueue nics? The NAPI instance could be made
easily available from skb->dev. In any case an index is much better than
a new pointer.
We
On Tue, 2013-05-21 at 10:48 -0700, Eric Dumazet wrote:
We do not keep skb->dev information once a packet leaves the rcu
protected region.
Once packet is queued to tcp input queues, skb->dev is NULL.
This is done in tcp_v4_rcv() tcp_v6_rcv
On Tue, 2013-05-21 at 22:25 +0300, Eliezer Tamir wrote:
On 21/05/2013 20:51, Eric Dumazet wrote:
On Tue, 2013-05-21 at 10:48 -0700, Eric Dumazet wrote:
We do not keep skb->dev information once a packet leaves the rcu
protected region.
Once packet is queued to tcp input queues, skb->dev
On Mon, 2013-05-20 at 13:16 +0300, Eliezer Tamir wrote:
+config INET_LL_TCP_POLL
+ bool "Low Latency TCP Receive Poll"
+ depends on INET_LL_RX_POLL
+ default n
+ ---help---
+ TCP support for Low Latency TCP Queue Poll.
+ (For network cards that support this
On Mon, 2013-05-20 at 13:16 +0300, Eliezer Tamir wrote:
Adds a new ndo_ll_poll method and the code that supports and uses it.
This method can be used by low latency applications to busy poll ethernet
device queues directly from the socket code. The ip_low_latency_poll sysctl
entry controls how
On Thu, 2013-04-18 at 02:42 +, Xavier Trilla wrote:
Hi Eric,
We have been doing some more research, and finally we managed to get a
fast kernel with a new minimal configuration using the last release of
the 2.6.32 (2.6.32.60). We compiled what we think is a kernel with
options that
On Tue, 2013-03-05 at 17:28 +, Ben Hutchings wrote:
On Tue, 2013-03-05 at 17:26 +, Ben Hutchings wrote:
On Wed, 2013-02-27 at 09:56 -0800, Eliezer Tamir wrote:
Add the ixgbe driver code implementing ndo_ll_poll.
It should be easy for other drivers to do something similar
in
On Mon, 2013-03-04 at 10:43 +0200, Eliezer Tamir wrote:
One could for example increment the generation id every time the RTNL is
taken. or is this too much?
RTNL is taken for a lot of operations, it would be better to have a
finer grained increment.
On Mon, 2013-03-04 at 17:28 +0200, Eliezer Tamir wrote:
On 04/03/2013 16:52, Eric Dumazet wrote:
On Mon, 2013-03-04 at 10:43 +0200, Eliezer Tamir wrote:
One could for example increment the generation id every time the RTNL is
taken. or is this too much?
RTNL is taken for a lot
On Wed, 2013-02-27 at 09:55 -0800, Eliezer Tamir wrote:
index 821c7f4..d1d1016 100644
--- a/include/linux/skbuff.h
+++ b/include/linux/skbuff.h
@@ -408,6 +408,10 @@ struct sk_buff {
struct sock *sk;
struct net_device *dev;
+#ifdef CONFIG_INET_LL_RX_POLL
+
On Sun, 2013-03-03 at 20:21 +0100, Andi Kleen wrote:
Alternative to 2) would be to use a generation id, incremented every
time a napi used in spin polling enabled driver is dismantled (and freed
after RCU grace period)
And store in sockets not only the pointer to napi_struct, but the
On Wed, 2013-02-27 at 09:56 -0800, Eliezer Tamir wrote:
@@ -1458,7 +1459,9 @@ static void ixgbe_rx_skb(struct ixgbe_q_vector
*q_vector,
{
struct ixgbe_adapter *adapter = q_vector->adapter;
- if (!(adapter->flags & IXGBE_FLAG_IN_NETPOLL))
+ if (ixgbe_qv_ll_polling(q_vector))
On Wed, 2013-02-20 at 10:16 -0800, Alexander Duyck wrote:
The problem is the 256 byte alignment for L1_CACHE_BYTES is increasing
the size of the data and shared info significantly pushing us past the
2K limit.
I'll look into this since it likely affects ixgbe as well.
That's what I said.
On Wed, 2013-02-20 at 13:23 -0800, Alexander Duyck wrote:
NET_SKB_PAD is defined for the s390. It is already 32. If you look it
up we only have 2 definitions for NET_SKB_PAD, one specific to the s390
architecture and the other one in skbuff.h.
Andrew's traces disagree, as they were:
s390
On Wed, 2013-02-20 at 14:47 -0800, Alexander Duyck wrote:
Huh? I'm not seeing what you are saying. The NET_SKB_PAD is the value
that is in the last set of parentheses since it was:
(NET_SKB_PAD + NET_IP_ALIGN + IGB_TS_HDR_LEN + ETH_FRAME_LEN + ETH_FCS_LEN)
that is the bit that became:
On Tue, Feb 19, 2013 at 2:30 PM, Allan, Bruce W bruce.w.al...@intel.com wrote:
-Original Message-
From: Andrew Morton [mailto:a...@linux-foundation.org]
Sent: Tuesday, February 19, 2013 2:27 PM
To: Wu, Fengguang
Cc: Daniel Santos; Kirsher, Jeffrey T; Brandeburg, Jesse; Allan, Bruce W;
On Thu, 2013-02-07 at 01:02 +0200, Michael S. Tsirkin wrote:
qlcnic sets gso_size but not gso_type. This leads to crashes
in macvtap.
Signed-off-by: Michael S. Tsirkin m...@redhat.com
---
This one I only compiled - don't have qlogic hardware.
On Thu, 2013-02-07 at 01:02 +0200, Michael S. Tsirkin wrote:
ixgbe set gso_size but not gso_type. This leads to
crashes in macvtap.
Signed-off-by: Michael S. Tsirkin m...@redhat.com
---
I tested that this fixes the crash for me. I am told on ixgbe LRO only
triggers with TCP so checking
On Thu, 2012-10-18 at 20:55 -0700, Joe Perches wrote:
ethernet, ipv4, and ipv6 address testing uses 3 different api naming styles.
ethernet uses: is_foo_ether_addr
ipv4 uses: ipv4_is_foo
ipv6 uses: ipv6_addr_foo
Standardize on the ipv6 style of prefix_addr_type to reduce
the
On Tue, 2012-10-09 at 10:36 -0700, Jesse Brandeburg wrote:
Jesse did not share any performance numbers with me, I am sure he can
give some background tomorrow when he is back online.
I am working on an alternative patch now and should have something to
share tomorrow.
On Thu, 2012-10-04 at 15:40 +0200, Dick Snippe wrote:
On Tue, Sep 18, 2012 at 12:55:02PM +0200, Dick Snippe wrote:
FYI:
For our production platform I will try some experiments with decreased
txqueuelen, binding (web)server instances to specific cores ad boot
a server with kernel 3.5 +
On Mon, 2012-09-10 at 07:02 +0200, Jan Engelhardt wrote:
On Monday 2012-09-03 00:53, Eric Dumazet wrote:
[PATCH] xt_LOG: take care of timewait sockets
Sami Farin reported crashes in xt_LOG because it assumes skb->sk is a
full blown socket.
But with TCP early demux, we can have skb->sk
From: Eric Dumazet eduma...@google.com
On Mon, 2012-09-03 at 09:47 +0200, Florian Westphal wrote:
Eric Dumazet eric.duma...@gmail.com wrote:
Sami Farin reported crashes in xt_LOG because it assumes skb->sk is a
full blown socket.
But with TCP early demux, we can have skb->sk pointing
From: Eric Dumazet eduma...@google.com
On Sun, 2012-09-02 at 15:28 +0200, Florian Westphal wrote:
Sami Farin hvtaifwkbgefb...@gmail.com wrote:
I get this panic every 1-2 days.
Also with 7a611e69b26069a511d9d5251c6a28af6c521121 (commit before
3.6.0-rc4).
Could you please post iptables
On Mon, 2012-09-03 at 00:53 +0200, Eric Dumazet wrote:
First I thought changing demux to not do the lookup of TIMEWAIT slot in
__inet_lookup_established(), then it sounds not optimal to redo the full
lookup for ESTABLISHED sockets later in TCP stack.
So it seems we should fix xt_LOG instead
...@gmail.com
Signed-off-by: Eric Dumazet eduma...@google.com
---
net/netfilter/nfnetlink_log.c | 14 +++--
net/netfilter/xt_LOG.c | 33
2 files changed, 25 insertions(+), 22 deletions(-)
diff --git a/net/netfilter/nfnetlink_log.c b/net/netfilter
On Thu, 2012-08-23 at 12:49 +0200, Michel Benoit wrote:
Hi
Thanks for the quick responses.
I disabled BQL with:
# echo max > /sys/class/net/eth0/queues/tx-0/byte_queue_limits/limit_min
Is that the correct way to turn it off?
Yep, so that rules out this particular problem, at least.
On Thu, 2012-08-23 at 13:44 +0200, Michel Benoit wrote:
The simulator timing is hardware accurate so the problem is most
likely not related to delays.
I actually suspect that the simulator is running too fast and that
there is something that occurs faster in the simulated model than in
real
On Thu, 2012-08-23 at 13:58 +0200, Eric Dumazet wrote:
Maybe there is a bug in TSO handling in e1000e, timing issue of some
sort, or missing memory barrier.
Maybe the simulated hardware starts the transmit while not all frags
were properly posted.
Check e1000_tx_queue()
by the way what
On Tue, 2012-07-24 at 22:58 +0200, Eric Dumazet wrote:
Its a single line change in net/core/flow_dissector.c
Well, it's a bit more than one line ;)
diff --git a/include/net/flow_keys.h b/include/net/flow_keys.h
index 80461c1..b5bae21 100644
--- a/include/net/flow_keys.h
+++ b/include/net
On Mon, 2012-07-09 at 16:51 +0800, Joe Jin wrote:
Hi list,
I'm seeing a Unit Hang even with the latest e1000e driver 2.0.0 when doing
scp tests. This issue is easy to reproduce on a SUN FIRE X2270 M2; just copying
a big file (500M) from another server will hit it at once.
Would you please
On Tue, 2012-05-29 at 23:25 +0900, Hiroaki SHIMODA wrote:
If I understand the code and spec correctly, TX interrupts are
generated when TXDCTL.WTHRESH descriptors have been accumulated
and written back.
I tentatively changed the TXDCTL.WTHRESH to 1, then it seems
that latency spikes are
On Wed, 2012-05-30 at 13:04 +0200, Eric Dumazet wrote:
Please don't use lazy volatile.
Unless you really want Linus flames (and ours)
ACCESS_ONCE() is much cleaner.
http://yarchive.net/comp/linux/ACCESS_ONCE.html
On Wed, 2012-05-30 at 04:20 -0700, Joe Perches wrote:
On Wed, 2012-05-30 at 13:08 +0200, Eric Dumazet wrote:
Maybe we should change all POSDIFF(), not selected ones.
#define POSDIFF(A, B) ((int)((A) - (B)) > 0 ? (A) - (B) : 0)
maybe use an eval once statement expression macro
On Wed, 2012-05-30 at 07:09 -0700, Joe Perches wrote:
Whatever evals A and B once would be better.
Why do you believe they could be evaluated several times ?
#define POSDIFF(A, B) max_t(int, (A) - (B), 0)
(A) - (B) is done once, or it's a huge max_t() bug.
By the way, we handle 32bit values
On Wed, 2012-05-30 at 20:29 +0900, Hiroaki SHIMODA wrote:
On Wed, 30 May 2012 13:08:27 +0200
Eric Dumazet eric.duma...@gmail.com wrote:
On Wed, 2012-05-30 at 19:43 +0900, Hiroaki SHIMODA wrote:
While examining the ping problem, the below pattern is often observed
On Tue, 2012-05-29 at 07:54 -0700, Tom Herbert wrote:
Thanks Hiroaki for this description, it looks promising. Denys, can
you test with his patch.
Tom
Indeed this sounds good.
Hmm, I guess my e1000e has no FLAG2_DMA_BURST in adapter->flags2
On Mon, 2012-05-21 at 11:06 +0300, Denys Fedoryshchenko wrote:
Not sure it is a lot of time, after all it is 2 x core quad machine,
should be enough fast for pings.
It will cause stalls on small packets even more seems.
Tested latest git, net-next, still the same, stalls.
hardware latency
On Mon, 2012-05-21 at 10:30 +0200, Eric Dumazet wrote:
On Mon, 2012-05-21 at 11:06 +0300, Denys Fedoryshchenko wrote:
Not sure it is a lot of time, after all it is 2 x core quad machine,
should be enough fast for pings.
It will cause stalls on small packets even more seems.
Tested
On Fri, 2012-05-18 at 17:04 +0300, Denys Fedoryshchenko wrote:
It seems the logic in BQL has serious issues. The worst thing is, if
and it is empty. So as a result, instead of eliminating latency, it is
adding it.
There is maybe a misunderstanding here. BQL by itself only reduce amount
of
On Sun, 2012-05-20 at 22:18 +0300, Denys Fedoryshchenko wrote:
On 2012-05-20 22:07, Eric Dumazet wrote:
You could try latencytop, I am not sure if some obvious things will
popup.
For sure i did. Nothing unusual here, max 5ms latency
Cause