Herbert Xu wrote:
On Fri, Jun 08, 2007 at 02:02:27PM +0300, Baruch Even wrote:
As far as IGMP and multicast handling goes, everything works; the packets are
even forwarded over the ppp links, but they arrive at the client with a
bad checksum. I don't have the trace in front of me but I believe
Herbert Xu wrote:
Baruch Even [EMAIL PROTECTED] wrote:
I have a machine on which I have an application that sends multicast
through eth interface with hardware tx checksum enabled. On the same
machine I have mrouted running that routes the multicast traffic to a
set of ppp interfaces
Baruch Even wrote:
Herbert Xu wrote:
On Fri, Jun 08, 2007 at 02:02:27PM +0300, Baruch Even wrote:
As far as IGMP and multicast handling goes, everything works; the packets
are even forwarded over the ppp links, but they arrive at the client
with a bad checksum. I don't have the trace in front of me
Hello,
I have a machine on which I have an application that sends multicast
through eth interface with hardware tx checksum enabled. On the same
machine I have mrouted running that routes the multicast traffic to a
set of ppp interfaces. The packets that are received by the client have
* Ilpo Järvinen [EMAIL PROTECTED] [070527 14:16]:
On Sun, 27 May 2007, David Miller wrote:
From: Ilpo Järvinen [EMAIL PROTECTED]
Date: Sun, 27 May 2007 10:58:27 +0300 (EEST)
While you're in the right context (reviewing patch 8), you could also
look if tcp_clean_rtx_queue does a
The bridge cleanup timer is fired 10 times a second for timers that are
at least 15 seconds ahead in time and that are not critical to be
cleaned asap.
This patch calculates the next time to run the timer as the minimum of
all timers or a minimum based on the current state.
Signed-Off-By: Baruch
Ilpo Järvinen wrote:
Signed-off-by: Ilpo Järvinen [EMAIL PROTECTED]
---
Documentation/networking/ip-sysctl.txt | 13 +
1 files changed, 13 insertions(+), 0 deletions(-)
diff --git a/Documentation/networking/ip-sysctl.txt
b/Documentation/networking/ip-sysctl.txt
index
* [EMAIL PROTECTED] [EMAIL PROTECTED] [070404 18:29]:
Hello,
We are currently using both 1 Gb and 10 Gb links, which interconnect
several servers that are very *local* to each other.
Typical RTT times range from 0.2 ms - 0.3 ms.
We are currently using TCP reno - is there a more suitable
* [EMAIL PROTECTED] [EMAIL PROTECTED] [070404 19:03]:
Thanks - so you are suggesting we enable 802.3 flow-control / pause-frames?
(it's currently disabled)
I do, but do test it before you bet on it. I've never tested such a
scenario but from my experience the lower the rtt the lesser are the
* Zaccomer Lajos [EMAIL PROTECTED] [070306 17:39]:
Hi,
I'm playing around with a simulation, in which many thousands of IP
addresses (on interface aliases) are used to send/receive TCP/UDP
packets. I noticed that the time of send/sendto increased linearly
with the number of file
skb TCP will
mark it as LOST too. The algorithm uses some ideas presented by
David Miller and Baruch Even.
Seqno lookups need to be fast, which is provided using the
RB-tree patch(+abstraction) from DaveM.
Signed-off-by: Ilpo Järvinen [EMAIL PROTECTED]
---
I'm sorry about poorly chunked
* David Miller [EMAIL PROTECTED] [070306 23:47]:
From: Baruch Even [EMAIL PROTECTED]
Date: Tue, 6 Mar 2007 21:42:59 +0200
* Ilpo Järvinen [EMAIL PROTECTED] [070306 14:52]:
+ newtp->highest_sack = treq->snt_isn + 1;
That's the only initialization that you have for highest_sack
* David Miller [EMAIL PROTECTED] [070303 08:22]:
BTW, I think I figured out a way to get rid of
lost_{skb,cnt}_hint. The fact of the matter in this case is that
the setting of the tag bits always propagates from front of the queue
onward. We don't get holes mid-way.
So what we can do is
* David Miller [EMAIL PROTECTED] [070228 21:49]:
commit 71b270d966cd42e29eabcd39434c4ad4d33aa2be
Author: David S. Miller [EMAIL PROTECTED]
Date: Tue Feb 27 19:28:07 2007 -0800
[TCP]: Kill fastpath_{skb,cnt}_hint.
Now that we have per-skb fack_counts and an interval
Correct dead/indirect links in net/ipv4/Kconfig
Signed-Off-By: Baruch Even [EMAIL PROTECTED]
Index: 2.6-gt/net/ipv4/Kconfig
===
--- 2.6-gt.orig/net/ipv4/Kconfig 2007-02-17 15:47:41.0 +0200
+++ 2.6-gt/net/ipv4/Kconfig
Fix bug #6216, update the link for CONFIG_IP_MCAST help message. The bug with
the proposed fix was submitted by [EMAIL PROTECTED]
Correct other dead/indirect links in the same file.
Signed-Off-By: Baruch Even [EMAIL PROTECTED]
Index: 2.6-gt/net/ipv4/Kconfig
Comtrol Hostess SV-11 driver uses features from INET but doesn't depend on it.
The simple solution is to make it depend on INET as happens for the sealevel
driver.
Fixes bug #7930.
Signed-Off-By: Baruch Even [EMAIL PROTECTED]
Index: 2.6-gt/drivers/net/wan/Kconfig
* David Miller [EMAIL PROTECTED] [070213 00:53]:
From: Baruch Even [EMAIL PROTECTED]
Date: Tue, 13 Feb 2007 00:12:41 +0200
The problem is that you actually put a mostly untested algorithm as the
default for everyone to use. The BIC example is important, it was the
default algorithm
* SANGTAE HA [EMAIL PROTECTED] [070213 18:50]:
Hi Baruch,
I would like to add some comments on your argument.
On 2/13/07, Baruch Even [EMAIL PROTECTED] wrote:
* David Miller [EMAIL PROTECTED] [070213 00:53]:
From: Baruch Even [EMAIL PROTECTED]
Date: Tue, 13 Feb 2007 00:12:41 +0200
* Injong Rhee [EMAIL PROTECTED] [070213 19:43]:
On Feb 13, 2007, at 4:56 AM, Baruch Even wrote:
According to claims by Doug Leith, the cubic algorithm that is in the
kernel is different from what was proposed and tested. That's an
important issue which is being deflected by personal attacks
* David Miller [EMAIL PROTECTED] [070213 21:56]:
From: Baruch Even [EMAIL PROTECTED]
Date: Tue, 13 Feb 2007 11:56:13 +0200
Do you still think that making Cubic the default is a good idea?
Can you propose a better alternative other than Reno?
The only other option would be HS-TCP
* Stephen Hemminger [EMAIL PROTECTED] [070212 18:04]:
The TCP Vegas implementation is buggy, and BIC is too aggressive
so they should not be in the default list. Westwood is okay, but
not well tested.
Since no one really agrees on the relative merits and problems of the
different algorithms and
* David Miller [EMAIL PROTECTED] [070212 22:21]:
From: Baruch Even [EMAIL PROTECTED]
Date: Mon, 12 Feb 2007 21:11:01 +0200
Since no one really agrees on the relative merits and problems of the
different algorithms and since the users themselves don't know, don't care
and have no clue
* Parag Warudkar [EMAIL PROTECTED] [070205 00:57]:
On 2/4/07, Parag Warudkar [EMAIL PROTECTED] wrote:
I am running 2.6.20 and have trouble with stalled connections. For
instance, if I try to download a debian ISO image using wget, the
connection runs fine for few seconds and then stalls for
* David Miller [EMAIL PROTECTED] [070131 22:52]:
From: Baruch Even [EMAIL PROTECTED]
Date: Mon, 29 Jan 2007 09:13:49 +0200
When we check for SACK fast path make sure that we also have the same number
of SACK blocks in the cache and in the new SACK data. This prevents us from
* David Miller [EMAIL PROTECTED] [070131 22:48]:
From: Baruch Even [EMAIL PROTECTED]
Date: Mon, 29 Jan 2007 09:13:39 +0200
Only advance the SACK fast-path pointer for the first block, the fast-path
assumes that only the first block advances next time so we should not
move the skb
* David Miller [EMAIL PROTECTED] [070129 02:54]:
I just cut it at:
kernel.org:/pub/scm/linux/kernel/git/davem/net-2.6.21.git
Feel free to send me feature patches for consideration.
I'll probably toss things like Baruch's latest SACK fixes in
there so they can cook for a while
These patches are intended to fix the issues I've raised in an earlier
email in addition to the sorting code.
I still was not able to runtime test these patches, they were only
compile tested.
Baruch
Only advance the SACK fast-path pointer for the first block, the fast-path
assumes that only the first block advances next time so we should not move the
skb for the next sack blocks.
Signed-Off-By: Baruch Even [EMAIL PROTECTED]
---
I'm not sure about the fack_count part, this patch changes
and there is little reason to couple the two checks.
Since the SACK receive cache doesn't need the data to be in host order we also
remove the ntohl in the checking loop.
Signed-Off-By: Baruch Even [EMAIL PROTECTED]
Index: 2.6-rc6/net/ipv4/tcp_input.c
When we check for SACK fast path make sure that we also have the same number of
SACK blocks in the cache and in the new SACK data. This prevents us from
mistakenly taking the cache data if the old data in the SACK cache is the same
as the data in the SACK block.
Signed-Off-By: Baruch Even [EMAIL
* David Miller [EMAIL PROTECTED] [070128 06:06]:
From: Baruch Even [EMAIL PROTECTED]
Date: Sat, 27 Jan 2007 18:49:49 +0200
Since the SACK receive cache doesn't need the data to be in host
order we also remove the ntohl in the checking loop.
...
- for (i = 0; i < num_sacks; i
the wrong location. The fix is to
use a temporary buffer as a normal sort does.
Signed-Off-By: Baruch Even [EMAIL PROTECTED]
diff -X 2.6-rc6/Documentation/dontdiff -ur 2.6-rc6/net/ipv4/tcp_input.c
2.6-mod/net/ipv4/tcp_input.c
--- 2.6-rc6/net/ipv4/tcp_input.c 2007-01-25 19:04:20.0 +0200
* Stephen Hemminger [EMAIL PROTECTED] [070125 20:47]:
On Thu, 25 Jan 2007 20:29:03 +0200
Baruch Even [EMAIL PROTECTED] wrote:
The sorting of SACK blocks actually munges them rather than sorting them,
causing the TCP stack to ignore some SACK information and breaking the
assumption of ordered
In addition to the patch I've provided there are two more issues that I
believe are bugs in the SACK processing code. Since I'm not certain but
I don't have the time to look into them I'd like to raise them for other
folks to look at.
First issue is the checking of the applicability of the fast
* David Miller [EMAIL PROTECTED] [070126 01:55]:
From: Baruch Even [EMAIL PROTECTED]
Date: Thu, 25 Jan 2007 20:29:03 +0200
The sorting of SACK blocks actually munges them rather than sorting them,
causing the TCP stack to ignore some SACK information and breaking the
assumption of ordered
Hello,
My network has several IPv6 addresses and they don't route between
themselves; due to the current source address selection, many times the
network is simply not operational, since Linux will choose an address
from a different network than the one targeted for the connection.
I have
#ZHOU BIN# wrote:
From: Bin Zhou [EMAIL PROTECTED]
+ else if (sysctl_tcp_abc) {
+ /* RFC3465: Appropriate Byte Counting
+ * increase once for each full cwnd acked.
+ * Veno has no idea about it so far, so we keep
+
Herbert Xu wrote:
Baruch Even [EMAIL PROTECTED] wrote:
+ case NETDEV_UNREGISTER:
case NETDEV_GOING_DOWN:
case NETDEV_DOWN:
/* Find every socket on this device and kill it. */
This brings up the question as to why we need to flush it on
NETDEV_GOING_DOWN.
Signed-off-by: Baruch Even [EMAIL PROTECTED]
--
drivers/net/pppoe.c |1 +
1 file changed, 1 insertion(+)
Index: pppcd/drivers/net/pppoe.c
===
--- pppcd.orig/drivers/net/pppoe.c
+++ pppcd/drivers/net/pppoe.c
@@ -305,6 +305,7
Patrick McHardy wrote:
New version of the netlink_has_listeners() patch.
Changes:
- Fix missing listeners bitmap update when there was no delta in the
number of subscribed groups
- Use RCU to protect nltable listeners bitmap
-By: Baruch Even [EMAIL PROTECTED]
--
net/ipv4/tcp_htcp.c |1 -
1 file changed, 1 deletion(-)
Index: 2.6-git/net/ipv4/tcp_htcp.c
===
--- 2.6-git.orig/net/ipv4/tcp_htcp.c
+++ 2.6-git/net/ipv4/tcp_htcp.c
@@ -230,7 +230,6 @@ static
Hi,
I'm testing Linux 2.6.16-rc1-git4 on a 500Mbps line with 220ms rtt. I'm
getting a very strange cwnd history and was wondering if anyone noticed
it before and knows why it happens. A graph is attached and you can find
a resizable version at http://hamilton.ie/person/baruch/linet/
The changes
Hello,
I wanted to post an update about my work for SACK performance
improvements, I've updated the patches on our website and added a
technical report on the work so far.
It can be found at:
http://hamilton.ie/net/research.htm#patches
In summary: The Linux stack so far is unable to effectively
David S. Miller wrote:
From: Stephen Hemminger [EMAIL PROTECTED]
Date: Mon, 12 Dec 2005 12:03:22 -0800
-d32 = d32 / HZ;
-
/* (wmax-cwnd) * (srtt3 / HZ) / c * 2^(3*bictcp_HZ) */
-d64 = (d64 * dist * d32) >> (count+3-BICTCP_HZ);
-
-/* cubic root */
-d64
David S. Miller wrote:
From: Wael Noureddine [EMAIL PROTECTED]
Date: Sun, 21 Aug 2005 00:54:51 -0700
You could also tweak the LRO timeout in a similar fashion based upon
traffic patterns as well. In fact, extremely sophisticated things can
be done here to deal with the LRO timing as seen on
David S. Miller wrote:
From: Wael Noureddine [EMAIL PROTECTED]
Date: Sun, 21 Aug 2005 00:17:17 -0700
How do you intend on avoiding huge stretch ACKs?
The implication is that stretch ACKs are bad, which is wrong.
Oh yes, that's right, you're the same person who earlier in this
thread