Re: [PATCH] Disable TSO for non standard qdiscs
Rick Jones wrote:

> then the qdisc could/should place a cap on the size of a 'TSO' based on the bitrate (and perhaps input as to how much time any one burst of data should be allowed to consume on the network) and pass that up the stack?

Right now you seem to be proposing what is effectively a cap of 1 MSS. I don't have any gig NICs to test with, so this is not a rhetorical question: how does TCP congestion control behave when a tc qdisc drops a big unsegmented TSO skb?

Andy.

--
To unsubscribe from this list: send the line "unsubscribe netdev" in the body of a message to [EMAIL PROTECTED]
More majordomo info at http://vger.kernel.org/majordomo-info.html
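To make the quoted suggestion concrete, here is a rough sketch (in Python, with made-up numbers - nothing like this exists in the stack) of how a qdisc might derive a TSO cap from its configured bitrate and a per-burst time budget:

```python
def tso_cap(rate_bps, burst_us, mss=1460):
    """Largest TSO payload, rounded down to whole MSS-sized
    segments, that serializes within burst_us at rate_bps."""
    budget_bytes = rate_bps * burst_us // (8 * 1_000_000)
    segments = max(1, budget_bytes // mss)  # never cap below one MSS
    return segments * mss

# At 1 Gbit/s with a 500 us budget, 62500 bytes fit: 42 segments.
print(tso_cap(1_000_000_000, 500))   # 61320
# At 128 kbit/s the same budget caps at a single MSS.
print(tso_cap(128_000, 500))         # 1460
```

The point of the time-budget parameter is that the cap scales with the rate, instead of being a fixed 1 MSS for every non-standard qdisc.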
Re: Request to include ESFQ patch
Denys Fedoryshchenko wrote:

> Hi. I took the risk and installed ESFQ on my main backbone QoS. I found it highly useful, and very much needed in setups where more than 128 flows are passing, and especially where NAT is in use. I agree it would be good to have it in. Here are results with an overloaded class for low-priority P2P traffic customers:

SFQ was never meant for interactive traffic as such. If you really want to do QoS for them you would need to (somehow) classify interactive traffic and give it priority over bulk. I know this may not be practical for your setup, but the ping times users get will vary depending on how many other active users there are, the queue length, how many TCPs each user has open, etc.

Andy.
Re: [PATCH 4/6] [IPROUTE2]: Overhead calculation is now done in the kernel
Jesper Dangaard Brouer wrote:

> commit 07a74a2613440fc1a68d0faa7235ed7027532d78
> Author: Jesper Dangaard Brouer [EMAIL PROTECTED]
> Date: Tue Sep 11 16:59:58 2007 +0200
>
> [IPROUTE2]: Overhead calculation is now done in the kernel.
>
> The only current user is HTB. The HTB overhead argument is now passed on to the kernel (in the struct tc_ratespec). Also correct the data types.

Thanks for getting this in. It would be cool if mpu/overhead could be set per class 255 and they would affect the way HTB shares bandwidth. I could be wrong, but it doesn't look like this will change current behavior. Perhaps just allowing mpu/overhead 255 for now, so that HTB sharing could be fixed up in the future?

The use would be for ingress shaping: you could set a big mpu for an interactive class and it would cause bulk classes to get far less bandwidth than otherwise, so you wouldn't permanently have to sacrifice so much bandwidth on a slow link for latency - just when you needed to. It could also, with the aid of netfilter connbytes, be used to preempt the remote buffer filling when new bulk flows start.

One more thing: IIRC Devik acked your/Russell's patch to change the HYSTERESIS define to 0 - any chance of resubmitting?

Thanks

Andy.
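For reference, the way mpu and overhead combine is simple: add the fixed per-packet link-layer overhead, then clamp to the minimum billable size. A sketch (the function name is mine, not the kernel's):

```python
def effective_len(pkt_len, overhead=0, mpu=0):
    """Length a shaper should charge for one packet: fixed
    link-layer overhead added, then clamped to the minimum
    billable packet size (mpu)."""
    return max(pkt_len + overhead, mpu)

# A 40-byte ACK with 24 bytes of framing and a 60-byte minimum
# is charged as 64 bytes, not 40.
print(effective_len(40, overhead=24, mpu=60))   # 64
```

Charging small packets at their true link cost like this is exactly what would let a big per-class mpu skew HTB's sharing toward an interactive class.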
Re: [RFC NET_SCHED 00/02]: Flexible SFQ flow classification
Patrick McHardy wrote:

> One good thing about ESFQ is the more flexible flow classification, but I don't like the concept of having a set of selectable hash functions very much. These patches change SFQ to allow attaching external classifiers, and add a new flow classifier that allows classifying flows based on an arbitrary combination of pre-defined keys. It's probably not the fastest classifier when used with multiple keys, but frankly, I don't think speed is very important in most situations where the current SFQ implementation is used. It currently does not support perturbation; I didn't want to move this into the classifier, so I need to think about a way to handle it within SFQ.

Cool, but isn't this going to show the same collision problems that the pre-jhash ESFQ saw?

Andy.
Re: [RFC NET_SCHED 00/02]: Flexible SFQ flow classification
Patrick McHardy wrote:

> My classifier uses jhash,

Ah, that's OK - I thought it still used the old SFQ hash, which collided a lot with a low number of consecutive addresses.

Andy.
Re: RFC: remove NET_CLS_POLICE?
Adrian Bunk wrote:

> Considering that NET_CLS_POLICE has been marked as obsolete for more than one year, would a patch to remove it be acceptable?
>
> cu
> Adrian

People still request the 2.4 policer on LARTC because, unlike its replacement, it hooks after prerouting/de-NAT, so they can police on local addresses. There is no other way apart from IMQ, though ISTR mention of a meta match one day (for ifb).

Andy.
Can't turn off CONFIG_NET_ESTIMATOR on 2.6.17.7
I recently built a 2.6.17.7 kernel and wanted to turn off CONFIG_NET_ESTIMATOR, but can't using menuconfig. Is it on by default now, or is it a config issue? I wanted it off to play with chains of policers; unless I misunderstand, it uses HZ, and is inaccurate when HZ=250, with its minimum interval of 1/4 sec - which is too high for what I wanted anyway.

Andy.
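As a rough illustration of why HZ matters for anything timer-driven like estimation or policing (rates here are just examples):

```python
def bytes_per_jiffy(rate_bps, hz):
    """Bytes that accumulate between timer ticks at a given HZ;
    this is the granularity a jiffy-based estimator or policer
    actually sees."""
    return rate_bps / 8 / hz

# At HZ=250 a tick is 4 ms: at 100 Mbit/s that is 50000 bytes of
# slack per tick, versus 12500 bytes at HZ=1000.
print(bytes_per_jiffy(100_000_000, 250))   # 50000.0
print(bytes_per_jiffy(100_000_000, 1000))  # 12500.0
```

Anything that needs finer resolution than a jiffy (or than the estimator's 1/4-second minimum window) has to use a better clock source.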
Re: [PATCH 0/2] NET: Accurate packet scheduling for ATM/ADSL
Russell Stuart wrote:

> On Tue, 2006-07-18 at 22:46 +0100, Andy Furniss wrote:
>> FWIW I think it may be possible to do it Patrick's way, as if I read it properly he will end up with the ATM cell train length, which gets shifted by cell_log and looked up as before. The ATM length will be in steps of 53, so with cell_log 3 or 4 I think there will be no collisions - so special rate tables for ATM can still be made perfect.
>
> Patrick is proposing that the packet lengths be sent to the kernel in a similar way to how transmission times (ie RTAB) are sent now. I agree that is how things should be done - but it doesn't have much to do with the ATM patch, other than that he has allowed for ATM in the way he does the calculation in the kernel [1]. In particular:
>
> - As it stands, it doesn't help the qdiscs that use RTAB. So unless he proposes to remove RTAB entirely, the ATM patch will still have to go in.

Hmm - I was just looking at the kernel changes to HTB. The only difference is the len - I am blindly assuming that it does/will return the link lengths properly for ATM. So for ATM, qdisc_tx_len(skb) will always return lengths that are multiples of 53. If nothing else were done we would suffer inaccuracy from the cell_log, just like eth. But no other kernel hack would be needed to do it perfectly - rather like we (who patch for ATM already) just fill the tc-generated rate table with what we like; that would be an option.

> - A bit of effort was put into making this current ATM patch both backwards and forwards compatible. Patrick's patch would work with newer kernels, obviously. Older kernels, and in particular the kernel that Debian Etch is likely to distribute, would miss out.
>
> If Patrick did intend to remove RTAB entirely then he needs to add a fair bit more to his patch. Since RTAB is just STAB scaled, it's certainly possible. The kernel will have to do a shift and a division for each packet, which I assume is permissible.

I guess that is for others to decide :-) I think Patrick has a point about sfq/htb drr. Like you, I guess, I thought that a lot of extra per-packet calculations would have got an instant NO.

>> As you say, I think mpu should be added as well - so eth/other can benefit.
>
> Not really. The MPU is reflected in the STAB table, just as it is for RTAB.

OK, I was thinking of what Jamal said about helping others, so everything in tc should be capable of accepting mpu and overhead with these patches - or is more work needed? It will be good to be able to say

tc ... police rate 500kbit mpu 60 overhead 24 ...

for eth (assuming eth mpu/overhead are really 46/38 - p in mpu is payload AIUI, so 60 and 24 come from allowing for skb->len being IP+14), or for ATM + PPPoA something like

tc ... police rate 500kbit overhead 10 atm ...

In the case of eth someone already added mpu/overhead for HTB and it doesn't need any extra per-packet calcs. I guess this way it would.

> One other point - the optimisation Patrick proposes for STAB (over RTAB) was to make the number of entries variable. This seems like a good idea. However there is no such thing as a free lunch, and if you did indeed reduce the number of entries to 16 for Ethernet (as I think Patrick suggested), then each entry would cover 1500/16 = 93 different packet lengths. Ie, entry 0 would cover packet lengths 0..93, entry 1 94..186, and so on. A single entry can't be right for all those packet lengths, so again we are back to an average 30% error for typical VOIP-length packets.

I agree less accuracy will not be nice. But as an option it could be the only way you can do 1/10Gig + jumbo frames.

Andy.
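Russell's 16-entry arithmetic can be checked directly. A sketch, assuming each table slot charges the top of its bucket (the exact rounding is an implementation choice, not something from the patch):

```python
def stab_charged_len(pkt_len, mtu=1500, entries=16):
    """Length a coarse size table would charge: each entry covers
    mtu // entries consecutive packet lengths, and the whole bucket
    is charged at its top length."""
    bucket = mtu // entries               # 93 lengths per entry
    index = (pkt_len - 1) // bucket
    return (index + 1) * bucket

# A 100-byte VOIP-length packet lands in the second bucket and is
# charged as 186 bytes - an 86% overcharge with only 16 entries.
charged = stab_charged_len(100)
print(charged, f"{(charged - 100) / 100:.0%}")
```

With the full 256 entries the bucket shrinks to a few bytes and the error becomes negligible, which is the accuracy/size trade-off being argued about.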
Re: [IPROUTE2]: update documentation on mirred and IFB
jamal wrote:

> About two more or so to complete these..
>
> cheers,
> jamal

+tc qdisc add dev lo eth0 ?

Andy.
Re: [PATCH 0/2] NET: Accurate packet scheduling for ATM/ADSL
Russell Stuart wrote:

> On Sat, 2006-06-24 at 10:13 -0400, jamal wrote:
>> And yes, I was arguing that the tc scheme you describe would not be so bad either if the cost of making a generic change is expensive. [snip] Patrick seems to have a simple way to compensate generically for link layer fragmentation, so i will not argue the practicality; hopefully that settles it? ;-)
>
> Things seem to have died down. Patrick's patch seemed unrelated to ATM to me. I did put up another suggestion, but I don't think anybody was too impressed with the idea. So that leaves the current ATM patch as the only one we have on the table that addresses the ATM issue.

FWIW I think it may be possible to do it Patrick's way, as if I read it properly he will end up with the ATM cell train length, which gets shifted by cell_log and looked up as before. The ATM length will be in steps of 53, so with cell_log 3 or 4 I think there will be no collisions - so special rate tables for ATM can still be made perfect.

As you say, I think mpu should be added as well - so eth/other can benefit.

Andy.
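The no-collision claim is easy to check mechanically. A sketch of the RTAB-style lookup (names are mine): since ATM lengths are multiples of 53, every cell count maps to a distinct slot with cell_log 3 or 4.

```python
def rtab_index(atm_len, cell_log):
    """RTAB-style lookup: the kernel shifts the length right by
    cell_log to pick a rate-table slot."""
    return atm_len >> cell_log

# ATM wire lengths are multiples of 53 bytes (one cell). Check that
# with cell_log 3 and 4 no two cell counts share a table slot.
for cell_log in (3, 4):
    slots = [rtab_index(53 * cells, cell_log) for cells in range(1, 32)]
    print(cell_log, len(slots) == len(set(slots)))  # True: no collisions
```

Because consecutive lengths differ by 53 bytes and the shift divides by at most 16, the indices always advance by at least 3, so a rate table built for ATM can stay exact.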
Re: [PATCH 0/2] NET: Accurate packet scheduling for ATM/ADSL
jamal wrote:

> I have taken linux-kernel off the list. Russell's site is inaccessible to me (I actually think this is related to some DNS issues I may be having) and your masters is too long to spend 2 minutes and glean it; so here's a question or two for you:
>
> - Have you tried to do a long-lived session such as a large FTP and seen how far off the deviation was? That would provide some interesting data points.
>
> - To be a devil's advocate (and not claim there is no issue), where do you draw the line with overhead?

Me and many others have run a similar hack for years; there is also a userspace project, still alive, which does the same. The difference is that without it I would need to sacrifice almost half my 288kbit ATM/DSL showtime bandwidth to be sure of control. With the modification I can run at 286kbit out of 288 and know I will never have jitter worse than the bitrate latency of an MTU packet. The 286 figure was chosen to allow a full buffer to drain / allow for timer inaccuracy etc. On a P200 with TSC, 2.6.12, it's never gone over for me - though talking of timers, I notice on my desktop 2.6.16 I gain 2 minutes a day now.

Andy.
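The jitter bound mentioned is just the wire time of one MTU packet after ATM cell padding. A sketch, assuming 10 bytes of AAL5/PPPoA-style per-packet overhead (my guess; setups vary):

```python
import math

def atm_wire_len(ip_len, overhead=10):
    """Bytes actually sent on the wire for one packet over AAL5/ATM:
    payload plus per-packet overhead, padded up to whole 48-byte
    cell payloads, each carried in a 53-byte cell."""
    cells = math.ceil((ip_len + overhead) / 48)
    return cells * 53

def serialization_ms(ip_len, rate_bps):
    return atm_wire_len(ip_len) * 8 * 1000 / rate_bps

# Worst-case delay behind one 1500-byte MTU packet at 288 kbit/s:
print(f"{serialization_ms(1500, 288_000):.0f} ms")   # ~47 ms
```

So on a 288kbit link, interactive traffic queued behind a single full-size packet waits roughly 47 ms, and getting the per-packet overhead wrong by even a few cells per packet is what forces the "sacrifice half the bandwidth" safety margin.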
Controlling TCP window size
I've been doing some testing of my new WAN connection and noticed that when I specify a window with ip route, it still changes after a while. Using 2.6.16.11 and the latest iproute2.

ip ro del default
ip ro add default via 192.168.0.1 window 28000
ip ro ls
192.168.0.0/24 dev eth0 proto kernel scope link src 192.168.0.3
default via 192.168.0.1 dev eth0 window 28000

Timestamps and window scaling are off, using Reno. Downloading a 5 meg file I advertise the correct window - the largest whole multiple of my MSS below 28000 = 27322. After about 1300 lines of the dump the window creeps back up to 32767.

001959 IP 81.31.115.186.80 > 192.168.0.3.42305: . 1051979:1053417(1438) ack 123 win 57520
11 IP 192.168.0.3.42305 > 81.31.115.186.80: . ack 1053417 win 27322
002030 IP 81.31.115.186.80 > 192.168.0.3.42305: . 1053417:1054855(1438) ack 123 win 57520
001962 IP 81.31.115.186.80 > 192.168.0.3.42305: . 1054855:1056293(1438) ack 123 win 57520
11 IP 192.168.0.3.42305 > 81.31.115.186.80: . ack 1056293 win 27322
003020 IP 81.31.115.186.80 > 192.168.0.3.42305: . 1056293:1057731(1438) ack 123 win 57520
001968 IP 81.31.115.186.80 > 192.168.0.3.42305: . 1057731:1059169(1438) ack 123 win 57520
12 IP 192.168.0.3.42305 > 81.31.115.186.80: . ack 1059169 win 30198
002024 IP 81.31.115.186.80 > 192.168.0.3.42305: . 1059169:1060607(1438) ack 123 win 57520
12 IP 192.168.0.3.42305 > 81.31.115.186.80: . ack 1060607 win 32767
001980 IP 81.31.115.186.80 > 192.168.0.3.42305: . 1060607:1062045(1438) ack 123 win 57520
11 IP 192.168.0.3.42305 > 81.31.115.186.80: . ack 1062045 win 32767

Andy.
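The initially advertised value follows from rounding the route's window down to a whole number of MSS-sized segments - a one-line sketch:

```python
def advertised_window(route_window, mss):
    """Largest whole multiple of the MSS not exceeding the window
    configured on the route."""
    return (route_window // mss) * mss

# With `ip route ... window 28000` and an MSS of 1438 (as in the
# dump above), the receiver advertises 19 * 1438 = 27322.
print(advertised_window(28000, 1438))   # 27322
```

The puzzle in the dump is not this rounding but why the window later grows past the configured 28000 to 32767.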
Re: Window shrinking (was Linux v2.6.16-rc6)
Roberto Nibali wrote:

> I had the distinct pleasure of partly getting involved with debugging network stalls related to Linux clients (2.6.x kernel) and a Packeteer.

Dare I suggest that it could be something as trivial as this: it looks like window scaling defaults to off on SunOS 2.5.1 and it's on in Linux - maybe the Packeteer can't handle it properly. It would be an easy test to turn it off on the SuSE boxes.

Andy.
Re: Window shrinking (was Linux v2.6.16-rc6)
Mark Butler wrote:

> There is no problem manipulating the TCP window per se. The problem is advertising a window and then shrinking it faster than it is naturally reduced by incoming data - essentially granting credit to transmit x bytes, and then revoking that credit. The net result is that the peer transmits data the advertiser said it was going to accept, and then the advertiser drops it on the floor. RFC 793 only has a SHOULD NOT for this practice, but it is universally condemned nonetheless.
>
> - Mark B.

Thanks Mark. I guess Packeteer closes the window down properly; I thought Dave's reply meant that doing that was Treason.

Andy.
Re: Window shrinking (was Linux v2.6.16-rc6)
Mark Butler wrote:

> Andy Furniss wrote:
>> Mark Butler wrote:
>>> There is no problem manipulating the TCP window per se. The problem is advertising a window and then shrinking it faster than it is naturally reduced by incoming data - essentially granting credit to transmit x bytes, and then revoking that credit. The net result is that the peer transmits data the advertiser said it was going to accept, and then the advertiser drops it on the floor. RFC 793 only has a SHOULD NOT for this practice, but it is universally condemned nonetheless.
>>
>> Thanks Mark. I guess Packeteer closes the window down properly; I thought Dave's reply meant that doing that was Treason.
>
> Packeteer is almost certainly being cavalier about the way it reduces windows. It could be a serious problem, depending on the way it treats traffic on the return path. The treason thing is a joke. It is like a bank extending you a credit line one day, and revoking it the next.

I don't use, or know of anyone who uses, Packeteer - or have you tested? My post was just a suggestion, because I think that it uses advertised-window manipulation. I misunderstood DaveM's reply - "it is illegally advertising a smaller window than it previously did" - to mean that it was illegal to close down a window at all. You cleared that up - ie it is legal if you close it by no more than the amount of data that has just been acked. I assume this won't cause the Treason message, so I don't really understand why it is cavalier - or do you just mean the whole idea of window manipulation to shape may be dodgy but legal?

Andy.
Re: Linux v2.6.16-rc6
David S. Miller wrote:

> From: Michal Piotrowski [EMAIL PROTECTED]
> Date: Sun, 12 Mar 2006 02:51:40 +0100
>
>> I have noticed these warnings:
>>
>> TCP: Treason uncloaked! Peer 82.113.55.2:11759/50967 shrinks window 148470938:148470943. Repaired.
>> TCP: Treason uncloaked! Peer 82.113.55.2:11759/50967 shrinks window 148470938:148470943. Repaired.
>> TCP: Treason uncloaked! Peer 82.113.55.2:11759/59768 shrinks window 1124211698:1124211703. Repaired.
>> TCP: Treason uncloaked! Peer 82.113.55.2:11759/59768 shrinks window 1124211698:1124211703. Repaired.
>>
>> It may be a problem with ktorrent.
>
> It is a problem with the remote TCP implementation; it is illegally advertising a smaller window than it previously did.

Packeteer manipulates the window for shaping. I probably misread, or read the wrong RFC on this, but I thought it didn't break any MUST NOTs. I assume Linux + SFQ reordering packets during window growth would not trigger it.

Andy.
Re: [PKT_SCHED]: Change default clock source to gettimeofday
Patrick McHardy wrote:

>> tc qdisc add dev ppp0 handle 1:0 root htb
>> tc class add dev ppp0 classid 1:1 htb rate 220kbit
>> tc filter add dev ppp0 protocol ip u32 match u32 0 0 classid 1:1
>>
>> gives a 3 pkt queue. I think each class gets 3 if there are more classes.
>
> That is because you use it on a ppp device and the default txqueuelen is 3. I guess it would make sense to increase the queue len of the default queues with HTB (and HFSC), I'm just not sure what value would be appropriate. txqueuelen * 10? That would be quite a lot on ethernet devices. Ideally it should depend on the fraction of the reserved bandwidth ..

3 pkts being the default is the point - I guess no one complains because there is normally another huge buffer behind it. VLANs IIRC have 0 as default, and maybe other tunnel-type interfaces - I haven't used them. If HTB bothers to fix these then I think it should just fix them to something that actually works - maybe 16 or 32 like IMQ/IFB. I agree it would be nice to work on rates. It would be nice if htb/hfsc had a variable master rate and the ability to have a fixed qlen independent of the number of classes :-)

Referring to the above, has anyone ever read anything about ingress shaping on a variable-speed link? My DSL is currently 2mbit - 600kbit due to exchange contention.

With reference to hysteresis 0 - could one of you guys ask Devik (thinking that it may get read and replied to quicker than if I do it)?

Andy.
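As a rough way to judge "something that actually works", a queue length can be translated into worst-case drain time at the shaped rate (illustrative numbers, not anything from the patch):

```python
def drain_ms(qlen_pkts, rate_bps, pkt_bytes=1500):
    """Worst-case time to drain a FIFO of qlen full-size packets at
    the shaped rate - a rough guide for picking a queue length."""
    return qlen_pkts * pkt_bytes * 8 * 1000 / rate_bps

# A 3-packet queue at 220 kbit/s drains in ~164 ms (so TCP easily
# runs it dry), while 32 packets at 100 Mbit/s is under 4 ms.
print(f"{drain_ms(3, 220_000):.0f} {drain_ms(32, 100_000_000):.2f}")
```

This is why a single default (like 3, or like txqueuelen * 10) can't suit both ppp and ethernet: a sensible length really does depend on the rate.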
Re: [PKT_SCHED]: Change default clock source to gettimeofday
Patrick McHardy wrote:

> Andy Furniss wrote:
>> What do you think about making HTB hysteresis 0 and possibly setting HZ to 1000 when HTB is selected?
>
> I'm not qualified to judge about HTB hysteresis, but we can't change HZ; most people just enable everything, which would mean basically everybody would be using HZ=1000 again.

Fair enough - maybe a note about HZ and hysteresis in the HTB help?

>> By default now I think jitter on HTB is 8ms; on 100mbit it's 1ms with the tweaks (depending on there being 1 class). Irrespective of HZ, at low speed - say 128kbit - the default setting of HYSTERESIS 1 means that bulk packets are sent in pairs, unnecessarily adding 90ms to jitter.
>>
>> HTB default qlen is that of the device, or 3 if it is less. Maybe it should be increased to 30? 3 is pretty useless for throughput, but means that people who don't specify/get default lengths on sub queues get a half-working setup.
>
> That only affects the direct queue for unclassified packets, doesn't it?

tc qdisc add dev ppp0 handle 1:0 root htb
tc class add dev ppp0 classid 1:1 htb rate 220kbit
tc filter add dev ppp0 protocol ip u32 match u32 0 0 classid 1:1

gives a 3 pkt queue. I think each class gets 3 if there are more classes.

Andy.
Re: [PKT_SCHED]: Change default clock source to gettimeofday
Patrick McHardy wrote:

> Sorry for the repost Dave, I accidentally used the @oss.sgi.com address in my previous posting.
>
> - It seems to be a common mistake to use jiffies as clocksource, which gives very bad results in most cases. This patch changes the default to gettimeofday.

What do you think about making HTB hysteresis 0 and possibly setting HZ to 1000 when HTB is selected? By default now I think jitter on HTB is 8ms; on 100mbit it's 1ms with the tweaks (depending on there being 1 class). Irrespective of HZ, at low speed - say 128kbit - the default setting of HYSTERESIS 1 means that bulk packets are sent in pairs, unnecessarily adding 90ms to jitter.

HTB default qlen is that of the device, or 3 if it is less. Maybe it should be increased to 30? 3 is pretty useless for throughput, but means that people who don't specify/get default lengths on sub queues get a half-working setup.

Andy.
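The ~90 ms figure is just one extra full-size packet's serialization time at 128 kbit: when hysteresis lets bulk packets go out in pairs, interactive traffic can end up waiting behind two MTU packets instead of one.

```python
def pair_jitter_ms(rate_bps, pkt_bytes=1500):
    """Extra delay added when a shaper releases bulk packets in
    pairs: one additional full-size packet's serialization time."""
    return pkt_bytes * 8 * 1000 / rate_bps

# One extra 1500-byte packet at 128 kbit/s:
print(f"{pair_jitter_ms(128_000):.2f} ms")   # 93.75 ms
```

At 100 Mbit the same extra packet costs only 0.12 ms, which is why the hysteresis default hurts slow links far more than fast ones.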
Re: PATCH: Dummy as IMQ replacement
Andy Furniss wrote:

> I'll get everything up to date and try the new device tomorrow.

Just tried the example script with ifb on 2.6.15 and it works OK; will try more complicated tests with iptables soon.

Andy.
Re: PATCH: Dummy as IMQ replacement
jamal wrote:

> On Fri, 2005-30-12 at 09:30 -0500, jamal wrote:
>
> Ok, attached is what i tried - the script is not very realistic but demos that shaping will happen in the real device.

Ah yes - I should have tried with the default class rather than classifying. This behavior is fine and works OK with the old dummy. I'll get everything up to date and try the new device tomorrow.

Andy.
Re: PATCH: Dummy as IMQ replacement
jamal wrote:

> On Thu, 2005-29-12 at 17:02 +, Andy Furniss wrote:
>> Could you make it so that you can double-queue the traffic? I couldn't do this with dummy/htb (but didn't try that hard). I've read of IMQ being used like this to enforce link-wide shaping policy above per-user shaping. Another reason would be if you needed to include the redirected traffic in different shaping on the nic.
>
> I didn't quite understand the feature. Could you give me an illustration?

I mean it would be handy to be able to send a packet to dummy to be shaped, then, when that packet returns, classify/shape it again on the real interface. When I tried with an htb on eth0 and a redirect action to dummy, the packets sent to dummy could not be shaped by the htb on eth0 - they go through as direct packets. It may be possible as-is, but I couldn't get it to work.

Andy.
Re: RED qdisc not working...
Daniel J Blueman wrote:

> Has anyone been able to get the RED (random early detection) qdisc working lately? I can't get anything going through it to be dropped or marked; the 'marked', 'early', 'pdrop' and 'other' fields remain at 0 [1]. In my example script [2], I get the 3072Kbit/s transfer into eth0, which you'd expect if the RED qdisc wasn't there.
>
> I have tried with a recent 2.6.12 Debian kernel and stock 2.6.14 on x86_64 Debian. I built new iproute and iptables packages from the latest clean upstream sources, but to no avail. Any ideas?
>
> Please CC me on replies, as I am not subscribed.
>
> Dan
>
> ---
> [1]
> # tc -s qdisc show dev eth0
> qdisc htb 1: r2q 10 default 10 direct_packets_stat 0
>  Sent 53985530 bytes 36757 pkts (dropped 0, overlimits 45125)
> qdisc red 10: parent 1:10 limit 512Kb min 64Kb max 128Kb
>  Sent 53985530 bytes 36757 pkts (dropped 0, overlimits 0)
>  marked 0 early 0 pdrop 0 other 0
>
> ---
> [2]
> tc qdisc del dev eth0 root
> tc qdisc add dev eth0 root handle 1: htb default 10
> tc class add dev eth0 parent 1: classid 1:1 htb rate 4096kbit ceil 4096kbit
> tc class add dev eth0 parent 1:1 classid 1:10 htb rate 3072kbit ceil 3072kbit
> tc qdisc add dev eth0 parent 1:10 handle 10: red \
>     limit 4096kbit min 512kbit max 1024kbit avpkt 1000 \
>     burst 100 probability 0.02 bandwidth 1024kbit

You need to test with several TCP connections; one will not have a big enough rwin to fill the queue enough to reach the buffer thresholds - which, for clarity, I would specify in kb, not kbit.

Andy.
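A crude sanity check of that explanation (ignoring the bandwidth-delay product, so this is only an upper-bound view of what a flow can keep queued):

```python
def flows_to_reach_min(min_th_bytes, rwin_bytes):
    """TCP flows needed before their combined in-flight data could
    keep the queue at RED's minimum drop threshold, treating each
    flow's receive window as the most it can ever have queued."""
    return -(-min_th_bytes // rwin_bytes)  # ceiling division

# With min at 64 KB (as tc reported above) and a ~32 KB receive
# window, a single flow can never reach the threshold.
print(flows_to_reach_min(64 * 1024, 32 * 1024))   # 2
```

In practice much of each window is in flight on the wire rather than queued, so even more flows than this bound suggests are needed before RED starts marking or dropping.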
Re: ping behaviour
Andy Furniss wrote:

> Hmm - I did some testing on old kernels; though I know I sorted it to 2ms, I haven't got that kernel any more, and setting a vanilla 2.4.26 to HZ 500 made the forwarded traffic cycle with 5ms, not 2ms. Local doesn't cycle but isn't constant, and gives 5ms variance.

Running the same kernel at HZ 1000 now gives a 10 ms step, so as you said, I shall forget about HZ :-)

Andy.
Re: ping behaviour
Andy Furniss wrote:

>> You are comparing two completely different -systems-. Please stop being focused on HZ, ok? HZ does not rule the priority of the in-kernel tasks.
>
> OK - it was just that changing HZ sorted it on 2.4.

Hmm - I did some testing on old kernels; though I know I sorted it to 2ms, I haven't got that kernel any more, and setting a vanilla 2.4.26 to HZ 500 made the forwarded traffic cycle with 5ms, not 2ms. Local doesn't cycle but isn't constant, and gives 5ms variance.

I'll have to revisit that - thinking about it, that could be because the test pings from the box with the modem stayed in sync (and I got +0 to +10ms steady, depending on luck), while the timer on the LAN PC drifted and caused the pattern like I posted.

I also still have two non-vanilla kernels - patched with a big patch which I used for the QoS stuff, but which also contained preempt and a different HZ patch. Being always low on disk space, I have deleted the trees and can't see the .configs, so I don't know what HZ they use; I'll have to see, now I've thought of a way to test that. The behavior of local traffic on these: 2.4.24 is always +10ms; 2.4.21 is stable - whatever the first ping is, the rest will be, the first ping time being between baseline +0ms and +10ms. Forwarded traffic cycles with the 10ms snap up/down - but this is only from Linux; I tried pings from Win98 and I get random +0 to +10ms.

Andy.