Ed Wildgoose wrote:

After a bit of messing about -
The patch wouldn't apply and I couldn't see why. Then I did it by hand and had to move the vars to the top of the function to get it to compile.



Hmm, perhaps it got corrupted because of the change in line endings when I pasted it in on a Windows machine? It's a piece of cake to apply manually. If I can get some PPPoE settings then I will make a more generic patch and stick it on a website.


Can you paste the compile errors and tell me your gcc version, please? I can't see any problems with that code, though. (They will be params passed in later anyway.)

GCC 2.95.3 - I moved the var declarations back to after the

        while ((mtu>>cell_log) > 255)
                cell_log++;
        }

and again I get -

tc_core.c: In function `tc_calc_rtable':
tc_core.c:65: parse error before `int'
tc_core.c:77: `proto_overhead' undeclared (first use in this function)
tc_core.c:77: (Each undeclared identifier is reported only once
tc_core.c:77: for each function it appears in.)
tc_core.c:78: `encaps_data_sz' undeclared (first use in this function)
tc_core.c:78: `encaps_cell_sz' undeclared (first use in this function)
make[1]: *** [tc_core.o] Error 1

It works if I move them to the top of the function - maybe the formatting got messed up (I copied and pasted).
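
For the record, that parse error is just gcc 2.95.3 being strict C89: it refuses a declaration that comes after the first statement in a block (gcc 3.x in gnu99 mode quietly accepts it). Hoisting the declarations to the top of the block, as above, is the right fix. A tiny illustration - the value 10 is arbitrary here, not what the patch uses:

        /* C89 / gcc 2.95.3: a declaration after a statement is a parse error */
        int broken(int mtu)
        {
                mtu += 1;                  /* a statement ...                   */
                int proto_overhead = 10;   /* ... then a declaration - rejected */
                return mtu + proto_overhead;
        }

        /* hoist the declaration and 2.95.3 is happy */
        int fixed(int mtu)
        {
                int proto_overhead = 10;   /* declarations first                */
                mtu += 1;                  /* statements after                  */
                return mtu + proto_overhead;
        }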


I set my uprate to 280kbit in tc (= 286720 bit/s); I am synced at 288000 - as you probably are in the UK, on what BT calls 250/500 and ISPs call 256/512. I left a bit of slack just to let the buffer empty if the odd extra packet slips through. FWIW, maxing the downlink (576000 for me) will probably mess things up - you need to be slower, or you don't get to build up queues and will often be using your ISP's buffer.
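
To spell out the arithmetic (tc's kbit is 1024 bit/s here):

        280 kbit in tc  = 280 * 1024 = 286720 bit/s   (shaped rate)
        line sync rate  =              288000 bit/s
        headroom        =                1280 bit/s   (the slack that lets the buffer empty)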

I've been maxing the uplink with bt for the last couple of hours and it's working fine -



Yes, excellent isn't it! I tested download rates (bearing in mind the difficulty of controlling those) and could get within a sliver of full bandwidth before the ping times rise!

I still think that you need to throttle back on down rates - when it's really full you may find that new/bursty connections mean that you lose control of the queue. Of course, having twice as much bandwidth always helps.




I see a two-stage rise in ping times. First it stays at 30ms, then it rises to 60-90ms, then it queues like crazy. It's an interesting kind of three-step ramp-up. I have a hunch that packets don't arrive smoothly and queuing occurs at the ISP end (once we get near the limit) even though the average rate is below the max rate...? (i.e. from time to time you start to see two packets ahead of you instead of just one)

I am only testing uploads - looking at some more pings, it does appear that they are not quite as random as they were - but apart from the odd double dequeue (in a way I think you can expect this with HTB using quanta/DRR rather than per-byte accounting), the max is right. I suspect this is nothing to do with the ISP/telco end. I could actually do slightly better on my latency, but I am running at 100Hz - and I can tell with pings: they slowly rise then snap down by 10ms. This is nothing to do with tc; I normally run 500 but forgot to set it for this kernel.
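
The 10ms snap-down is just the timer tick at that HZ:

        HZ = 100  ->  tick = 1000 ms / 100 = 10 ms
        HZ = 500  ->  tick = 1000 ms / 500 =  2 ms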




100 packets transmitted, 100 packets received, 0% packet loss
round-trip min/avg/max/stddev = 15.586/44.518/67.984/13.367 ms

It's just as it should be for my MTU.



Hmm, what's your MTU? Those numbers look extremely low for 1500 byte packets (at least if you have a little downlink congestion as well?)



My downlink is clear; I am using a 1478 MTU (so I don't waste half a cell per packet). I just did another hundred to my first hop -


100 packets transmitted, 100 packets received, 0% packet loss
round-trip min/avg/max/stddev = 16.661/43.673/70.841/14.634 ms
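
The arithmetic behind the 1478, for anyone wondering - assuming roughly 10 bytes of PPP/AAL5 overhead per packet (the exact figure depends on your encapsulation):

        1478 + 10 = 1488 bytes = 31 * 48   ->  exactly 31 ATM cells, no padding
        1500 + 10 = 1510 bytes             ->  32 cells, with 26 bytes of padding (the wasted half cell)
        31 cells * 53 bytes * 8 = 13144 bits on the wire  ->  roughly 46 ms at 288000 bit/s sync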

When I get some time later I'll start hitting it with lots of small packets as well.

It's been going for 7 hours now - I started a couple of games, and 60 small packets per second didn't cause any problems.





I have 1 meg downstream with 256 upstream, and I turned on BitTorrent to try and flog the connection a bit. Upstream was maxed out, but downstream was only half full. However, ping times are 20-110ms. I think they ought to be only 20-80ms ish, and I'm trying to work out why there is some excess queuing (1500-ish MTU).

I worked out that the one-packet bit-rate latency is about 44ms for 1478. Your figures look like you may be sending packets in pairs - or have you already done the tweak which fixed this for me?


set HTB_HYSTERESIS 0 in net/sched/sch_htb.c
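
For reference, on the kernels I have looked at it is just a compile-time define near the top of net/sched/sch_htb.c (the surrounding comment wording varies by version) - flip it to 0 and rebuild:

        #define HTB_HYSTERESIS 0   /* default is 1; 0 disables the mode-hysteresis
                                      speedup so packets aren't released in pairs */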


My QoS is based on the (excellent) script from:
http://digriz.org.uk/jdg-qos-script/

Yeah, it's nice - mine is a modified version.


Basically, HTB in both directions. RED on the incoming stream (works nicely). Outgoing classifies into 10 buckets, and ACK + ping are definitely going out ok in the top prio bucket, and the rest is going out in the prio 10 bucket.... But still these high pings... Hmm



I did find that in the worst case I could do better than RED (not much, though), and now I do per-IP classes for bulk, so it's harder to get the right settings with more than one RED queue that may have different bandwidths at any given time. I also reduced the number of classes so each could have a higher rate.


Andy.


I would be interested to hear if anyone has a CBQ-based setup and can tell me whether that patch works for them - or even whether it works properly on the incoming policer.


It looks as though this is an adequate way to tackle the problem. The alternative would be to hook into the enqueue side of the qdisc, calculate a new size value there, and fix the code to refer to this value from then on. It would be quite invasive, though, because it modifies kernel headers. I would need someone who understands the scheduler in more detail to guide me as to whether that is necessary.
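
For anyone who hasn't seen the patch: the table-side approach boils down to padding each packet size up to whole ATM cells before its transmit time is computed. This is only a sketch of the idea, reusing the variable names from the compile errors earlier (the overhead value is illustrative - it depends on PPPoA/PPPoE, LLC or VC-mux):

        #include <stdio.h>

        /* Charge a packet its true cost on an ATM link: add the per-packet
         * encapsulation overhead, round up to whole 48-byte cell payloads,
         * then count 53 bytes per cell on the wire. */
        static unsigned int atm_wire_size(unsigned int size,
                                          unsigned int proto_overhead,
                                          unsigned int encaps_data_sz,  /* 48 */
                                          unsigned int encaps_cell_sz)  /* 53 */
        {
                unsigned int cells = (size + proto_overhead + encaps_data_sz - 1)
                                     / encaps_data_sz;
                return cells * encaps_cell_sz;
        }

        int main(void)
        {
                /* e.g. a 1478-byte packet with ~10 bytes of overhead -> 31 cells */
                unsigned int wire = atm_wire_size(1478, 10, 48, 53);
                printf("1478 bytes -> %u bytes on the wire (%u cells)\n",
                       wire, wire / 53);
                return 0;
        }

In tc_calc_rtable() the same adjustment would be applied to each of the 256 size buckets before tc_calc_xmittime(), so HTB/CBQ see the real ATM cost of a packet rather than its raw IP length.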

Ed W




_______________________________________________
LARTC mailing list / [EMAIL PROTECTED]
http://mailman.ds9a.nl/mailman/listinfo/lartc HOWTO: http://lartc.org/
