Yes. Unshaped it is ~20mbit; the tests were run with cake shaping at 80%.
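As a point of reference, shaping to 80% of a ~20 Mbit line with cake could look like the sketch below. The interface name and exact rates are assumptions, not from the thread, and the tc line is left commented since it needs root:

```shell
# Sketch only: device name is a placeholder, not from the thread.
DEV=eth0              # hypothetical WAN-facing interface
LINE_MBIT=20          # unshaped downstream measured above
PCT=80                # shape to 80% of line rate
SHAPED=$((LINE_MBIT * PCT / 100))
echo "shaping to ${SHAPED}Mbit"   # -> shaping to 16Mbit
# Requires root; for inbound (download) shaping, cake is usually attached
# via an ifb device or ingress hook rather than the root qdisc:
# tc qdisc replace dev $DEV root cake bandwidth ${SHAPED}Mbit
```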
On Sat, Jul 21, 2018, 2:20 PM Dave Taht wrote:
> hmm? you only have 15mbits down?
> On Sat, Jul 21, 2018 at 11:18 AM Georgios Amanakis
> wrote:
> >
> > On Sat, 2018-07-21 at 10:47 -0700, Dave Taht wrote:
> > > for reference can you do a download and capture against flent-newark,
> > > while using the ping test?
On Mon, 2018-07-23 at 19:36 -0700, Dave Taht wrote:
> George does your result mean you also have a crappy cablemodem?
>
Yes, I think so. It's a Linksys DPC3008 DOCSIS 3.0. Also, I cannot get
it to behave any differently with hping3 as Arie suggested.
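The thread does not show Arie's exact hping3 suggestion; a plausible probe of this kind — SYN packets at a fixed interval, so per-packet response times can be observed — might look like the following, with the target host and port as placeholders:

```shell
# Hypothetical probe; needs root. -S = send SYN packets, -p = destination
# port, -i u20000 = one packet every 20000 microseconds (20 ms).
hping3 -S -p 80 -i u20000 example.net
```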
> On Jul 21, 2018, at 6:09 PM, Dave Taht wrote:
>
> PS I also have two other issues going on. This is the first time I've
> been using irtt with a 20ms interval, and I regularly see single 50+ms
> spikes (in both ping and irtt) data and also see irtt stop
> transmitting.
irtt should keep
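For reference, an irtt run with the 20 ms interval described above might be invoked as follows; the server address is a placeholder and assumes an irtt server is already listening there:

```shell
# 20 ms send interval for 30 seconds against a hypothetical irtt server.
irtt client -i 20ms -d 30s irtt.example.net
```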
George does your result mean you also have a crappy cablemodem?
On Sat, Jul 21, 2018 at 10:20 AM Georgios Amanakis wrote:
>
> On Sat, 2018-07-21 at 09:09 -0700, Dave Taht wrote:
> >
> > 1) Can someone else on a cablemodem (even without the latest cake,
> > this happens to me on older cake and
I believe that cable modems all default to 192.168.100.1; this seems to be
backed by the "Cable Modem Operations Support System Interface Specification",
CM-SP-CM-OSSIv3.1-I04-150611:
" • The CM MUST support 192.168.100.1, as the well-known diagnostic IP
address accessible only from the CMCI."
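A quick way to check whether a given modem honours the well-known diagnostic address from inside the LAN (both are harmless read-only checks; many DOCSIS modems also serve a status page there):

```shell
# Probe the well-known DOCSIS diagnostic address from the CMCI side.
ping -c 3 192.168.100.1
curl -s http://192.168.100.1/ | head -n 5
```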
> On 21 Jul, 2018, at 11:01 pm, Dave Taht wrote:
>
> The cmts buffer fills more rapidly, particularly in slow start, while
> presenting packets to the inbound shaper at 100mbit. cake starts
> signalling, late, trying to achieve its target rate, but at that point
> the apparent RTTs are still growing rapidly.
To summarize:
A) I think the magic 85% figure only applies at lower bandwidths.
B) We are at least partially in a pathological situation where
CMTS = 380ms of buffering, token bucket fifo at 100mbit
Cakebox: AQMing and trying to shape below 85mbit, gradually ramping up
the signalling once per
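To put the summary's numbers in perspective, 380 ms of buffering at a 100 Mbit/s token-bucket rate works out as follows (a back-of-the-envelope calculation, not from the thread):

```shell
RATE_BPS=100000000            # 100 Mbit/s token-bucket FIFO at the CMTS
DELAY_MS=380                  # observed buffering
BYTES=$((RATE_BPS * DELAY_MS / 1000 / 8))
PKTS=$((BYTES / 1500))        # assuming 1500-byte MTU packets
echo "$BYTES bytes, ~$PKTS full-size packets"   # -> 4750000 bytes, ~3166 full-size packets
```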
Dave Taht writes:
> On Sat, Jul 21, 2018 at 10:28 AM Georgios Amanakis
> wrote:
>>
>> The previous one was with:
>> net.ipv4.tcp_congestion_control=cubic
>>
>> I retried with:
>> net.ipv4.tcp_congestion_control=reno
>>
>> Georgios
>
> In the fast test this has no effect on the remote server's tcp, it's
> always going to be reno.
hmm? you only have 15mbits down?
On Sat, Jul 21, 2018 at 11:18 AM Georgios Amanakis wrote:
>
> On Sat, 2018-07-21 at 10:47 -0700, Dave Taht wrote:
> > for reference can you do a download and capture against flent-newark,
> > while using the ping test?
> >
>
> New data, this is what I did:
>
> 1)
for reference can you do a download and capture against flent-newark,
while using the ping test?
On Sat, Jul 21, 2018 at 10:44 AM Georgios Amanakis wrote:
>
> On Sat, 2018-07-21 at 20:23 +0300, Jonathan Morton wrote:
> >
> > I'd like to see a tcptrace of what's going on here. A packet capture
>
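The requested download-plus-capture could be sketched roughly like this; the interface, filenames, and the choice of the tcp_ndown test are assumptions (flent's TCP tests record ping latency alongside the transfer):

```shell
# Capture with a small snaplen while a flent download test runs against
# the flent-newark server; device and filenames are placeholders.
tcpdump -i eth0 -s 100 -w newark-download.pcap &
flent tcp_ndown -H flent-newark.bufferbloat.net -o newark-download.flent.gz
kill %1
```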
On Sat, Jul 21, 2018 at 10:28 AM Georgios Amanakis wrote:
>
> The previous one was with:
> net.ipv4.tcp_congestion_control=cubic
>
> I retried with:
> net.ipv4.tcp_congestion_control=reno
>
> Georgios
In the fast test this has no effect on the remote server's tcp, it's
always going to be reno.
Yours is not as horrific as mine in either case.
Can you provide an unshaped result as well?
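Switching the local TCP congestion control between runs, as Georgios did, is a one-line sysctl (root required); per the point above, it only affects locally-originated connections, not the remote server's sending side:

```shell
# Show the current algorithm, then switch for subsequent connections.
sysctl net.ipv4.tcp_congestion_control
sysctl -w net.ipv4.tcp_congestion_control=reno   # or cubic
```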
On Sat, Jul 21, 2018 at 10:24 AM Arie wrote:
>
> Two more data points. Shaped my connection to 250Mbit out of the advertised
> 250Mbit (my usual setting) and shaped to 200Mbit out of the 250Mbit. This
> On 21 Jul, 2018, at 8:20 pm, Georgios Amanakis wrote:
>
> I got the same result as you. This is using latest cake.
I'd like to see a tcptrace of what's going on here. A packet capture with
snaplen 100 should allow me to generate one.
- Jonathan Morton
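A capture of that shape, and the tcptrace graphs from it, could be produced along these lines; the device and file names are placeholders:

```shell
# snaplen 100 keeps TCP/IP headers but drops most payload, so files stay small.
tcpdump -i eth0 -s 100 -w flow.pcap
# Generate all tcptrace graphs (time-sequence, RTT, throughput) as .xpl files:
tcptrace -G flow.pcap
# View with xplot.org, e.g.: xplot.org a2b_tsg.xpl
```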