In this particular case, I was testing against an eNB on the shelf across the
room. With the configured SF/SSF and channel size I could get 6-7 Mbps
up on speedtest.net, but I was held to 2.x Mbps up on iPerf. I reduced
the size of the dedicated bearer last night, but haven't re-tested yet.
I'll double check as soon as I can.
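When I re-test, it will be something along these lines from a client behind the CPE (the server address here is just a placeholder, and I'm assuming iperf3 on both ends):

  # TCP upload, a few parallel streams, 30 seconds
  iperf3 -c 203.0.113.10 -P 4 -t 30
  # UDP upload at a fixed offered rate, to see where it tops out
  iperf3 -c 203.0.113.10 -u -b 8M -t 30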
------ Original Message ------
From: "Nathan Anderson" <[email protected]>
To: "[email protected]" <[email protected]>; "Adam Moffett"
<[email protected]>
Sent: 2/3/2017 2:08:28 AM
Subject: RE: [Telrad] QCI levels and latency
Well, wait a minute...how much upload speed do you see from the same
client if you remove the dedicated bearer? On a loaded eNB, with an
effectively-uncapped global profile (100x10), I sometimes struggle to
get 3Mbit/s on upload, so I'm not sure how definitive this test is.
Remember that I have no problem with downlink MBR being enforced, just
uplink. (If I target MT bandwidth-test or iperf on the downlink for
the dedicated bearer, it maxes out at 200-ish Kbit/s, as it should. If
an MT upload test or iperf upload test is done from the same client and
all packets have the correct DSCP value set, I am seeing 3-7 megs up,
which is exactly how much I can upload from that same client to that
same eNB with a 10Mbit/s UL AMBR anyway.) And setting any of the DSCP
parameters in the CPE would have absolutely no effect on which bearer
is chosen for downlink traffic anyway: that's strictly controlled by
setting the DSCP field in the IP header on packets before they hit the
EPC.
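For clarity, the uplink comparison I'm describing boils down to something like the following, run from a client behind the UE. The server address is a placeholder, and the TOS value assumes a bearer keyed to DSCP 6, i.e. 6 << 2 = 0x18; substitute whatever DSCP your dedicated bearer actually uses.

  # baseline upload, unmarked -- should be limited only by the UL AMBR
  iperf3 -c 203.0.113.10 -t 30
  # same upload with the TOS byte set so the packets carry the bearer's DSCP;
  # if the UE mapped these onto the dedicated bearer, this should cap near the UL MBR
  iperf3 -c 203.0.113.10 -t 30 -S 0x18

Whether that mark even survives the trip through the CPE is a separate question; see the DATA-override discussion further down.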
So at least in my case, I can quite definitively say that setting the
DSCP value on upload packets is having no effect. It does not seem as
though the UE is transmitting those packets on the dedicated bearer.
-- Nathan
--------------------------------------------------------------------------------
From: [email protected] <[email protected]> on behalf of
Adam Moffett <[email protected]>
Sent: Thursday, February 2, 2017 2:03 PM
To: [email protected]
Subject: Re: [Telrad] QCI levels and latency
When I had it set to 256k both ways for GBR and MBR, I would see
anywhere from 500-700kbps on iperf on the dedicated bearer.
So I created a global service profile at 100M x 30M, and set the
dedicated bearer to 1M x 1M GBR and 6M x 2M MBR.
Now I see big numbers in speedtest.net, but I only get 6.3 x 2.8 on
iPerf.
It must not police the traffic to those exact numbers... but otherwise, I
think it must be working.
-Adam
------ Original Message ------
From: "Adam Moffett" <[email protected]>
To: "[email protected]" <[email protected]>
Sent: 2/2/2017 11:45:12 AM
Subject: Re: [Telrad] QCI levels and latency
My list too.
I'm having the same confusion for the exact same reason.
------ Original Message ------
From: "Nathan Anderson" <[email protected]>
To: "'Adam Moffett'" <[email protected]>; "[email protected]"
<[email protected]>
Sent: 2/2/2017 3:55:24 AM
Subject: RE: [Telrad] QCI levels and latency
I used GBR 100k / MBR 256k, the same as the example in the Telrad
Zendesk KB article. For a single VoIP channel, that should be fine.
...however, I am afraid that either what I wrote earlier is wrong and
misleading, or it is right and should be working but isn't working.
I realized that having uplink traffic sent on the dedicated bearer
with MBR 256k should mean iperfs done from UEs would show max. 256k
upload. However, I'm not seeing this. Now, maybe they thought this
issue through enough to exempt iperf traffic originating from the
UE itself from getting marked with the MGMT DSCP value (although I
have reason to believe this is not the case...it actually looks like
none of the traffic originating from the UE is getting marked!), but
even if that were the case, I tried having a MikroTik that was behind
a UE indiscriminately mark all outgoing traffic with the DSCP I'm
using for my dedicated bearer, and did an upload test with MikroTik
bandwidth-test, and it is only getting limited by the UL AMBR, and
not by the dedicated bearer's UL MBR. (And I did verify through a
packet sniff that the bandwidth-test packets received on the other
side of our PDN router had the expected DSCP value set in the IP
header.)
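In case anyone wants to reproduce that test, the MikroTik side of it was essentially the following -- addresses and the interface name are placeholders, and new-dscp=6 is just an example; use whatever DSCP your dedicated bearer is keyed to:

  # on the router behind the UE: blindly re-mark everything leaving the router
  /ip firewall mangle add chain=postrouting action=change-dscp new-dscp=6
  # upload (transmit) test toward a Btest server on the far side of the EPC
  /tool bandwidth-test address=203.0.113.10 direction=transmit protocol=udp duration=30s
  # on the PDN side, confirm the packets still arrive with DSCP 6 (6 << 2 = 24 in the TOS byte)
  tcpdump -ni eth0 'udp and (ip[1] & 0xfc) == 24'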
So either something isn't working correctly here when it comes to
getting the UE to use the right bearer for this traffic, or I am
missing a step somewhere. It's strange, though, because I could swear I
tested this earlier and found that it was working as expected.
Perhaps I only exhaustively tested the downlink stuff (which is
definitely working as expected: TCP sessions get capped at 256k if I
mark downstream packets with the right DSCP value).
I guess this is just yet another question to throw on my ever-growing
pile of "things I need to ask Telrad about."
-- Nathan
From: [email protected] [mailto:[email protected]] On
Behalf Of Adam Moffett
Sent: Wednesday, February 01, 2017 8:07 AM
To: [email protected]
Subject: Re: [Telrad] QCI levels and latency
Ok thanks. Great info. How large did you make your dedicated
management bearer? I did 256k, but now I'm thinking it's not big
enough.
I still can't say anything authoritative about what would happen with
QCI 6 vs QCI 7, but presumably if the system couldn't fit a packet
into the delay budget it would have to drop it. We could make some
guesses about the ramifications of that, but I think you'll have to
test to know for sure. If I'm guessing, then I'd guess the only time
you can't hit the PDB is when there's congestion, so most of the time
you wouldn't see a difference. When there is congestion, I think
you'd see less throughput on individual TCP connections... though
maybe you would see lower RTT as well, and total system throughput
might not be affected. I am literally making that up, so take it for
what it's worth LOL.
------ Original Message ------
From: "Nathan Anderson" <[email protected]>
To: "[email protected]" <[email protected]>; "Adam Moffett"
<[email protected]>
Sent: 1/31/2017 6:19:43 PM
Subject: RE: [Telrad] QCI levels and latency
Yes, that should be enough for specifically prioritizing management
traffic itself on the uplink (like accessing the UE web interface,
ping responses from the CPE, presumably also things like SNMP and
TR-069 responses, etc.).
If you need to prioritize certain customer upload traffic, then you
will also need to either:
1) check the DATA box (which is, I believe, the default) on the DSCP
page (7000) and put the dedicated bearer's DSCP value in there as
well. This will prioritize ALL of that customer's traffic, which
likely isn't what you want unless this is a special customer and the
UL MBR and UL GBR specified on the dedicated bearer are sufficient
for the use of that connection; or
2) UNcheck the DATA checkbox (because if it is checked, it will
override the DSCP in the IP header of ALL user traffic, even if the
value is set to 0), which will allow IP packets originating from the
user to proceed through the UE with their DSCP marks untouched, so
they can be marked selectively on the customer's side (rough sketch
below).
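With option 2, the selective marking then happens on a router behind the UE. A rough RouterOS sketch, with placeholder ports and a placeholder DSCP of 46 (substitute whatever value your dedicated bearer is keyed to):

  # mark only the customer's VoIP traffic with the dedicated bearer's DSCP;
  # everything else keeps its existing mark and stays on the default bearer
  /ip firewall mangle add chain=postrouting protocol=udp dst-port=5060 action=change-dscp new-dscp=46
  /ip firewall mangle add chain=postrouting protocol=udp dst-port=10000-20000 action=change-dscp new-dscp=46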
At this time, unfortunately the CPE8000 cannot have its management
uplink traffic prioritized without also unilaterally steamrolling
the DSCP mark on user-generated traffic (MGMT and DATA DSCP override
cannot be enabled and disabled independently!). I have brought this
to Telrad's attention, so hopefully that will be addressed in a
future firmware version.
-- Nathan
________________________________________
From: [email protected] <[email protected]> on behalf of Adam Moffett
<[email protected]>
Sent: Tuesday, January 31, 2017 1:13 PM
To: [email protected]
Subject: Re: [Telrad] QCI levels and latency
I don't have an answer to the question on QCI 7 vs QCI 6.
I was curious how you configure the UE to classify upload traffic. I
noticed the default config on both 7000 and 8000 is set to use DSCP 6
for management, so I made my dedicated bearer use DSCP 6. Is that
enough, or is there more to it?
I could probably read the manual and figure this out, but I was just
stabbing at it in my spare time :)
------ Original Message ------
From: "Nathan Anderson" <[email protected]>
To: "[email protected]" <[email protected]>
Sent: 1/31/2017 3:20:34 PM
Subject: [Telrad] QCI levels and latency
All,
We recently implemented iPCRF on our EPC to great effect. We added a
QCI 1 profile that we apply to our dedicated bearer, and are
prioritizing our VoIP service using that. So that we can easily see
and verify the effectiveness of this, we also started sending ICMP
over the same dedicated bearer. Average latency and jitter to CPEs
dropped like a rock right after we did that, so it is clearly working.
When our eNBs start to become moderately busy, we still notice that
RTT for traffic on the default bearer can become both exceptionally
high and jittery. This is easy to see if we run a constant ping to a
CPE and then stop prioritizing ICMP to that CPE in the middle of the
ping test. Ping jitter goes up significantly almost immediately. When
we prioritize ICMP, all we end up doing is masking that problem.
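The demonstration itself is nothing fancy. Roughly, with placeholder addresses, a placeholder DSCP of 46, and assuming a RouterOS box upstream of the EPC does the downlink marking (any router that can set DSCP before the EPC works):

  # keep a constant ping running against the CPE
  ping -i 0.2 10.20.30.40
  # on the upstream router, mark downlink ICMP toward that CPE so it rides the QCI 1 bearer
  /ip firewall mangle add chain=forward protocol=icmp dst-address=10.20.30.40 action=change-dscp new-dscp=46 comment="icmp-prio"
  # partway through, pull the prioritization and watch the jitter climb
  /ip firewall mangle disable [find comment="icmp-prio"]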
Unfortunately, release 6.6 only allows for one dedicated bearer, so we
can't classify different types of traffic across multiple QCI levels
in order to try to help deal with this better. But after looking at
the various QCI levels that are defined in the LTE spec
(https://en.wikipedia.org/wiki/QoS_Class_Identifier), I am wondering
if there isn't a short-term answer to this problem while we wait for
multiple dedicated bearer support. Specifically, I see that each level
also has a defined "packet delay budget". QCI 6, the default pick for
the default bearer, has a PDB of 300ms. What would happen if we were
to, say, switch to using QCI 7, which has a PDB of 100ms, for our
default bearer? Would we actually see an overall improvement in RTT?
And if so, would it be at the expense of anything/what would be the
downside(s)? (For example, would overall throughput end up taking a
hit because it is trying to service UEs less efficiently so that it
can make good on the latency budget?)
I'm curious to know if anyone has tried this.
Thanks,
--
Nathan Anderson
First Step Internet, LLC
[email protected]
_______________________________________________
Telrad mailing list
[email protected]
http://lists.wispa.org/mailman/listinfo/telrad