On Sun, 14 May 2023, Ulrich Speidel wrote:
I just discovered that someone is manufacturing an adapter so you no longer
have to cut the cable:
https://www.amazon.com/YAOSHENG-Rectangular-Adapter-Connect-Injector/dp/B0BYJTHX4P
I'll see whether I can get hold of one of these. Cutting a cable on a
university IT asset as an academic is not allowed here, except if it doesn't
meet electrical safety standards.
Alternatively, has anyone tried the standard Starlink Ethernet adapter with a
PoE injector instead of the WiFi box? The adapter above seems to be like the
Starlink one (which also inserts into the cable between Dishy and router).
That connects you to a 2nd Ethernet port on the router, not on the dishy.
I just ordered one of those adapters; it will take a few weeks to arrive.
> Put another way: If a protocol (TCP) that is designed to reasonably expect
> that its current cwnd is OK to use for now is put into a situation where
> there are relatively frequent, huge, and lasting step changes in available
> BDP within subsecond periods, are your underlying assumptions still valid?
I think that with interference from other APs, WiFi suffers at least as much
from unpredictable changes to the available bandwidth.
Really? I'm thinking of things like the sudden addition of packets from
potentially dozens of TCP flows with large cwnds.
vs. losing 90% of your available bandwidth to interference? I think it's
going to be a similar problem.
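To put rough numbers on the cwnd question above (the figures here are made up
for illustration, not measured Starlink values): a flow whose cwnd was sized
for the old capacity dumps the excess into whatever buffer sits in front of
the new bottleneck. A quick Python sketch:

  # Back-of-the-envelope: how much of an established cwnd turns into queue
  # when the available bandwidth steps down. All numbers are hypothetical.

  def queue_after_step(cwnd_bytes, new_bw_bps, rtt_s):
      """Bytes left queued, and the delay they add, after a bandwidth drop."""
      new_bdp = new_bw_bps / 8 * rtt_s       # bytes the new path can carry
      excess = max(0, cwnd_bytes - new_bdp)  # bytes that must sit in a buffer
      extra_delay = excess * 8 / new_bw_bps  # seconds of added queueing delay
      return excess, extra_delay

  # cwnd sized for 100 Mbit/s at 40 ms RTT (~500 kB); link drops to 20 Mbit/s
  excess, delay = queue_after_step(cwnd_bytes=500_000,
                                   new_bw_bps=20_000_000, rtt_s=0.040)
  print(f"{excess / 1000:.0f} kB queued, adding {delay * 1000:.0f} ms")

That works out to ~400 kB and ~160 ms of added delay for that one flow until
its sender backs off.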
> I suspect they're handing over whole cells, not individual users, at a time.
I would guess the same (remember, in spite of them having launched >4000
satellites, these are still the early days, with the network changing as more
satellites launch).
We've seen that it seems there is only one satellite serving any given cell
at a time.
But the reverse is almost certainly not true: Each satellite must serve
multiple cells.
True, but while the satellite over a given area will change, the usage in
that area isn't changing that much.
But remember that the system does know how much usage there is in the cell
before it does the handoff. It's unknown whether they do anything with that,
or if they are just relying on geography. We also don't know what the
bandwidth to the ground stations is compared to the dishy.
Well, we do know for NZ, sort of, based on the licences Starlink has here.
What is the ground station bandwidth?
And remember that for every cell that a satellite takes over, it's also
giving away one cell at the same time.
Yes, except that some cells may have no users in them and some of them have a
lot (think of a satellite flying into range of California from the Pacific,
dropping over-the-water cells and acquiring land-based ones).
I'm not saying that the problem is trivial, just that it's not unique.
What makes me suspicious here that it's not the usual bufferbloat problem is
this: with conventional bufferbloat and FIFOs, you'd expect standing queues,
right? With Starlink, we see the queues empty out now and then, with RTTs in
the low 20 ms, and in some cases even under 20 ms, with large (1500-byte)
ping packets.
It's not directly a bufferbloat problem; bufferbloat is a side effect (at
most). We know that the available Starlink bandwidth is chopped into
timeslots (sorry, I don't remember how many), and I could see the possibility
that there are the same number of timeslots down to the ground station as up
from the dishies. If the bottleneck is the uplink from the ground station,
then things would queue there.
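As a toy illustration of that possibility (the slot counts and per-cell loads
below are invented; the real numbers aren't public): if several busy cells
funnel through a ground-station uplink with a fixed number of slots per tick,
the backlog accumulates there and nowhere else.

  # Toy model: three cells' traffic funnels through one ground-station uplink.
  # Per-cell arrival rates and the uplink slot count are made-up numbers.

  from collections import deque

  cells = [10, 2, 7]        # frames arriving per tick from each cell (assumed)
  gs_slots_per_tick = 12    # ground-station uplink slots per tick (assumed)

  queue = deque()
  for tick in range(10):
      for arriving in cells:
          queue.extend(["frame"] * arriving)           # all cells land here
      for _ in range(min(gs_slots_per_tick, len(queue))):
          queue.popleft()                              # uplink drains its slots
      print(f"tick {tick:2d}: {len(queue)} frames queued at the ground station")

  # 19 frames/tick in, 12 out: the queue grows by 7 per tick, so any
  # bufferbloat lives at the ground-station uplink, not at the dishy.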
As latency changes, it's hard to figure out whether it's extra distance that
must be traveled, or buffering. Does the latency stay roughly the same until
the next satellite change, or does it taper off?
If it stays the same, I would suspect that you are actually hitting a different
ground station and there is a VPN backhaul to your egress point to the regular
Internet (which doesn't support mobile IP addresses) for that cycle. If it
tapers off, then I could buy bufferbloat that gets resolved as TCP backs off.
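If anyone has a per-second RTT trace, one rough way to answer the
flat-vs-taper question is to cut the trace at the 15-second reallocation
boundaries that have been reported for Starlink and compare the halves of
each window: a sustained offset looks like a path change, a downward slope
looks like a queue draining. A sketch (classify_windows, the window length,
and the 5 ms threshold are all assumptions of mine):

  def classify_windows(rtt_samples, window=15):
      """rtt_samples: one RTT in ms per second; window: seconds per cycle."""
      for start in range(0, len(rtt_samples) - window + 1, window):
          w = rtt_samples[start:start + window]
          first, second = w[:window // 2], w[window // 2:]
          drop = sum(first) / len(first) - sum(second) / len(second)
          verdict = ("tapering (queue draining?)" if drop > 5
                     else "flat (path change?)")
          print(f"t={start:4d}s  mean {sum(w) / len(w):5.1f} ms  {verdict}")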
My main point in replying several messages ago was to point out other
scenarios where the load and/or the available bandwidth changes rapidly. And
you are correct that this is generally not handled well by common equipment.
I think that active queue management on the sending side of the bottleneck will
handle it fairly well. It doesn't have to do calculations based on what the
bandwidth is, it just needs to know what it has pending to go out.
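A much-simplified sketch of the CoDel idea makes the point (the 5 ms target
and 100 ms interval are CoDel's actual defaults; the rest is illustrative,
and the real algorithm ramps its drop rate rather than dropping everything):
the queue manager only ever looks at how long a packet has been sitting,
never at the link rate.

  import time
  from collections import deque

  TARGET_S = 0.005     # 5 ms acceptable standing delay (CoDel's default target)
  INTERVAL_S = 0.100   # 100 ms above target before dropping (CoDel's default)

  class SojournAQM:
      def __init__(self):
          self.q = deque()          # entries are (enqueue_time, packet)
          self.above_since = None   # when sojourn time first exceeded target

      def enqueue(self, pkt):
          self.q.append((time.monotonic(), pkt))

      def dequeue(self):
          while self.q:
              ts, pkt = self.q.popleft()
              sojourn = time.monotonic() - ts      # how long pkt sat queued
              if sojourn <= TARGET_S:
                  self.above_since = None          # queue drained below target
                  return pkt
              if self.above_since is None:
                  self.above_since = time.monotonic()
              if time.monotonic() - self.above_since < INTERVAL_S:
                  return pkt                       # over target, but not long yet
              # Standing queue: drop pkt (keep looping) so senders back off.
          return None

When the satellite hands the cell over and the drain rate steps down, sojourn
times rise immediately, and the AQM reacts without ever knowing what the new
bandwidth is.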
David Lang
_______________________________________________
Starlink mailing list
[email protected]
https://lists.bufferbloat.net/listinfo/starlink