Hi Dick -
glad you joined in the discussion, given your experience with spatial
multiplexing of RF.
 
On Wednesday, August 31, 2022 3:04pm, [email protected] said:

> From: "Dick Roy" <[email protected]>
> Subject: Re: [Starlink] Starlink "beam spread"
> On this particular one, the gateway beams are extremely narrow, around 1.5°
> to 2.5°. SpaceX is working on “mega-gateways” where 32 antennas will
> co-exist. They are also deploying a new gateway design with a larger
> antenna, and thus narrower beamwidth and more gain, allowing for a
> considerable reduction in TX power.
> 
> [RR] There is a much better way to do this! I sure hope Starlink is
> considering it. Large antennas with narrow beam widths are a sledgehammer to
> kill a fly.
> :-)
I'm pretty sure, from my examination of the dishy array (and the satellite
arrays are likely the same), that at the front-end electronics level their use
of 64 QAM (six bits of signal quantization in amplitude and phase) isn't going
to allow much signalling in multiple directions at once. That's a sampling
theory issue, really.
 
There's a lot of handwaving in the discussion here, mostly focused on "beam 
width" rather than combined channel capacity. To me that's the "sledgehammer" 
you are referring to.
 
So that's one aspect. But the other aspect I'm focused on is not in the
arraycom space; it's the question of how the highly variable (bursty) demand
in both directions works out. Low-delay (small) packets are, at the signalling
rate, actually smaller than the number of bits in transit over the 2 msec
path. So the packet-level multiplexing here (with more dishy terminals than
the four 240 Mb/sec channels afford) is also an issue. We aren't talking about
one CBR uplink per dishy. Its bitrate is highly variable, given Internet load.
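
To put rough numbers on that (using the 240 Mb/sec channel figure and the
2 msec path from above):

    # Back-of-envelope: bits in flight on one channel vs. one small packet.
    # Uses the 240 Mb/sec channel rate and ~2 msec path cited above.
    channel_bps = 240e6
    path_s = 2e-3
    bits_in_flight = channel_bps * path_s   # 480,000 bits
    print(bits_in_flight / 8)               # 60,000 bytes in transit
    # A bare TCP ACK is ~64 bytes: roughly 0.1% of what the pipe holds.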
 
Now, RRUL from *one dishy* doesn't address this problem properly. You need to
look at what happens when all the dishys (uplink and downlink) are trying to
saturate all the up and down links. Then you add a new packet that needs low
latency (whether it is a TCP ACK, a click from an FPS game, or an audio frame
in a teleconference; latency matters for all of these, and they aren't
predictable in advance). How long will it be before an uplink slot opens? Even
with no load, the packet still requires a clear channel, so it must wait for
its turn. Assume each dishy using the uplink gets one uplink slot in a slot
time T. Then the delay until a slot is available can be short: if the slots
are short, all dishys get a turn within 2 msec or less. But if there is load
in both the up and down directions (and one can easily have such a load from
file uploads or downloads on each dishy, or a fairly big load from UHD video
streams being "buffered" into a playback buffer on a Smart TV at each dishy),
then latency due to queueing delay can soar, as the toy model below shows.
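
Here's that toy model (my own simplification with made-up numbers, not
Starlink's actual scheduler):

    # Toy slotted-uplink model (my simplification; not Starlink's scheduler).
    # Each of N active dishys gets one slot per round; a new low-latency
    # packet waits on average half a round for its dishy's slot, plus one
    # full round for every packet already queued ahead of it on that dishy.
    def uplink_wait_ms(active_dishys, slot_ms, pkts_ahead):
        round_ms = active_dishys * slot_ms
        return round_ms / 2 + pkts_ahead * round_ms

    print(uplink_wait_ms(32, 0.0625, 0))    # idle:   ~1 ms
    print(uplink_wait_ms(32, 0.0625, 50))   # loaded: ~101 ms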
 
If you don't allow all the dishys to use all the capacity of a satellite, the
latency problem gets smaller. But then, of course, you are wasting a bit of
capacity to preserve latency for low-bitrate traffic, as the toy calculation
below illustrates.
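
    # Toy capacity-reservation trade-off (invented fraction; not Starlink's
    # policy): hold back some slots for low-latency traffic, and bulk flows
    # lose that fraction of the channel's peak rate.
    reserved_fraction = 0.10              # 10% of slots held back
    channel_bps = 240e6
    bulk_bps = channel_bps * (1 - reserved_fraction)
    print(bulk_bps / 1e6)                 # 216.0 Mb/sec left for bulk traffic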
 
However, high bitrate traffic also has latency requirements. It's not all 
"background".
 
I'm sure this is something that can be worked out by some algorithm that 
reaches toward the optimal state by dropping packets to signal TCP (or QUIC) 
that the transmitter needs to slow down to avoid building up queueing delay. 
The problem here, though, is that all the focus in this discussion is on
"throughput" of a single link, ignoring the burstiness that is driving the
multiplexing.
 
My point is that the discussion here has focused entirely on maximum
achievable bitrate, with no load and no burstiness.
 
This isn't fixed by having more spatial control, to whatever extent that is
possible. It *is* fixable by suitable Active Queue Management in the satellite
path (which will be the "bottleneck link" between the home and the public
Internet), plus suitable end-to-end protocol congestion management that
responds to drops or ECN. (I don't know whether the satellite examines packets
closely enough, or keeps enough state, to set the ECN bit, or whether it has
enough state to do bloom-filter-based fair queue dropping.)
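
Something like this RED-style marker (invented thresholds; not anything
Starlink actually runs):

    # Generic RED-style AQM sketch (invented thresholds; not Starlink code).
    # Mark (ECN) or drop with a probability that rises with standing queue
    # delay, so TCP/QUIC senders back off before the queue grows unbounded.
    import random

    def aqm_verdict(queue_delay_ms, target_ms=5.0, ramp_ms=100.0):
        if queue_delay_ms <= target_ms:
            return "forward"
        p = min(1.0, (queue_delay_ms - target_ms) / ramp_ms)
        return "mark_or_drop" if random.random() < p else "forward"

    print(aqm_verdict(2.0))    # under target: always forwarded
    print(aqm_verdict(55.0))   # ~50% chance of mark/drop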
 
Dave Taht indicates he has some mysterious statistics from (single) dishy 
experiments. It's a problem that we cannot see into the Starlink black box to 
understand it fully.
 
But I'm CERTAIN that simplistic analysis about achievable channel bitrates in 
the abstract will not capture the issues, and that there will be LOTS of 
systemic problems that are not easily solved by just thinking about rates and 
beam width.