On Mon, 17 Apr 2023, Ulrich Speidel wrote:

> On 17/04/2023 1:04 pm, David Lang wrote:
>> I think it is going to be fairly common, but the beauty of the idea is
>> that you don't have to risk much to try it. Long-lived DNS answers (and
>> especially the root and TLD zones) can trivially be mirrored to the
>> satellites, and you can experiment with caching to see what sort of hit
>> rate you get. Even if you don't cache a lot of the CDN-type traffic, it
>> should still be a win to have the longer-term stuff there.
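
As a rough illustration of how little data is involved: the root zone is
published for transfer from several of the root server instances, so a
satellite-side resolver could simply keep a local copy refreshed. A minimal
sketch using Python's dnspython (the choice of b.root-servers.net, and the
lack of any error handling or refresh logic, are just for illustration):

    # Sketch: pull a copy of the root zone and see how big it actually is.
    # Assumes the dnspython package; b.root-servers.net is one of the root
    # server instances that has historically allowed zone transfer (AXFR).
    import io
    import socket
    import dns.query
    import dns.zone

    def fetch_root_zone(server="b.root-servers.net"):
        addr = socket.gethostbyname(server)  # dns.query.xfr wants an address
        return dns.zone.from_xfr(dns.query.xfr(addr, "."))

    root = fetch_root_zone()
    buf = io.StringIO()
    root.to_file(buf)
    print(f"{len(root.nodes)} names, roughly {len(buf.getvalue()) / 1e6:.1f} MB")
    # The whole root zone is on the order of a couple of MB -- trivial to
    # store and refresh on a satellite, and the same transfer works for the
    # TLD zones that publish themselves this way.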

> Yes - but root server and TLD server answers also get cached for a long
> time at the clients. If each of your clients needs a root server lookup
> and a few TLD lookups a day, it's not a huge gain in terms of performance.
>
> It is however a significant step up in terms of complexity. E.g., your
> satellite-based DNS would have to point you at the TLD server that is
> topologically closest to your Starlink gateway, or risk a potentially much
> longer RTT for the lookup. So you'd need to maintain a list of TLD
> instances on your satellite-based DNS and return the one that corresponds
> to what your gateway-based DNS would get. Sure, possible, but more complex
> than a bog-standard DNS server in a fixed network.

really? I don't think the root and TLD servers do any geolocation; I think
they have fixed answers and rely on anycast addresses to find the closest
instance. Adding those anycast addresses to the satellites would be
transparent to all users (assuming the satellites are operating at the IP
layer; the old bent-pipe approach did not, but once you have routing in
space via the laser links...)
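
That the answers are fixed is easy to check: ask two different root server
letters for the same TLD delegation and compare what comes back. A small
dnspython sketch (the letters and the example TLD are arbitrary choices):

    # Sketch: the root servers return the same delegation data regardless
    # of which letter (or which anycast instance) you ask, so a copy on a
    # satellite needs no per-user geolocation logic.
    import socket
    import dns.message
    import dns.query
    import dns.rdatatype

    def tld_delegation(tld, root_server):
        addr = socket.gethostbyname(root_server)
        query = dns.message.make_query(tld, dns.rdatatype.NS)
        reply = dns.query.tcp(query, addr, timeout=5)
        # A root server is not authoritative for the TLD, so the referral
        # (the NS records for the TLD) comes back in the authority section.
        return sorted(str(rr) for rrset in reply.authority
                      if rrset.rdtype == dns.rdatatype.NS
                      for rr in rrset)

    answers = {letter: tld_delegation("com.", f"{letter}.root-servers.net")
               for letter in ("a", "k")}
    print("identical answers:", answers["a"] == answers["k"])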

>>> Practically speaking, we know from various sources that each Starlink
>>> satellite provides - ballpark - a couple of dozen Gb/s in capacity, and
>>> that active users on a "busy" satellite see a couple of dozen Mb/s of
>>> that. "Busy" here means mostly active users, and so we can conclude that
>>> the number of users per satellite who use any site is at most around
>>> 1000. The subset of users navigating to new sites among them is probably
>>> in the low hundreds at best. If we exclude new sites that are dynamic,
>>> we're probably down to a couple of dozen new static sites being queried
>>> per satellite pass. How many of these queries will be duplicates? Not a
>>> lot. If we include sites that are dynamic, we're still not getting a
>>> huge probability of cache entry re-use.
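
To put very rough numbers on that estimate (the inputs are the same ballpark
figures as above, nothing measured, and the fraction of users visiting new
sites is a pure assumption):

    # Back-of-envelope version of the estimate above.
    satellite_capacity_gbps = 24   # "a couple of dozen Gb/s" per satellite
    active_user_rate_mbps = 24     # "a couple of dozen Mb/s" per active user

    max_active_users = satellite_capacity_gbps * 1000 / active_user_rate_mbps
    print(f"simultaneously active users per satellite: ~{max_active_users:.0f}")

    # Assume only a modest fraction of those are navigating to sites they
    # haven't visited recently -- the 15% here is illustrative, not measured.
    new_site_fraction = 0.15
    new_site_users = max_active_users * new_site_fraction
    print(f"users fetching new sites during a pass: ~{new_site_users:.0f}")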

>> I think that each user is typically going to use a lot less than the
>> 'couple of dozen Mb/s' limit that we see, so the number of users in a
>> cell would be much higher.

> Yes, but these users aren't "active" in the sense that they will be firing
> off DNS queries during the pass...

even users who are using the connection aren't going to be using dozens of
Mb/s of bandwidth.

>>>> DNS data is not that large; getting enough storage into the satellites
>>>> to serve 90% of the non-dynamic data should not be a big deal. The
>>>> dynamic data expires fast enough (and can be detected as being dynamic
>>>> and expired faster in the satellite) that I'm not worried about serving
>>>> data from one side of the world to the other.
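
A minimal sketch of the kind of TTL-driven behaviour that implies: long-lived
records stay for their full TTL, while anything with a CDN-style short TTL
ages out quickly or can be expired even sooner. The thresholds here are
illustrative assumptions, not operational figures:

    import time

    class DnsCache:
        """TTL-respecting answer cache; short-TTL ("dynamic") records are
        kept only briefly so that long-lived data dominates the cache."""

        def __init__(self, dynamic_ttl_threshold=60, dynamic_ttl_cap=15):
            self.dynamic_ttl_threshold = dynamic_ttl_threshold  # assumed
            self.dynamic_ttl_cap = dynamic_ttl_cap              # assumed
            self._store = {}  # (name, rtype) -> (expiry, rdata)

        def put(self, name, rtype, rdata, ttl):
            if ttl <= self.dynamic_ttl_threshold:
                # Looks dynamic (short TTL): keep it for even less time.
                ttl = min(ttl, self.dynamic_ttl_cap)
            self._store[(name, rtype)] = (time.monotonic() + ttl, rdata)

        def get(self, name, rtype):
            entry = self._store.get((name, rtype))
            if not entry:
                return None
            expiry, rdata = entry
            if time.monotonic() >= expiry:
                del self._store[(name, rtype)]  # expired; refetch upstream
                return None
            return rdata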

>>> Yes, but the only advantage we'd get here is faster resolution for a
>>> very small subset of DNS queries.

>> and while that's not as big a win as faster resolution of a larger set,
>> it's still a win.

> Yes, but are we chasing diminishing returns here when there are much
> bigger fish to fry?

possibly. it's worth discussing, and DNS is an easy thing to start with
compared to most others.

David Lang