> Those of us here, like me and Dave Taht, who have measured the big elephants 
> in the room (esp. for Starlink) like "lag under load" and "fairness with 
> respect to competing traffic on the same <link>" probably were not consulted, 
> if the goal is "little burden on your available bandwidth".

I don’t have specifics for their test config, but most of the platforms
determine ‘little burden’ by looking for cross traffic (i.e., user demand on
the connection); if it is non-existent or low, they run tests that can highly
utilize the link capacity – whether for a working-latency test or anything
else.
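
As a rough illustration of that gating logic, here is a minimal Python
sketch. To be clear, the interface name, the 10% "idle" threshold, and the
networkQuality invocation are my assumptions for illustration, not any
platform's actual implementation:

    import subprocess
    import time

    IDLE_FRACTION = 0.10  # assumption: "low" cross traffic = <10% of capacity

    def interface_bytes(iface="eth0"):
        # Total rx+tx bytes from the Linux sysfs counters.
        base = f"/sys/class/net/{iface}/statistics"
        with open(f"{base}/rx_bytes") as rx, open(f"{base}/tx_bytes") as tx:
            return int(rx.read()) + int(tx.read())

    def cross_traffic_bps(iface="eth0", window=5.0):
        # Estimate current user demand by sampling the counters over `window` s.
        start = interface_bytes(iface)
        time.sleep(window)
        return 8 * (interface_bytes(iface) - start) / window

    def maybe_run_test(capacity_bps, iface="eth0"):
        # Only saturate the link when the user isn't already using it.
        if cross_traffic_bps(iface) < IDLE_FRACTION * capacity_bps:
            # Placeholder for the real capacity / working-latency test,
            # e.g. macOS's networkQuality client.
            subprocess.run(["networkQuality", "-v"])
        else:
            print("cross traffic detected; deferring the test")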

> Frankly, I expect the results will be treated like other "quality metrics" - 
> J.D. Power comes to mind from consulting experience in the automotive 
> industry - and be cherry-picked to distort the results.

I dunno – I think the research & measurement community is coalescing around
certain types of working-latency / responsiveness measures as pretty good
predictors of real end-user application QoE.
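
For context, the measure much of that work has converged on (Apple's
networkQuality tool, the IETF IPPM responsiveness draft) is reported in
round-trips per minute: RPM = 60 / working latency in seconds, where working
latency is the RTT while the link is loaded. A minimal sketch – aggregating
the loaded-RTT samples with a median is my simplification; the draft
specifies trimmed means:

    import statistics

    def rpm(loaded_rtt_ms):
        # Working latency = RTT measured while the link is saturated.
        working_latency_s = statistics.median(loaded_rtt_ms) / 1000.0
        return 60.0 / working_latency_s

    # A link with ~300 ms median RTT under load scores about 200 RPM.
    print(round(rpm([280, 300, 320])))  # -> 200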

> By all means participate if you want, but I suspect that the "raw data" will 
> not be made available, and looking at the existing reports, it will be hard 
> to extract meaningful comparisons relevant to real user experience at the 
> test sites.

I'm not sure whether the raw data will be made available. Even if not, they
may publish the parameters of the tests themselves.

JL