> On Oct 30, 2014, at 8:23 AM, Jimmy Hess <[email protected]> wrote:
>
> On Wed, Oct 29, 2014 at 7:04 PM, Ben Sjoberg <[email protected]> wrote:
>
>> That 3Mb difference is probably just packet overhead + congestion
>
> Yes... however, that's actually an industry standard of implying
> higher performance than reality. End users don't care about the
> datagram overhead their applications never see; they just want X
> megabits of real-world performance. This industry would perhaps be
> better off if we called a link that can deliver at best 17 megabits
> of goodput reliably a "15 megabit goodput + 5 service" instead of
> calling it a "20 megabit service".
>
> Or at least appended a disclaimer: *"Real-world best-case download
> performance: approximately 1.8 megabytes per second."
>
> That is, subtracting overhead and quoting that instead of raw link
> speeds. But that's not the industry standard. I believe the industry
> standard is to provide the numerically highest performance number
> possible through best-case theoretical testing; let the end user
> experience the disappointment, and explain the misunderstanding later.
>
> End users are also more concerned with their individual download rate
> on actual file transfers than with the total averaged aggregate
> throughput of a network of 10 users or 10 streams downloading data
> simultaneously, or with the characteristics transport protocols care
> about, such as fairness.
No, it’s not. All the link speeds are products of standards, be it SDH/SONET, PDH, or the various flavors of Ethernet. They are objective numbers. What you are advocating — given that much of the overhead is per-packet/frame overhead and will vary with the application and the packet-size distribution — would create more confusion than what we have today.

-dorian
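To illustrate the point both posters are circling: per-packet overhead is fixed per frame, so the goodput fraction of a link depends entirely on the packet-size distribution. A minimal sketch, assuming TCP over IPv4 over Ethernet with no options and using the standard per-frame wire costs (the 20 Mbit/s figure is just the example link rate from the thread):

```python
# Goodput fraction of raw Ethernet line rate for TCP/IPv4, as a
# function of application payload bytes per packet. Assumes no IP/TCP
# options; the 20 Mbit/s link rate is illustrative only.

ETH_OVERHEAD = 8 + 14 + 4 + 12   # preamble+SFD, MAC header, FCS, inter-frame gap
IP_TCP_HEADERS = 20 + 20         # IPv4 header + TCP header, no options

def goodput_fraction(payload_bytes: int) -> float:
    """Application bytes delivered per byte of wire time consumed."""
    wire_bytes = payload_bytes + IP_TCP_HEADERS + ETH_OVERHEAD
    return payload_bytes / wire_bytes

for payload in (64, 512, 1460):
    frac = goodput_fraction(payload)
    print(f"{payload:>5} B payload: {frac:.1%} of line rate "
          f"({20 * frac:.1f} Mbit/s goodput on a 20 Mbit/s link)")
```

Full-size 1460-byte payloads deliver roughly 95% of line rate, while small packets deliver under half — which is why a single "goodput" number can't be stamped on a link the way a standards-defined line rate can.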

