----- On Jun 20, 2017, at 11:17 PM, Matt Wagner [email protected] wrote:

> On Mon, Jun 19, 2017 at 7:31 PM, Daniel 'hackbyte' Mitzlaff
> <[email protected]> wrote:
>
>> What the?
>>
>> 2017-06-19 23:28 GMT+02:00 Dan Geist <[email protected]>:
>>>
>>> I support an ecosystem of (a really big number of) clients that are all
>>> intentionally rebooted every night at the same local time. There is
>>> indeed a huge flood of time checks immediately following them, since
>>> there is no battery-backed "CMOS-type" clock function on the device
>>> (thank you, bean-counters). This is actually kind of nice, since it's a
>>> really accurate approximation of a worst-case scenario.
>>>
>> Are you really telling me you guys are "just going the easy way", leaving
>> all these devices on their defaults instead of setting up your damn very
>> own ntpd acting at least as a stratum 3 proxy for your fscking LAN???
>>
> I don't see anywhere in that email where it's suggested that they are
> querying public servers. If anything, calling the behavior "kind of nice"
> as a load test would lead me to believe that the traffic is indeed against
> their own servers.

This is correct, Matt. We run ~15 stratum-1 clocks, geo-distributed, and ~25
stratum-2 hosts on commodity hardware/VMs (using the aforementioned stratum-1
hosts as a "pool"). Something on the order of 1.5 million clients use the S2
collection via both a group of IPv4 and IPv6 anycasts (for SNTP clients) and
a pool of real IPs (for ntpd clients). Lots of utility issues cutting power
to customer homes, various construction damage causing havoc, etc. mean that
we see a lot of variability, but having them all reboot at the exact same
time is a really nice load test for the infrastructure.

>> REALLY?
>>
>> Well, I suppose people who just put a full quote in their answer don't
>> know any better?
>>
> This bit comes across as a personal attack.

Back in the day, anything more than 80x25 was rude. Times change :)
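For anyone wondering what the stratum-2 side of a setup like that can look
like, here's a minimal ntp.conf sketch. The hostnames are hypothetical
placeholders, not our actual infrastructure, and the restrict lines are just
one sensible default for a server that should answer clients but nothing
more:

```
# /etc/ntp.conf on a stratum-2 host -- illustrative sketch only.
# s1.example.net is a placeholder DNS name resolving to the internal
# stratum-1 clocks; ntpd's "pool" directive spins up an association
# per resolved address.
pool s1.example.net iburst

# Serve time to anyone, but refuse mod/trap/query access from clients.
restrict default kod nomodify notrap nopeer noquery
restrict 127.0.0.1
restrict ::1

driftfile /var/lib/ntp/ntp.drift
```

SNTP clients hitting the anycast addresses never see any of this complexity;
they just get an answer from whichever S2 instance is topologically closest.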
Dan

--
Dan Geist dan(@)polter.net
_______________________________________________
pool mailing list
[email protected]
http://lists.ntp.org/listinfo/pool
