> The valid combinations of $GOOS and $GOARCH are:
> freebsd 386
> freebsd amd64
> freebsd arm
> netbsd 386
> netbsd amd64
> netbsd arm
Interesting. No arm64.
Here is the Rust page:
https://doc.rust-lang.org/nightly/rustc/platform-support.html
--
Hal Murray :
> More topics for this discussion:
>
> What platforms is the new environment supported on? See my reply to Sanjeev
> Gupta's message.
>
> As far as I can tell, we don't have a list of supported/tested platforms. Is
> there an official web page that describes what we support? (I'm
More topics for this discussion:
What platforms is the new environment supported on? See my reply to Sanjeev
Gupta's message.
As far as I can tell, we don't have a list of supported/tested platforms. Is
there an official web page that describes what we support? (I'm expecting
something like
> The issue I have is that each time you add code, or James refactors to fix
> bugs, there might be an implicit, untracked, bump in the rustc version
> required. Although Fedora may be able to keep up, debian-backports, or
> anything but the latest Ubuntu, and certainly Solaris, would remain
Hal, on the Rust v Go issue. I am speaking as a consumer of your code, who
builds from source.
There is talk on the LKML, about adding Rust as an allowed language. There
seems to be no objection in principle, work is proceeding. Much of my
knowledge of Rust is from those threads,
The issue I
Hal Murray :
> Thanks for taking the time to explain things to me.
Why would I not? It's *good* to have someone on the project who's sufficiently
smart and stubborn to question my premises - please never stop doing that! I
know
more about the problems around getting to a memory-safe language
> No, I'm pushing Rust away - and determined to exit from C - because of
> reasons in the larger context. We need to get to a memory-safe language, we
> need decadal stability, and we need one with a reasonably low barrier to
> entry for new devs.
> Rust fails two of those tests. Go passes all
Hal Murray :
> That sounds like the right ballpark. Again, if I were working in this area I
> would be writing hack code to generate numbers. It's got to have a buffer
> for
> each item waiting in the channel. Does it do an alloc/free on each item or
> does it avoid that by saving the
> How do you know something is a leaf node? Lock claims can fall arbitrarily
> far down a call chain, after all.
I was thinking of looking at the code.
In the context of passing data to server threads, the code will be only a
page. There are 2 routines:
static: data
update_info (new):
Hal Murray :
> > You're thinking in C. This is a problem, because mutex-and-mailbox
> > architectures don't scale up well. Human brains get overwhelmed at about 4
> > mutexes; beyond that the proliferation of corner cases get to be more than
> > the meat can handle.
>
> I'm missing something.
>> You have a new toy. The only tool needed is a simple lock.
> Oh? What about the concurrent DNS thread we already have?
The only reason we have a DNS thread is because the current code only has one
thread. If we had a thread per "server" in the config file, they could do DNS
directly.
My
Richard Laager via devel :
> On 7/5/21 8:38 AM, Eric S. Raymond via devel wrote:
> > > There is a close-to-RFC to handle this area. "Interleave" is the
> > > buzzword. I
> > > haven't studied it. The idea is to grab a transmit time stamp, then
> > > tweak the
> > > protocol a bit so you can
Hal Murray :
>
> [timing of GC]
>
> > which shows their measured STW pauses are bounded to about 95% by 600us and
> > typically less than 400us. This is consistent with other reports I've seen,
> > and that's why I took 600us as a worst case STW we're likely to see.
>
> I didn't see any
On Tue, Jul 6, 2021 at 1:40 PM Richard Laager via devel
wrote:
>
> On 7/5/21 8:38 AM, Eric S. Raymond via devel wrote:
> >> There is a close-to-RFC to handle this area. "Interleave" is the
> >> buzzword. I
> >> haven't studied it. The idea is to grab a transmit time stamp, then tweak
> >>
On Tue, Jul 06, 2021 at 12:18:26PM -0500, Richard Laager via devel wrote:
> On 7/5/21 8:38 AM, Eric S. Raymond via devel wrote:
> > > There is a close-to-RFC to handle this area. "Interleave" is the
> > > buzzword. I
> > > haven't studied it. The idea is to grab a transmit time stamp, then
>
On 7/5/21 8:38 AM, Eric S. Raymond via devel wrote:
There is a close-to-RFC to handle this area. "Interleave" is the buzzword. I
haven't studied it. The idea is to grab a transmit time stamp, then tweak the
protocol a bit so you can send that on the next packet.
Daniel discovered it was
Hal Murray :
> You have a new toy. The only tool needed is a simple lock.
Oh? What about the concurrent DNS thread we already have?
At this point I have two years of heavy experience in Go, so the toy
is no longer new. If a better upgrade from C existed, I would know
about it - Rust comes
[timing of GC]
> which shows their measured STW pauses are bounded to about 95% by 600us and
> typically less than 400us. This is consistent with other reports I've seen,
> and that's why I took 600us as a worst case STW we're likely to see.
I didn't see any description about what was
> Right. But the way to get there is certainly *not* to try to design ahead
> while you're still thinking in a language like C where concurrent programming
> is difficult and error-prone.
This is not a hard problem. There is nothing tricky.
> Once you get used to being able to program in
Hal Murray :
>
> > We don't have a multithreaded server yet. Worst case we have two threads,
> > and only one can ever reach the critical region in question. Don't borrow
> > trouble! :-)
>
> I'm interested in building a server that will keep a gigabit link running at
> full speed. We can do
> We don't have a multithreaded server yet. Worst case we have two threads,
> and only one can ever reach the critical region in question. Don't borrow
> trouble! :-)
I'm interested in building a server that will keep a gigabit link running at
full speed. We can do that with multiple
Hal Murray :
> >> 1. packet tx happening right after tx timestamp for server response
>
> > A) Mitigate window 1 by turning off GC before it and back on after.
>
> Things get complicated. Consider a multi-threaded server. If you have
> several busy server threads, can they keep the GC off
>> 1. packet tx happening right after tx timestamp for server response
> A) Mitigate window 1 by turning off GC before it and back on after.
Things get complicated. Consider a multi-threaded server. If you have
several busy server threads, can they keep the GC off 100% of the time?
>> 1. packet tx happening right after tx timestamp for server response
> Yes, and that really should be handled in the kernel, maybe implemented via
> BPF.
Interesting idea. It might work for simple packets but is unlikely to be
practical for authenticated packets. If nothing else, you have
Dan Drown via devel :
> Let's talk a bit about what time critical sections are currently in the
> code. I think that will help drive the decision about the impact of garbage
> collection.
>
> I haven't looked at ntpsec's codebase lately, so some of this might be out
> of date. Please feel free
Hal Murray :
>
> Eric said:
> > Talk to me about what you think the effect of very occasional stop-the-world
> > pauses of 600 microseconds or less would be on sync accuracy. By "very
> > occasionally" let's say once every ten minutes or so, that being what I
> > think
> > is a *very*
Dan Drown via devel writes:
> Time critical code:
>
> 1. packet tx happening right after tx timestamp for server response
Yes, and that really should be handled in the kernel, maybe implemented
via BPF.
> 2. serial NMEA data timestamps
Arguably that should also (ideally) be the responsibility
Dan said:
> The time critical code can tolerate some level of delay (~hundreds of
> microseconds), as things like packet tx can be delayed for a multitude of
> kernel and hardware reasons. The good news is both of the time critical
> code paths are somewhat predictable and if we can
Quoting Achim Gratz via devel :
I tend to organize timer triggered code in a way that any computations
that take a long or indeterminate time happen after the time-critical
section that uses either very short computations or precomputed values
from the last period. That assumes that the
Eric said:
> Talk to me about what you think the effect of very occasional stop-the-world
> pauses of 600 microseconds or less would be on sync accuracy. By "very
> occasionally" let's say once every ten minutes or so, that being what I think
> is a *very* pessimistic estimate of GC frequency
Achim Gratz via devel :
> Eric S. Raymond via devel writes:
> > Talk to me about what you think the effect of very occasional
> > stop-the-world pauses of 600 microseconds or less would be on sync
> > accuracy. By "very occasionally" let's say once every ten minutes or
> > so, that being what I
Eric S. Raymond via devel writes:
> Talk to me about what you think the effect of very occasional
> stop-the-world pauses of 600 microseconds or less would be on sync
> accuracy. By "very occasionally" let's say once every ten minutes or
> so, that being what I think is a *very* pessimistic
Achim Gratz via devel :
> Matthew Selsky via devel writes:
> > On Tue, Jun 29, 2021 at 04:41:30PM -0400, Eric S. Raymond via devel wrote:
> >
> >> Well, first, the historical target for accuracy of WAN time service is
> >> more than an order of magnitude higher than 1ms. The worst-case jitter
>
Richard Laager via devel writes:
> On 6/29/21 3:41 PM, Eric S. Raymond via devel wrote:
>> Well, first, the historical target for accuracy of WAN time service is
>> more than an order of magnitude higher than 1ms.
>
> My two NTP servers are +- 0.1 ms and +- 0.2 ms as measured by the NTP
> pool
Matthew Selsky via devel writes:
> On Tue, Jun 29, 2021 at 04:41:30PM -0400, Eric S. Raymond via devel wrote:
>
>> Well, first, the historical target for accuracy of WAN time service is
>> more than an order of magnitude higher than 1ms. The worst-case jitter
>> that could add would be barely
Richard Laager via devel :
> Not particularly. Presumably it's just because of GPS PPS + good network?
Having a good local clock can explain it, yes.
--
Eric S. Raymond <http://www.catb.org/~esr/>
On 6/29/21 8:52 PM, Eric S. Raymond wrote:
Richard Laager via devel :
On 6/29/21 3:41 PM, Eric S. Raymond via devel wrote:
Well, first, the historical target for accuracy of WAN time service is
more than an order of magnitude higher than 1ms.
My two NTP servers are +- 0.1 ms and +- 0.2 ms as
Eric said:
> That's quite exceptionally good. It's normally hard to get to within an
> order of magnitude of that on a LAN, let alone a WAN.
Many years ago, we squeezed room for another digit in ntpq's printout. That
was the 4th fractional digit of delay, offset, and jitter where the units
This is output from my 17 year old server, i386, 32-bit, no TCXO, cheap
hardware, sitting in an air-conditioned office where staff keep fiddling
with the thermostat (the large unit is 20 years old, so you are either cold
or very cold).
root@ntpmon:~# uptime
10:20:02 up 14 days, 9:33, 1 user,
Richard Laager via devel :
> On 6/29/21 3:41 PM, Eric S. Raymond via devel wrote:
> > Well, first, the historical target for accuracy of WAN time service is
> > more than an order of magnitude higher than 1ms.
>
> My two NTP servers are +- 0.1 ms and +- 0.2 ms as measured by the NTP pool
>
Matthew Selsky :
> Our target is < 1 us, even for WAN time service. We would want to
> keep/improve this accuracy target.
One *microsecond*? Has any version of NTP achieved that kind of accuracy?
I don't think we're there.
--
Eric S. Raymond <http://www.catb.org/~esr/>
On 6/29/21 3:41 PM, Eric S. Raymond via devel wrote:
Well, first, the historical target for accuracy of WAN time service is
more than an order of magnitude higher than 1ms.
My two NTP servers are +- 0.1 ms and +- 0.2 ms as measured by the NTP
pool monitoring system across the Internet.
--
On Tue, Jun 29, 2021 at 04:41:30PM -0400, Eric S. Raymond via devel wrote:
> Well, first, the historical target for accuracy of WAN time service is
> more than an order of magnitude higher than 1ms. The worst-case jitter
> that could add would be barely above the measurement-noise floor at
Sanjeev Gupta via devel :
> This is a follow on to Eric's email a few hours ago, I am keeping that
> thread clean.
>
> (The last 3GL I programmed in was Fortran, and not the 77 version. I can
> read bash scripts and C pseudo-code)
>
> The literature I can find speaks of Go GC being improved in
This is a follow on to Eric's email a few hours ago, I am keeping that
thread clean.
(The last 3GL I programmed in was Fortran, and not the 77 version. I can
read bash scripts and C pseudo-code)
The literature I can find speaks of Go GC being improved in 1.5, such that
the STW phase (the