Hal Murray :
> If you pass in a buffer, there is no reason to allocate anything in the case
> of a server processing a request so this whole discussion is a wild goose
> chase.
It's a little more complicated than that, because I was describing the
lowest-level recvfrom() in the socket library.
>> What is the API for recvfrom()? Do you pass in a buffer, like in C, or does
>> it return a newly allocated buffer?
> You pass in a buffer. In theory we could maintain a buffer ring. I'd want
> to see actual benchmarks showing frequent GCs before I'd believe it was
> necessary, though.
If
Hal Murray :
> That sounds like the right ballpark. Again, if I were working in this area I
> would be writing hack code to generate numbers. It's got to have a buffer for
> each item waiting in the channel. Does it do an alloc/free on each item or
> does it avoid that by saving the
> How do you know something is a leaf node? Lock claims can be arbitrarily
> far down a call chain, after all.
I was thinking of looking at the code.
In the context of passing data to server threads, the code will be only a
page. There are 2 routines:
static: data
update_info (new):
Hal Murray :
>
>> That doesn't make sense. Where does your "one second apart" come from? Why
>> is "currently has 2 threads" interesting?
> When do we poll at a less than one-second interval? Most allocations would
> be associated with making a packet frame for the send, then dealing with a
> response that
Hal Murray :
> > You're thinking in C. This is a problem, because mutex-and-mailbox
> > architectures don't scale up well. Human brains get overwhelmed at about 4
> > mutexes; beyond that the proliferation of corner cases gets to be more than
> > the meat can handle.
>
> I'm missing something.
Hal Murray :
>
> > I don't know all those numbers yet. But: given that NTPsec only currently
> > has 2 threads and our allocations are typically occurring one second apart
> > or less per upstream or downstream, I can't even plausibly *imagine* a Raft
> > implementation having lower memory churn
>> You have a new toy. The only tool needed is a simple lock.
> Oh? What about the concurrent DNS thread we already have?
The only reason we have a DNS thread is because the current code only has one
thread. If we had a thread per "server" in the config file, they could do DNS
directly.
My