On 2 December 2013 03:45, Sepherosa Ziehau <sepher...@gmail.com> wrote:
> On Mon, Dec 2, 2013 at 1:02 PM, Adrian Chadd <adr...@freebsd.org> wrote:
>> Ok, so given this, how do you guarantee the UTHREAD stays on the given
>> CPU? You assume it stays on the CPU that the initial listen socket was
>> created on, right? If it's migrated to another CPU core, does the
>> listen queue still stay in its original hash group, serviced by a
>> netisr on a different CPU?
> As I wrote in the above brief introduction, Dfly currently relies on the
> scheduler doing the proper thing (the scheduler does do a very good job
> during my tests).  I need to export a certain kind of socket option to make
> that information available to user space programs.  Forcing UTHREAD binding
> in the kernel is not helpful, since things are different in a reverse proxy
> application.  And even if that kind of binding information were exported to
> user space, the user space program would still have to poll it periodically
> (in Dfly at least), since other programs binding to the same addr/port could
> come and go, which causes the inp localgroup to be reorganized in the
> current Dfly implementation.

Right. I kinda gathered that. It's fine, I was conceptually thinking
of adding some thread pinning to this anyway.

How do you see this scaling on massively multi-core machines? Like 32,
48, 64, 128 cores? I had some vague hand-wavy notion of maybe limiting
the concept of pcbgroup hash / netisr threads to a subset of CPUs, or
having them be able to float between CPU sockets but only having 1 (or
n, maybe) per socket. Or just have a fixed, smaller pool. The idea then
is the scheduler would need to be told that a given userland
thread/process belongs to a given netisr thread, and to schedule them
on the same CPU when possible.

Anyway, thanks for doing this work. I only wish that you'd do it for
FreeBSD. :-)

freebsd-current@freebsd.org mailing list