I think the problem is that it’s just extremely difficult to reason about.
That’s really the root of the concern, and he’s generally working with code
that’s likely close to the hardware ideal: a thread count not much more than
the number of cores at the high end. Here you have to reason about hundreds
to thousands of threads with a very large number of sources.

And then rate limiting is tough to stack onto that reasoning. It’s not
going to be nicely tied to hardware capabilities and capacity; it’s a
static throttle on a very dynamic environment. Netflix actually has a very
interesting open source project on GitHub (concurrency-limits) that tries
to solve that problem with some clever math and dynamic limits. Even that
is a band-aid on a very hard problem, though, and it’s all targeted at
requests, while there is plenty of room for threads to have liveness,
deadlock, and starvation issues that are not so neatly tied to requests.
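To make the “clever math and dynamic limits” idea concrete, here is a toy
AIMD (additive-increase/multiplicative-decrease) limiter in the spirit of
adaptive libraries like that one. The class, method names, and constants are
my own illustration, not any real API:

```java
// Toy AIMD concurrency limit: grow the limit by one while latency is healthy
// and the limit is fully used; cut it multiplicatively on an overload signal.
// All names and constants here are illustrative, not a real library's API.
final class AimdLimit {
    private volatile int limit = 20;   // current concurrency limit
    private final int maxLimit = 1000;
    private final int minLimit = 1;

    // Call once per completed request with its observed latency.
    synchronized void onSample(long rttNanos, long timeoutNanos, int inflight) {
        if (rttNanos > timeoutNanos) {
            // Overload signal: back off multiplicatively.
            limit = Math.max(minLimit, (int) (limit * 0.9));
        } else if (inflight >= limit) {
            // Healthy latency and the whole limit is in use: probe upward.
            limit = Math.min(maxLimit, limit + 1);
        }
    }

    int limit() { return limit; }
}
```

The point is that the throttle tracks observed behavior instead of being a
static number picked by a human operator.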

And really, IMO, if you solve good throttling at the point of incoming
requests (dynamic limits that reflect current resource usage, capacity, and
traffic; minimal client retry kickbacks, occurring only when truly
overloaded rather than near optimal max throughput; wait queues that don’t
sit on threads or more than nominal resources), then prioritizing or
holding back threads that are handling in-progress requests becomes not
very appealing and much more likely to hamper peak throughput, due to the
lock-heavy nature of Java and the way Solr requests can depend on each
other. If you are in that situation, there’s such a risk of making things
worse that it’s best to let every in-flight request race its way through as
fast as it can while you just efficiently gate new requests from starting.
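A minimal sketch of what “efficiently gate new requests from starting”
could look like, assuming a simple semaphore-based gate (the class and
method names are hypothetical): in-flight work is untouched, and rejected
callers get an immediate answer instead of parking a thread in a queue.

```java
import java.util.concurrent.Semaphore;

// Sketch of gating at admission only: in-flight requests run freely, and a
// zero-wait tryAcquire means a shed request never blocks a thread. A real
// server would translate a rejection into a 503 / retry-after response.
final class AdmissionGate {
    private final Semaphore permits;

    AdmissionGate(int maxInFlight) {
        this.permits = new Semaphore(maxInFlight);
    }

    // Returns true if the request may start; false means "shed it now".
    boolean tryAdmit() {
        return permits.tryAcquire();
    }

    // Must be called when an admitted request finishes.
    void release() {
        permits.release();
    }
}
```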

The whole situation is really one of the largest downsides to Java. Java
puts you on a downhill road to a system that abhors approaching, to say
nothing of maintaining, optimal max throughput. And so you generally end up
with death-spiral limits, or artificially low static limits with poor
overload behavior and human-operator reaction time. Even if there is a win
around thread priorities in that world, the risk/reward calculation just
isn’t that good.

The only really good thing I’ve ever seen, other than the hand-wavy
mythical promise of the always-coming green threads, is a fully async
system with effectively built-in back pressure.
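For illustration, one hypothetical shape of “built-in back pressure”: a
bounded stage where a full queue is surfaced to the caller immediately as a
failed future rather than by blocking a thread. Purely a sketch with
invented names, not any real framework:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CompletableFuture;

// Toy async stage: submit() never blocks, and a full queue is itself the
// back-pressure signal, propagated to the caller as a failed future.
final class BackPressuredStage {
    private final BlockingQueue<Runnable> queue;

    BackPressuredStage(int capacity) {
        this.queue = new ArrayBlockingQueue<>(capacity);
    }

    // Non-blocking submit: rejection propagates upstream instead of queuing
    // on a thread.
    CompletableFuture<Void> submit(Runnable task) {
        if (!queue.offer(task)) {
            CompletableFuture<Void> f = new CompletableFuture<>();
            f.completeExceptionally(
                new IllegalStateException("back pressure: stage full"));
            return f;
        }
        return CompletableFuture.completedFuture(null);
    }

    // A worker loop would drain tasks from here.
    Runnable poll() { return queue.poll(); }
}
```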

I’m just tossing out prattle, though, not arguing against thread
priorities. Some projects have tried them out here or there. I think they
commonly end up removed and commonly don’t get credited with a noticeable
impact, but implementation, testing, and results always beat advice, best
practices, and warnings.

On Sat, Jul 22, 2023 at 5:39 PM David Smiley <david.w.smi...@gmail.com>
wrote:

Thanks.  I could see thread priority customization being used well in
combination with rate limiting so as to mitigate a starvation risk.


Yeah, I met Brian Goetz and have his excellent book.


~ David


On Sat, Jul 22, 2023 at 3:20 AM Mark Miller <markrmil...@gmail.com> wrote:


> It’s a hint for the OS, so results can vary by platform. Not the end of
> the world, but not ideal.
>
> A scarier fact is that Brian Goetz, a pretty big name in Java concurrency,
> recommends against it in general, noting that it can lead to liveness /
> starvation issues.
