On Wed, May 02, 2018 at 04:29:33PM -0400, Patrick Hemmer wrote:
> I think you're misunderstanding my design, as scoring wouldn't work like
> this at all. If you give the gold class a score of 1000 (where higher
> number means higher priority), then the only thing that would get
> processed before gold class would be another class that has a score >
> 1000. If nothing does, and a gold request comes in, it gets processed
> first, no matter how big the queue is.
> Think of scoring as instead having multiple queues. Each queue is only
> processed if the queue before it is empty.
(...)

OK you convinced me. Not on everything, but on the fact that we're trying
to address different points. My goal is to make it possible to improve
quality of service between good requests, and your goal is to improve the
classification between good, suspicious, and bad requests. Each of us sees
how to expand his respective model a little to address part of the other's
goal (though less efficiently).

I agree that for your goal, multi-queue is better, but I still maintain
that for my goal, the time-based queue gives better control. The good
news is that the two are orthogonal and 100% compatible.

Basically the queueing system should be redesigned as a list of time-based
trees that are visited in order of priority (or traffic classes, we'll have
to find the proper wording to avoid confusion). If you think you can address
your needs with just a small set of different priorities, we can probably
implement this with a small array (2-4 queues). If you think you need
more, then we'd rather think about building a composite position value in
the tree made of the priority at the top of the word and the service time
at the bottom of the word. This way, picking the first value will always
find the lowest priority value. There's one subtlety there related to
wrapping time, however, but it can be addressed with two lookups.
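To illustrate, the composite position value could be sketched roughly as
below. The helper name, the 16/48-bit split, and the millisecond time unit
are illustrative assumptions only, not a proposed final layout:

```c
#include <stdint.h>

#define KEY_TIME_BITS 48
#define KEY_TIME_MASK ((1ULL << KEY_TIME_BITS) - 1)

/* Hypothetical helper: build a 64-bit tree position from a priority class
 * (lower value = served earlier) and a wrapping service time. The priority
 * sits at the top of the word and the time at the bottom, so picking the
 * smallest key in the tree always yields the lowest-priority-value entry,
 * and among entries of equal priority, the earliest service time.
 *
 * Note: this does not handle the time-wrapping subtlety mentioned above;
 * that would need the two-lookup trick (scan keys >= now, then keys < now).
 */
static inline uint64_t make_queue_key(uint16_t prio, uint64_t time_ms)
{
	return ((uint64_t)prio << KEY_TIME_BITS) | (time_ms & KEY_TIME_MASK);
}
```

With this encoding, a single lowest-key lookup replaces iterating over an
array of per-priority trees, at the cost of the wrap handling noted above.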

Please let me know if you'd be fine with designing and implementing
something like this.

Cheers,
Willy
