I understand the goal.

Thinking in this direction, multiple queues make sense if there is enough 
processing power (multiple cores) and memory. There is some overhead involved 
in determining the priority and routing each task to the proper queue.
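As a rough illustration of that routing cost (none of these names are actual 
Cassandra classes, just a hypothetical sketch), the per-task overhead is a 
classification step plus one enqueue:

    // Hypothetical sketch: classify a task and hand it to the matching queue.
    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;

    enum Priority { NORMAL, LOW }

    class PriorityRouter {
        private final BlockingQueue<Runnable> normalQueue = new LinkedBlockingQueue<>();
        private final BlockingQueue<Runnable> lowQueue = new LinkedBlockingQueue<>();

        void submit(Runnable task, Priority priority) {
            // The added cost per task is this branch plus one enqueue.
            if (priority == Priority.LOW) {
                lowQueue.offer(task);
            } else {
                normalQueue.offer(task);
            }
        }
    }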

I would say that having the ability to add queues may bring more potential 
throughput in addition to your priority segregation. Do you think we would 
need multiple queues for every one of the TPs?

While thinking about another problem, I thought about secondary queues to 
help with this, so that the additional computation wouldn't affect the main 
function. Any time I think about another queue, it requires more coordination 
and metadata that needs to be managed. We may as well allow a variable number 
of queues.

There's both a strength and a weakness to one queue. Adding another process 
makes things more complicated.
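For reference, a minimal sketch of the consumption side you describe (serve 
the low-priority queue only when the normal queue is empty, with some workers 
restricted to normal-priority work only, e.g. 8 of 32). These classes are 
purely illustrative assumptions, not Cassandra internals:

    // Hypothetical two-tier worker: prefer the normal queue, fall back to the
    // low-priority queue only when the normal queue is empty.
    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.TimeUnit;

    class TwoTierWorker implements Runnable {
        private final BlockingQueue<Runnable> normalQueue;
        private final BlockingQueue<Runnable> lowQueue;
        private final boolean servesLowPriority;   // false for the reserved threads

        TwoTierWorker(BlockingQueue<Runnable> normalQueue,
                      BlockingQueue<Runnable> lowQueue,
                      boolean servesLowPriority) {
            this.normalQueue = normalQueue;
            this.lowQueue = lowQueue;
            this.servesLowPriority = servesLowPriority;
        }

        public void run() {
            while (!Thread.currentThread().isInterrupted()) {
                try {
                    // Always prefer normal-priority work.
                    Runnable task = normalQueue.poll(10, TimeUnit.MILLISECONDS);
                    if (task == null && servesLowPriority) {
                        // Only touch the low-priority queue when normal is empty.
                        task = lowQueue.poll();
                    }
                    if (task != null) {
                        task.run();
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        }
    }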

Great thoughts!

Rahul Singh
Principal Architect | 1.202.390.9200 | rahul.si...@datastax.com
On Jan 16, 2019, 3:09 PM -0600, Carl Mueller 
<carl.muel...@smartthings.com.invalid>, wrote:
> additionally, a certain number of the threads in each stage could be
> restricted from serving the low-priority queues at all, say 8/32 or 16/32
> threads, to further ensure processing availability to the higher-priority
> tasks.
>
> On Wed, Jan 16, 2019 at 3:04 PM Carl Mueller <carl.muel...@smartthings.com>
> wrote:
>
> > At a theoretical level assuming it could be implemented with a magic wand,
> > would there be value to having a dual set of queues/threadpools at each of
> > the SEDA stages inside cassandra for a two-tier of priority? Such that you
> > could mark queries that return pages and pages of data as lower-priority
> > while smaller single-partition queries could be marked/defaulted as normal
> > priority, such that the lower-priority queues are only served if the normal
> > priority queues are empty?
> >
> > I suppose rough equivalency to this would be dual-datacenter with an
> > analysis cluster to serve the "slow" queries and a frontline one for the
> > higher priority stuff.
> >
> > However, it has come up several times that I'd like to run a one-off
> > maintenance job/query against production that could not be easily changed
> > (can't just throw up a DC), and while I can do app-level throttling with
> > some pain and sweat, it would seem something like this could do
> > lower-priority work in a somewhat-loaded cluster without impacting the main
> > workload.
> >
> >
> >
