[ 
https://issues.apache.org/jira/browse/HDFS-17341?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei Yang updated HDFS-17341:
----------------------------
    Description: 
Some service users in the namenode today, such as ETL pipelines, metrics 
collection, and ad-hoc users running business-critical jobs, account for a 
large share of namenode traffic and shouldn't be throttled the same way as 
other individual users in FairCallQueue (FCQ).

The namenode already has a feature to always prioritize certain service users 
so they are not subject to FCQ scheduling (those users are always p0), but it 
is not perfect: it doesn't account for traffic surges from those users.

The idea is to allocate a dedicated RPC queue with bounded capacity for each 
of those service users, and to assign a processing weight to each dedicated 
queue. If a queue is full, its users are expected to back off and retry.

 

New configs:
{code:java}
"faircallqueue.reserved.users";               // list of service users assigned to dedicated queues
"faircallqueue.reserved.users.max";           // max number of reserved service users allowed
"faircallqueue.reserved.users.capacities";    // custom queue capacity for each service user
"faircallqueue.multiplexer.reserved.weights"; // processing weight for each dedicated queue{code}
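For illustration only: assuming the new keys follow the existing FairCallQueue convention of being scoped by the RPC port (ipc.<port>.*), a configuration reserving queues for two users might look like the following (the user names, port, capacities, and weights are all made-up values):

{code:xml}
<!-- Hypothetical example; keys scoped to a namenode RPC port of 8020 -->
<property>
  <name>ipc.8020.faircallqueue.reserved.users</name>
  <value>etl_user,metrics_user</value>
</property>
<property>
  <name>ipc.8020.faircallqueue.reserved.users.max</name>
  <value>4</value>
</property>
<property>
  <name>ipc.8020.faircallqueue.reserved.users.capacities</name>
  <value>1000,500</value>
</property>
<property>
  <name>ipc.8020.faircallqueue.multiplexer.reserved.weights</name>
  <value>3,2</value>
</property>{code}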
 

  was:
Some service users today in namenode like ETL, metrics collection, ad-hoc users 
that are critical to run business critical job accounts for many traffic in 
namenode and shouldn't be throttled the same way as other individual users in 
FCQ.

There is feature in namenode to whitelist some service users to not subject to 
FCQ scheduling. (Those users are always p0) but it is not perfect and it 
doesn't account for traffic surge from those users.

The idea is to allocate dedicated rpc queues for those service users with 
bounded queue capacity and allocate processing weight for those users. If queue 
is full, those users are expected to backoff and retry.

 

New configs:
{code:java}
"faircallqueue.reserved.users"; // list of service users that are assigned to 
dedicated queue
"faircallqueue.reserved.users.max"; // max number of service users allowed
"faircallqueue.reserved.users.capacities"; // custom queue capacities for each 
service user
"faircallqueue.multiplexer.reserved.weights"; // processing weights for each 
dedicated queue{code}
 


> Support dedicated user queues in Namenode FairCallQueue
> -------------------------------------------------------
>
>                 Key: HDFS-17341
>                 URL: https://issues.apache.org/jira/browse/HDFS-17341
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: namenode
>    Affects Versions: 2.10.0, 3.4.0
>            Reporter: Lei Yang
>            Priority: Major
>
> Some service users in the namenode today, such as ETL pipelines, metrics 
> collection, and ad-hoc users running business-critical jobs, account for a 
> large share of namenode traffic and shouldn't be throttled the same way as 
> other individual users in FairCallQueue (FCQ).
> The namenode already has a feature to always prioritize certain service 
> users so they are not subject to FCQ scheduling (those users are always 
> p0), but it is not perfect: it doesn't account for traffic surges from 
> those users.
> The idea is to allocate a dedicated RPC queue with bounded capacity for 
> each of those service users, and to assign a processing weight to each 
> dedicated queue. If a queue is full, its users are expected to back off 
> and retry.
>  
> New configs:
> {code:java}
> "faircallqueue.reserved.users";               // list of service users assigned to dedicated queues
> "faircallqueue.reserved.users.max";           // max number of reserved service users allowed
> "faircallqueue.reserved.users.capacities";    // custom queue capacity for each service user
> "faircallqueue.multiplexer.reserved.weights"; // processing weight for each dedicated queue{code}
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
