[https://issues.apache.org/jira/browse/HIVE-17481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16439446#comment-16439446]
Thai Bui commented on HIVE-17481:
---------------------------------
[~prasanth_j] Thanks for the suggestion; I can see the guaranteed allocations
now.
{noformat}
2018-04-16T05:13:31,510 ERROR [Workload management master]
tez.GuaranteedTasksAllocator: No cluster information available to allocate; no
guaranteed tasks will be used
2018-04-16T05:16:14,843 INFO [Workload management master]
tez.GuaranteedTasksAllocator: Updating e09a173d-bf0d-4479-9592-be1707b33663
with 6 guaranteed tasks
2018-04-16T05:23:14,291 INFO [Workload management master]
tez.GuaranteedTasksAllocator: Updating c901f589-2a88-469d-9438-115d962d4ebe
with 6 guaranteed tasks
{noformat}
My current test is exactly as you described: 2 queues with a 50/50 allocation,
one 'default' and one 'slow', with triggers that move queries from 'default' to
'slow'. I can see that it works sometimes, but not all the time. The supposedly
smaller query still takes a long time, and the bigger query still hogs most of
the resources. Is preemption not working? (I have it enabled in LLAP.)
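For context, the resource plan under test was set up with DDL roughly like the
following (a sketch from memory: the plan name 'llap_wm' is illustrative, the
trigger name matches the ones in the logs below, and the exact syntax may
differ slightly between builds):

```sql
-- Sketch of the resource plan under test; 'llap_wm' is an illustrative name.
CREATE RESOURCE PLAN llap_wm;

-- The built-in 'default' pool gets half the cluster; 'slow' gets the rest.
ALTER POOL llap_wm.default SET ALLOC_FRACTION = 0.5, QUERY_PARALLELISM = 2;
CREATE POOL llap_wm.slow WITH
  ALLOC_FRACTION = 0.5, QUERY_PARALLELISM = 2, SCHEDULING_POLICY = 'fair';

-- Move a query out of 'default' once it reads more than 100MB.
CREATE TRIGGER llap_wm.trigger_4 WHEN BYTES_READ > 104857600 DO MOVE TO slow;
ALTER POOL llap_wm.default ADD TRIGGER trigger_4;

-- A plan must be enabled before it can be activated.
ALTER RESOURCE PLAN llap_wm ENABLE;
ALTER RESOURCE PLAN llap_wm ACTIVATE;
```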
First, I executed a small query on its own to get a time baseline: about 2-5s
for the small query to finish.
{noformat}
2018-04-16T05:16:16,247 INFO [Thread-93] SessionState: Status: Running
(Executing on YARN cluster with App id application_1523646189134_0275)
2018-04-16T05:16:16,449 INFO [Thread-93]
monitoring.RenderStrategy$LogToFileFunction: Map 1: -/- Reducer 2: 0/12
Reducer 3: 0/1
2018-04-16T05:16:17,462 INFO [Thread-93] counters.Limits: Counter limits
initialized with parameters: GROUP_NAME_MAX=256, MAX_GROUPS=3000,
COUNTER_NAME_MAX=64, MAX_COUNTERS=10000
2018-04-16T05:16:17,466 INFO [Thread-93]
monitoring.RenderStrategy$LogToFileFunction: Map 1: 0(+1)/1 Reducer 2: 0/12
Reducer 3: 0/1
2018-04-16T05:16:18,897 INFO [StateChangeNotificationHandler]
impl.ZkRegistryBase$InstanceStateChangeListener: CHILD_ADDED for zknode
/user-hive/llap/workers/worker-0000000207
2018-04-16T05:16:18,898 INFO [StateChangeNotificationHandler]
tez.TezSessionPool: AM for b57ddbb0-a531-4b8a-a4a0-2014c7a000d2, v.207 has
registered; updating [sessionId=b57ddbb0-a531-4b8a-a4a0-2014c7a000d2,
queueName=llap, user=hive, doAs=false, isOpen=false, isDefault=true, WM state
poolName=null, clusterFraction=0.0, queryId=null, killReason=null] with an
endpoint at 34213
2018-04-16T05:16:20,497 INFO [Thread-93]
monitoring.RenderStrategy$LogToFileFunction: Map 1: 1/1 Reducer 2: 0/12
Reducer 3: 0/1
2018-04-16T05:16:21,004 INFO [Thread-93]
monitoring.RenderStrategy$LogToFileFunction: Map 1: 1/1 Reducer 2:
6(+6)/12 Reducer 3: 0(+1)/1
2018-04-16T05:16:21,511 INFO [Thread-93]
monitoring.RenderStrategy$LogToFileFunction: Map 1: 1/1 Reducer 2: 12/12
Reducer 3: 0(+1)/1
2018-04-16T05:16:21,807 INFO [Thread-93]
monitoring.RenderStrategy$LogToFileFunction: Map 1: 1/1 Reducer 2: 12/12
Reducer 3: 1/1
2018-04-16T05:16:21,808 INFO [Thread-93] SessionState: Status: DAG finished
successfully in 5.56 seconds
2018-04-16T05:16:21,808 INFO [Thread-93] SessionState:
2018-04-16T05:16:21,809 INFO [Thread-93] SessionState: Query Execution Summary
2018-04-16T05:16:21,809 INFO [Thread-93] SessionState:
----------------------------------------------------------------------------------------------
2018-04-16T05:16:21,809 INFO [Thread-93] SessionState: OPERATION DURATION
2018-04-16T05:16:21,809 INFO [Thread-93] SessionState:
----------------------------------------------------------------------------------------------
2018-04-16T05:16:21,809 INFO [Thread-93] SessionState: Compile Query 0.00s
2018-04-16T05:16:21,809 INFO [Thread-93] SessionState: Prepare Plan 0.00s
2018-04-16T05:16:21,809 INFO [Thread-93] SessionState: Get Query Coordinator (AM) 0.00s
2018-04-16T05:16:21,809 INFO [Thread-93] SessionState: Submit Plan 1523855775.20s
2018-04-16T05:16:21,809 INFO [Thread-93] SessionState: Start DAG 1.05s
2018-04-16T05:16:21,810 INFO [Thread-93] SessionState: Run DAG 5.56s
2018-04-16T05:16:21,810 INFO [Thread-93] SessionState:
----------------------------------------------------------------------------------------------
2018-04-16T05:16:21,810 INFO [Thread-93] SessionState:
...
2018-04-16T05:16:21,836 INFO [Thread-93] exec.Task: Workload Manager Events Summary
2018-04-16T05:16:21,836 INFO [Thread-93] exec.Task:
2018-04-16T05:16:21,836 INFO [Thread-93] exec.Task: QueryId:
hive_20180416051542_09f88404-982e-463d-9403-94f0f7da6195
2018-04-16T05:16:21,836 INFO [Thread-93] exec.Task: SessionId:
e09a173d-bf0d-4479-9592-be1707b33663
2018-04-16T05:16:21,836 INFO [Thread-93] exec.Task: Applied Triggers: [{ name:
trigger_5, expression: ALLUXIO_BYTES_READ > 104857600, action: MOVE TO slow },
{ name: trigger_4, expression: BYTES_READ > 104857600, action: MOVE TO slow },
{ name: trigger_2, expression: S3A_BYTES_READ > 104857600, action: MOVE TO slow
}]
2018-04-16T05:16:21,836 INFO [Thread-93] exec.Task:
----------------------------------------------------------------------------------------------
2018-04-16T05:16:21,836 INFO [Thread-93] exec.Task: EVENT START_TIMESTAMP END_TIMESTAMP ELAPSED_MS CLUSTER % POOL
2018-04-16T05:16:21,836 INFO [Thread-93] exec.Task:
----------------------------------------------------------------------------------------------
2018-04-16T05:16:21,838 INFO [Thread-93] exec.Task: GET 2018-04-16T05:15:44.836Z 2018-04-16T05:16:14.841Z 30005 50.00 default
2018-04-16T05:16:21,838 INFO [Thread-93] exec.Task: RETURN 2018-04-16T05:16:21.836Z 2018-04-16T05:16:21.836Z 0 0.00 null
2018-04-16T05:16:21,838 INFO [Thread-93] exec.Task:
----------------------------------------------------------------------------------------------
2018-04-16T05:16:21,838 INFO [Thread-93] exec.Task:
{noformat}
However, when there is a really big query running (it started in the 'default'
pool and was then moved to the 'slow' pool by a trigger), the same small query
just hangs for a while, waiting for its executors to be allocated.
{noformat}
2018-04-16T13:24:36,090 INFO [Thread-519]
monitoring.RenderStrategy$LogToFileFunction: Map 1: 0(+1)/1 Reducer 2: 0/12
Reducer 3: 0/1
2018-04-16T13:24:36,345 INFO [Thread-505]
monitoring.RenderStrategy$LogToFileFunction: Map 1: 418(+250)/668 Reducer 2:
0/2017 Reducer 3: 0/1
2018-04-16T13:24:36,875 INFO [Thread-505]
monitoring.RenderStrategy$LogToFileFunction: Map 1: 419(+249)/668 Reducer 2:
0/2017 Reducer 3: 0/1
2018-04-16T13:24:37,407 INFO [Thread-505]
monitoring.RenderStrategy$LogToFileFunction: Map 1: 426(+242)/668 Reducer 2:
0/2017 Reducer 3: 0/1
2018-04-16T13:24:37,938 INFO [Thread-505]
monitoring.RenderStrategy$LogToFileFunction: Map 1: 433(+235)/668 Reducer 2:
0/2017 Reducer 3: 0/1
2018-04-16T13:24:38,469 INFO [Thread-505]
monitoring.RenderStrategy$LogToFileFunction: Map 1: 436(+232)/668 Reducer 2:
0/2017 Reducer 3: 0/1
2018-04-16T13:24:39,001 INFO [Thread-505]
monitoring.RenderStrategy$LogToFileFunction: Map 1: 445(+223)/668 Reducer 2:
0/2017 Reducer 3: 0/1
2018-04-16T13:24:39,105 INFO [Thread-519]
monitoring.RenderStrategy$LogToFileFunction: Map 1: 0(+1)/1 Reducer 2: 0/12
Reducer 3: 0/1
2018-04-16T13:24:39,537 INFO [Thread-505]
monitoring.RenderStrategy$LogToFileFunction: Map 1: 447(+221)/668 Reducer 2:
0/2017 Reducer 3: 0/1
2018-04-16T13:24:40,075 INFO [Thread-505]
monitoring.RenderStrategy$LogToFileFunction: Map 1: 458(+210)/668 Reducer 2:
0/2017 Reducer 3: 0/1
2018-04-16T13:24:40,607 INFO [Thread-505]
monitoring.RenderStrategy$LogToFileFunction: Map 1: 459(+209)/668 Reducer 2:
0/2017 Reducer 3: 0/1
2018-04-16T13:24:41,131 INFO [Thread-505]
monitoring.RenderStrategy$LogToFileFunction: Map 1: 469(+199)/668 Reducer 2:
0/2017 Reducer 3: 0/1
2018-04-16T13:24:41,665 INFO [Thread-505]
monitoring.RenderStrategy$LogToFileFunction: Map 1: 470(+198)/668 Reducer 2:
0/2017 Reducer 3: 0/1
2018-04-16T13:24:42,119 INFO [Thread-519]
monitoring.RenderStrategy$LogToFileFunction: Map 1: 0(+1)/1 Reducer 2: 0/12
Reducer 3: 0/1
2018-04-16T13:24:42,198 INFO [Thread-505]
monitoring.RenderStrategy$LogToFileFunction: Map 1: 479(+189)/668 Reducer 2:
0/2017 Reducer 3: 0/1
2018-04-16T13:24:42,729 INFO [Thread-505]
monitoring.RenderStrategy$LogToFileFunction: Map 1: 480(+188)/668 Reducer 2:
0/2017 Reducer 3: 0/1
2018-04-16T13:24:43,256 INFO [Thread-505]
monitoring.RenderStrategy$LogToFileFunction: Map 1: 488(+180)/668 Reducer 2:
0/2017 Reducer 3: 0/1
2018-04-16T13:24:43,780 INFO [Thread-505]
monitoring.RenderStrategy$LogToFileFunction: Map 1: 491(+177)/668 Reducer 2:
0/2017 Reducer 3: 0/1
2018-04-16T13:24:44,308 INFO [Thread-505]
monitoring.RenderStrategy$LogToFileFunction: Map 1: 499(+169)/668 Reducer 2:
0/2017 Reducer 3: 0/1
2018-04-16T13:24:44,839 INFO [Thread-505]
monitoring.RenderStrategy$LogToFileFunction: Map 1: 500(+168)/668 Reducer 2:
0/2017 Reducer 3: 0/1
2018-04-16T13:24:45,134 INFO [Thread-519]
monitoring.RenderStrategy$LogToFileFunction: Map 1: 0(+1)/1 Reducer 2: 0/12
Reducer 3: 0/1
2018-04-16T13:24:45,370 INFO [Thread-505]
monitoring.RenderStrategy$LogToFileFunction: Map 1: 509(+159)/668 Reducer 2:
0/505 Reducer 3: 0/1
2018-04-16T13:24:45,904 INFO [Thread-505]
monitoring.RenderStrategy$LogToFileFunction: Map 1: 510(+158)/668 Reducer 2:
0/505 Reducer 3: 0/1
2018-04-16T13:24:46,435 INFO [Thread-505]
monitoring.RenderStrategy$LogToFileFunction: Map 1: 519(+149)/668 Reducer 2:
0/505 Reducer 3: 0/1
2018-04-16T13:24:46,957 INFO [Thread-505]
monitoring.RenderStrategy$LogToFileFunction: Map 1: 520(+148)/668 Reducer 2:
0/505 Reducer 3: 0/1
2018-04-16T13:24:47,493 INFO [Thread-505]
monitoring.RenderStrategy$LogToFileFunction: Map 1: 529(+139)/668 Reducer 2:
0/505 Reducer 3: 0/1
2018-04-16T13:24:48,017 INFO [Thread-505]
monitoring.RenderStrategy$LogToFileFunction: Map 1: 530(+138)/668 Reducer 2:
0/505 Reducer 3: 0/1
2018-04-16T13:24:48,148 INFO [Thread-519]
monitoring.RenderStrategy$LogToFileFunction: Map 1: 0(+1)/1 Reducer 2: 0/12
Reducer 3: 0/1
2018-04-16T13:24:48,553 INFO [Thread-505]
monitoring.RenderStrategy$LogToFileFunction: Map 1: 534(+134)/668 Reducer 2:
0/505 Reducer 3: 0/1
2018-04-16T13:24:49,088 INFO [Thread-505]
monitoring.RenderStrategy$LogToFileFunction: Map 1: 537(+131)/668 Reducer 2:
0/505 Reducer 3: 0/1
2018-04-16T13:24:49,624 INFO [Thread-505]
monitoring.RenderStrategy$LogToFileFunction: Map 1: 543(+125)/668 Reducer 2:
0/505 Reducer 3: 0/1
2018-04-16T13:24:50,160 INFO [Thread-505]
monitoring.RenderStrategy$LogToFileFunction: Map 1: 547(+121)/668 Reducer 2:
0/505 Reducer 3: 0/1
2018-04-16T13:24:50,688 INFO [Thread-505]
monitoring.RenderStrategy$LogToFileFunction: Map 1: 550(+118)/668 Reducer 2:
0/505 Reducer 3: 0/1
2018-04-16T13:24:51,162 INFO [Thread-519]
monitoring.RenderStrategy$LogToFileFunction: Map 1: 0(+1)/1 Reducer 2: 0/12
Reducer 3: 0/1
2018-04-16T13:24:51,216 INFO [Thread-505]
monitoring.RenderStrategy$LogToFileFunction: Map 1: 556(+112)/668 Reducer 2:
0/505 Reducer 3: 0/1
2018-04-16T13:24:51,748 INFO [Thread-505]
monitoring.RenderStrategy$LogToFileFunction: Map 1: 558(+110)/668 Reducer 2:
0/505 Reducer 3: 0/1
2018-04-16T13:24:52,286 INFO [Thread-505]
monitoring.RenderStrategy$LogToFileFunction: Map 1: 566(+102)/668 Reducer 2:
0/505 Reducer 3: 0/1
2018-04-16T13:24:52,835 INFO [Thread-505]
monitoring.RenderStrategy$LogToFileFunction: Map 1: 567(+101)/668 Reducer 2:
0/505 Reducer 3: 0/1
2018-04-16T13:24:53,368 INFO [Thread-505]
monitoring.RenderStrategy$LogToFileFunction: Map 1: 575(+93)/668 Reducer 2:
0/505 Reducer 3: 0/1
2018-04-16T13:24:53,900 INFO [Thread-505]
monitoring.RenderStrategy$LogToFileFunction: Map 1: 578(+90)/668 Reducer 2:
0/505 Reducer 3: 0/1
2018-04-16T13:24:54,177 INFO [Thread-519]
monitoring.RenderStrategy$LogToFileFunction: Map 1: 0(+1)/1 Reducer 2: 0/12
Reducer 3: 0/1
2018-04-16T13:24:54,439 INFO [Thread-505]
monitoring.RenderStrategy$LogToFileFunction: Map 1: 584(+83)/668 Reducer 2:
0/505 Reducer 3: 0/1
2018-04-16T13:24:54,978 INFO [Thread-505]
monitoring.RenderStrategy$LogToFileFunction: Map 1: 588(+80)/668 Reducer 2:
0/505 Reducer 3: 0/1
2018-04-16T13:24:55,516 INFO [Thread-505]
monitoring.RenderStrategy$LogToFileFunction: Map 1: 597(+71)/668 Reducer 2:
0/505 Reducer 3: 0/1
{noformat}
Eventually, the bigger query (130.81s) and the small query (81.53s) completed
at about the same time.
{noformat}
[THE BIG QUERY]
2018-04-16T13:25:55,551 INFO [Thread-505]
monitoring.RenderStrategy$LogToFileFunction: Map 1: 668/668 Reducer 2: 505/505
Reducer 3: 1/1
2018-04-16T13:25:55,551 INFO [Thread-505] SessionState: Status: DAG finished
successfully in 130.81 seconds
2018-04-16T13:25:55,551 INFO [Thread-505] SessionState:
2018-04-16T13:25:55,551 INFO [Thread-505] SessionState: Query Execution Summary
2018-04-16T13:25:55,551 INFO [Thread-505] SessionState:
----------------------------------------------------------------------------------------------
2018-04-16T13:25:55,551 INFO [Thread-505] SessionState: OPERATION DURATION
2018-04-16T13:25:55,551 INFO [Thread-505] SessionState:
----------------------------------------------------------------------------------------------
2018-04-16T13:25:55,551 INFO [Thread-505] SessionState: Compile Query 0.00s
2018-04-16T13:25:55,551 INFO [Thread-505] SessionState: Prepare Plan 0.00s
2018-04-16T13:25:55,551 INFO [Thread-505] SessionState: Get Query Coordinator (AM) 0.00s
2018-04-16T13:25:55,551 INFO [Thread-505] SessionState: Submit Plan 1523885024.15s
2018-04-16T13:25:55,551 INFO [Thread-505] SessionState: Start DAG 0.59s
2018-04-16T13:25:55,551 INFO [Thread-505] SessionState: Run DAG 130.81s
2018-04-16T13:25:55,551 INFO [Thread-505] SessionState:
----------------------------------------------------------------------------------------------
2018-04-16T13:25:55,575 INFO [Thread-505] exec.Task: Workload Manager Events Summary
2018-04-16T13:25:55,575 INFO [Thread-505] exec.Task:
2018-04-16T13:25:55,575 INFO [Thread-505] exec.Task: QueryId:
hive_20180416132343_1e1c3e71-17a4-4ac8-932f-79fe3752142f
2018-04-16T13:25:55,575 INFO [Thread-505] exec.Task: SessionId:
6a28c56d-ec15-454b-81d3-76b65fcddf39
2018-04-16T13:25:55,575 INFO [Thread-505] exec.Task: Applied Triggers: [
{ name: trigger_5, expression: ALLUXIO_BYTES_READ > 104857600, action: MOVE TO
slow }
,
{ name: trigger_4, expression: BYTES_READ > 104857600, action: MOVE TO slow }
,
{ name: trigger_2, expression: S3A_BYTES_READ > 104857600, action: MOVE TO slow
}
]
2018-04-16T13:25:55,575 INFO [Thread-505] exec.Task:
----------------------------------------------------------------------------------------------
2018-04-16T13:25:55,575 INFO [Thread-505] exec.Task: EVENT START_TIMESTAMP END_TIMESTAMP ELAPSED_MS CLUSTER % POOL
2018-04-16T13:25:55,575 INFO [Thread-505] exec.Task:
----------------------------------------------------------------------------------------------
2018-04-16T13:25:55,575 INFO [Thread-505] exec.Task: GET 2018-04-16T13:23:44.014Z 2018-04-16T13:23:44.014Z 0 50.00 default
2018-04-16T13:25:55,575 INFO [Thread-505] exec.Task: MOVE 2018-04-16T13:23:52.498Z 2018-04-16T13:23:52.498Z 0 50.00 slow
2018-04-16T13:25:55,575 INFO [Thread-505] exec.Task: RETURN 2018-04-16T13:25:55.575Z 2018-04-16T13:25:55.575Z 0 0.00 null
2018-04-16T13:25:55,575 INFO [Thread-505] exec.Task:
----------------------------------------------------------------------------------------------
[THE SMALL QUERY]
2018-04-16T13:25:57,619 INFO [Thread-519] SessionState: Status: DAG finished
successfully in 81.53 seconds
2018-04-16T13:25:57,619 INFO [Thread-519] SessionState:
2018-04-16T13:25:57,619 INFO [Thread-519] SessionState: Query Execution Summary
2018-04-16T13:25:57,619 INFO [Thread-519] SessionState:
----------------------------------------------------------------------------------------------
2018-04-16T13:25:57,619 INFO [Thread-519] SessionState: OPERATION DURATION
2018-04-16T13:25:57,619 INFO [Thread-519] SessionState:
----------------------------------------------------------------------------------------------
2018-04-16T13:25:57,619 INFO [Thread-519] SessionState: Compile Query 0.00s
2018-04-16T13:25:57,619 INFO [Thread-519] SessionState: Prepare Plan 0.00s
2018-04-16T13:25:57,619 INFO [Thread-519] SessionState: Get Query Coordinator (AM) 0.00s
2018-04-16T13:25:57,619 INFO [Thread-519] SessionState: Submit Plan 1523885075.56s
2018-04-16T13:25:57,619 INFO [Thread-519] SessionState: Start DAG 0.53s
2018-04-16T13:25:57,619 INFO [Thread-519] SessionState: Run DAG 81.53s
2018-04-16T13:25:57,619 INFO [Thread-519] SessionState:
----------------------------------------------------------------------------------------------
2018-04-16T13:25:57,631 INFO [Thread-519] exec.Task: Workload Manager Events Summary
2018-04-16T13:25:57,631 INFO [Thread-519] exec.Task:
2018-04-16T13:25:57,631 INFO [Thread-519] exec.Task: QueryId:
hive_20180416132435_e5bb8b8a-0363-4907-9128-dba90b7ddff5
2018-04-16T13:25:57,631 INFO [Thread-519] exec.Task: SessionId:
a31b58e9-83d6-430c-a5cf-f62dcea78daa
2018-04-16T13:25:57,631 INFO [Thread-519] exec.Task: Applied Triggers: [
{ name: trigger_5, expression: ALLUXIO_BYTES_READ > 104857600, action: MOVE TO
slow }
,
{ name: trigger_4, expression: BYTES_READ > 104857600, action: MOVE TO slow }
,
{ name: trigger_2, expression: S3A_BYTES_READ > 104857600, action: MOVE TO slow
}
]
2018-04-16T13:25:57,631 INFO [Thread-519] exec.Task:
----------------------------------------------------------------------------------------------
2018-04-16T13:25:57,631 INFO [Thread-519] exec.Task: EVENT START_TIMESTAMP END_TIMESTAMP ELAPSED_MS CLUSTER % POOL
2018-04-16T13:25:57,631 INFO [Thread-519] exec.Task:
----------------------------------------------------------------------------------------------
2018-04-16T13:25:57,631 INFO [Thread-519] exec.Task: GET 2018-04-16T13:24:35.490Z 2018-04-16T13:24:35.490Z 0 50.00 default
2018-04-16T13:25:57,631 INFO [Thread-519] exec.Task: RETURN 2018-04-16T13:25:57.630Z 2018-04-16T13:25:57.630Z 0 0.00 null
2018-04-16T13:25:57,631 INFO [Thread-519] exec.Task:
----------------------------------------------------------------------------------------------
{noformat}
What I find puzzling is that the bigger the "big" query is, the longer the
small query takes. This suggests that either preemption is not working, or
that guaranteed tasks are not working as expected. Any thoughts? Thanks!
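In case it helps, this is how I have been sanity-checking the workload
management state (statements as I understand the WM DDL; the plan name
'llap_wm' is illustrative):

```sql
-- List all resource plans and show which one is enabled/active.
SHOW RESOURCE PLANS;
-- Dump the pools, mappings, and triggers of one plan.
SHOW RESOURCE PLAN llap_wm;
```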
> LLAP workload management
> ------------------------
>
> Key: HIVE-17481
> URL: https://issues.apache.org/jira/browse/HIVE-17481
> Project: Hive
> Issue Type: New Feature
> Reporter: Sergey Shelukhin
> Assignee: Sergey Shelukhin
> Priority: Major
> Fix For: 3.0.0
>
> Attachments: Workload management design doc.pdf
>
>
> This effort is intended to improve various aspects of cluster sharing for
> LLAP. Some of these are applicable to non-LLAP queries and may later be
> extended to all queries. Administrators will be able to specify and apply
> policies for workload management ("resource plans") that apply to the entire
> cluster, with only one resource plan being active at a time. The policies
> will be created and modified using new Hive DDL statements.
> The policies will cover:
> * Dividing the cluster into a set of (optionally, nested) query pools that
> are each allocated a fraction of the cluster, a set query parallelism,
> resource sharing policy between queries, and potentially others like
> priority, etc.
> * Mapping the incoming queries into pools based on the query user, groups,
> explicit configuration, etc.
> * Specifying rules that perform actions on queries based on counter values
> (e.g. killing or moving queries).
> One would also be able to switch policies on a live cluster without (usually)
> affecting running queries, including e.g. to change policies for daytime and
> nighttime usage patterns, and other similar scenarios. The switches would be
> safe and atomic; versioning may eventually be supported.
> Some implementation details:
> * WM will only be supported in HS2 (for obvious reasons).
> * All LLAP query AMs will run in "interactive" YARN queue and will be
> fungible between Hive pools.
> * We will use the concept of "guaranteed tasks" (also known as ducks) to
> enforce cluster allocation without a central scheduler and without
> compromising throughput. Guaranteed tasks preempt other (speculative) tasks
> and are distributed from HS2 to AMs, and from AMs to tasks, in accordance
> with percentage allocations in the policy. Each "duck" corresponds to a CPU
> resource on the cluster. The implementation will be isolated so as to allow
> different ones later.
> * In future, we may consider improved task placement and late binding,
> similar to the ones described in Sparrow paper, to work around potential
> hotspots/etc. that are not avoided with the decentralized scheme.
> * Only one HS2 will initially be supported to avoid split-brain workload
> management. We will also implement (in a tangential set of work items)
> active-passive HS2 recovery. Eventually, we intend to switch to full
> active-active HS2 configuration with shared WM and Tez session pool (unlike
> the current case with 2 separate session pools).
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)