[ 
https://issues.apache.org/jira/browse/HIVE-7768?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chengxiang Li resolved HIVE-7768.
---------------------------------
    Resolution: Done

With Spark's auto-scaling (dynamic executor allocation) support on the Spark 
side, no code changes are needed on the Hive side. I will add the TODOC-SPARK 
label and close this JIRA.
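
For reference, Spark's dynamic allocation is enabled purely through 
configuration, which is why no Hive-side code change is required. A minimal 
sketch of the relevant spark-defaults.conf settings (the values shown are 
illustrative, not recommendations):

```properties
# Enable dynamic executor allocation; requires the external shuffle service
# so executors can be removed without losing shuffle data.
spark.dynamicAllocation.enabled              true
spark.shuffle.service.enabled                true

# Illustrative bounds on how far the executor count may scale.
spark.dynamicAllocation.minExecutors         1
spark.dynamicAllocation.maxExecutors         50

# Release executors that have been idle this long (addresses the
# "idle SC for long periods" concern below).
spark.dynamicAllocation.executorIdleTimeout  60s
```

With these settings, a long-lived SparkContext grows its executor set when a 
large query queues many tasks and shrinks it again when executors sit idle, 
which covers the re-sizing scenario described in the issue.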

> Integrate with Spark executor scaling [Spark Branch]
> ----------------------------------------------------
>
>                 Key: HIVE-7768
>                 URL: https://issues.apache.org/jira/browse/HIVE-7768
>             Project: Hive
>          Issue Type: Sub-task
>          Components: Spark
>            Reporter: Brock Noland
>            Assignee: Chengxiang Li
>            Priority: Critical
>              Labels: TODOC-SPARK
>
> Scenario:
> A user connects to Hive and runs a query on a small table. Our SC is sized 
> for that small table. They then run a query on a much larger table. We'll 
> need to "re-size" the SC, which I don't think Spark supports today, so we 
> need to research what is available today in Spark and how Tez works.
> More details:
> Similar to Tez, it's likely our "SparkContext" is going to be long lived and 
> process many queries. Some queries will be large and some small. Additionally 
> the SC might be idle for long periods of time.
> In this JIRA we will research the following:
> * How Spark decides the number of slaves for a given RDD today
> * Given a SC when you create a new RDD based on a much larger input dataset, 
> does the SC adjust?
> * How Tez increases/decreases the size of the running YARN application (set 
> of slaves)
> * How Tez handles scenarios when it has a running set of slaves in YARN and 
> requests more resources for a query and fails to get additional resources
> * How Tez decides to timeout idle slaves
> This will guide requirements we'll need from Spark.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
