-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/58865/#review173556
-----------------------------------------------------------




ql/src/java/org/apache/hadoop/hive/ql/exec/spark/SparkTask.java
Lines 135 (patched)
<https://reviews.apache.org/r/58865/#comment246543>

    The log message is incorrect because cancelling the job doesn't kill 
    the application.
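
    Something along these lines is what I have in mind. This is only a 
    minimal sketch: the JobRef interface and the LOG field are 
    illustrative stand-ins, not the actual SparkTask/SparkJobRef API.

        import org.slf4j.Logger;
        import org.slf4j.LoggerFactory;

        class CancelLogSketch {
          private static final Logger LOG =
              LoggerFactory.getLogger(CancelLogSketch.class);

          /** Hypothetical stand-in for the remote Spark job handle. */
          interface JobRef {
            String getJobId();
            boolean cancelJob();
          }

          void cancel(JobRef jobRef) {
            // Only this query's Spark job is cancelled; the Spark
            // application and its executors stay alive for reuse, so the
            // message should not claim the application is being killed.
            LOG.info("Cancelling Spark job {}", jobRef.getJobId());
            jobRef.cancelJob();
          }
        }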



ql/src/java/org/apache/hadoop/hive/ql/exec/spark/status/RemoteSparkJobMonitor.java
Lines 106 (patched)
<https://reviews.apache.org/r/58865/#comment246544>

    I think the total task count only needs to be computed once. It 
    shouldn't change during the execution of the job, assuming we don't 
    count failed/retried tasks.
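
    For example, a minimal sketch of caching the count. The field and 
    method names here are illustrative assumptions, not the monitor's 
    actual API:

        import java.util.Map;

        class TaskCountSketch {
          // Computed lazily on first use; -1 means "not yet computed".
          private int totalTaskCount = -1;

          int getTotalTaskCount(Map<String, Integer> totalTasksByStage) {
            // A job's stage set is fixed once it is submitted, so
            // (ignoring failed/retried task attempts) the sum only
            // needs to be taken once, not on every monitor tick.
            if (totalTaskCount < 0) {
              int sum = 0;
              for (int stageTotal : totalTasksByStage.values()) {
                sum += stageTotal;
              }
              totalTaskCount = sum;
            }
            return totalTaskCount;
          }
        }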


- Rui Li


On May 1, 2017, 5:13 p.m., Xuefu Zhang wrote:
> 
> -----------------------------------------------------------
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/58865/
> -----------------------------------------------------------
> 
> (Updated May 1, 2017, 5:13 p.m.)
> 
> 
> Review request for hive.
> 
> 
> Bugs: HIVE-16552
>     https://issues.apache.org/jira/browse/HIVE-16552
> 
> 
> Repository: hive-git
> 
> 
> Description
> -------
> 
> See JIRA description
> 
> 
> Diffs
> -----
> 
>   common/src/java/org/apache/hadoop/hive/conf/HiveConf.java d3ea824 
>   ql/src/java/org/apache/hadoop/hive/ql/exec/spark/SparkTask.java 32a7730 
>   ql/src/java/org/apache/hadoop/hive/ql/exec/spark/status/RemoteSparkJobMonitor.java dd73f3e 
>   ql/src/java/org/apache/hadoop/hive/ql/exec/spark/status/SparkJobMonitor.java 0b224f2 
> 
> 
> Diff: https://reviews.apache.org/r/58865/diff/2/
> 
> 
> Testing
> -------
> 
> Tested locally
> 
> 
> Thanks,
> 
> Xuefu Zhang
> 
>
