[ https://issues.apache.org/jira/browse/HIVE-9428?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14294055#comment-14294055 ]

Chao commented on HIVE-9428:
----------------------------

This works for the local Spark client, but for the remote Spark client the issue 
still doesn't seem to be fixed.
Here's what I get when running {{auto_join25.q}}:

{code}
...
2015-01-27 11:22:31,582 DEBUG [RPC-Handler-3]: rpc.RpcDispatcher 
(RpcDispatcher.java:channelRead0(74)) - [ClientProtocol] Received RPC message: 
type=REPLY id=224 payload=org.apache.spark.SparkStageInfoImpl
2015-01-27 11:22:31,582 INFO  [main]: status.SparkJobMonitor 
(SessionState.java:printInfo(852)) - 2015-01-27 11:22:31,582 Stage-53_0: 
0(+0,-4)/1
2015-01-27 11:22:31,583 INFO  [main]: status.SparkJobMonitor 
(SessionState.java:printInfo(852)) - Status: Finished successfully in 1.01 
seconds
2015-01-27 11:22:31,583 INFO  [main]: exec.Task (SparkTask.java:execute(111)) - 
=====Spark Job[79f7a3e0-b4d8-4358-bb1a-7afed77a3a0d] statistics=====
2015-01-27 11:22:31,583 INFO  [main]: exec.Task 
(SparkTask.java:logSparkStatistic(144)) - Spark 
Job[79f7a3e0-b4d8-4358-bb1a-7afed77a3a0d] Metrics
2015-01-27 11:22:31,583 INFO  [main]: exec.Task 
(SparkTask.java:logSparkStatistic(148)) -   EexcutorDeserializeTime: 0
2015-01-27 11:22:31,583 INFO  [main]: exec.Task 
(SparkTask.java:logSparkStatistic(148)) -   ExecutorRunTime: 0
2015-01-27 11:22:31,583 INFO  [main]: exec.Task 
(SparkTask.java:logSparkStatistic(148)) -   ResultSize: 0
2015-01-27 11:22:31,583 INFO  [main]: exec.Task 
(SparkTask.java:logSparkStatistic(148)) -   JvmGCTime: 0
2015-01-27 11:22:31,583 INFO  [main]: exec.Task 
(SparkTask.java:logSparkStatistic(148)) -   ResultSerializationTime: 0
2015-01-27 11:22:31,583 INFO  [main]: exec.Task 
(SparkTask.java:logSparkStatistic(148)) -   MemoryBytesSpilled: 0
2015-01-27 11:22:31,583 INFO  [main]: exec.Task 
(SparkTask.java:logSparkStatistic(148)) -   DiskBytesSpilled: 0
2015-01-27 11:22:31,583 INFO  [main]: exec.Task 
(SparkTask.java:logSparkStatistic(144)) - HIVE
2015-01-27 11:22:31,584 INFO  [main]: exec.Task 
(SparkTask.java:logSparkStatistic(148)) -   CREATED_FILES: 0
2015-01-27 11:22:31,584 INFO  [main]: exec.Task 
(SparkTask.java:logSparkStatistic(148)) -   RECORDS_IN: 0
2015-01-27 11:22:31,584 INFO  [main]: exec.Task 
(SparkTask.java:logSparkStatistic(148)) -   DESERIALIZE_ERRORS: 0
2015-01-27 11:22:31,584 INFO  [main]: ql.Driver 
(SessionState.java:printInfo(852)) - Launching Job 2 out of 2
2015-01-27 11:22:31,584 INFO  [main]: ql.Driver (Driver.java:launchTask(1636)) 
- Starting task [Stage-1:MAPRED] in serial mode
2015-01-27 11:22:31,584 INFO  [main]: exec.Task 
(SessionState.java:printInfo(852)) - In order to change the average load for a 
reducer (in bytes):
2015-01-27 11:22:31,584 INFO  [main]: exec.Task 
(SessionState.java:printInfo(852)) -   set 
hive.exec.reducers.bytes.per.reducer=<number>
2015-01-27 11:22:31,584 INFO  [main]: exec.Task 
(SessionState.java:printInfo(852)) - In order to limit the maximum number of 
reducers:
...
{code}
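
For reference, the stage line {{0(+0,-4)/1}} above means 0 completed tasks, 0 active 
tasks and 4 failed task attempts out of 1 task, yet the monitor still prints 
"Status: Finished successfully". As a rough sketch of the kind of check I'd expect 
before reporting success (written against Spark's public status API only; the class 
and method names here are illustrative, this is not the actual patch):

{code}
// Sketch only -- not the Hive patch. Uses Spark's public status API
// (SparkJobInfo / SparkStageInfo) to decide whether a "done" job really succeeded.
import java.util.List;

import org.apache.spark.JobExecutionStatus;
import org.apache.spark.SparkJobInfo;
import org.apache.spark.SparkStageInfo;

public class JobSuccessCheck {

  /**
   * Only report success if Spark says SUCCEEDED and every stage finished all
   * of its tasks. A stage printed as "0(+0,-4)/1" has numCompletedTasks() == 0,
   * numFailedTasks() == 4 and numTasks() == 1, so it fails this check.
   */
  public static boolean jobSucceeded(SparkJobInfo jobInfo, List<SparkStageInfo> stageInfos) {
    if (jobInfo == null || jobInfo.status() != JobExecutionStatus.SUCCEEDED) {
      return false;
    }
    for (SparkStageInfo stage : stageInfos) {
      if (stage.numCompletedTasks() < stage.numTasks() || stage.numFailedTasks() > 0) {
        return false;
      }
    }
    return true;
  }
}
{code}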


> LocalSparkJobStatus may return failed job as successful [Spark Branch]
> ----------------------------------------------------------------------
>
>                 Key: HIVE-9428
>                 URL: https://issues.apache.org/jira/browse/HIVE-9428
>             Project: Hive
>          Issue Type: Sub-task
>          Components: Spark
>            Reporter: Rui Li
>            Assignee: Rui Li
>            Priority: Minor
>             Fix For: spark-branch
>
>         Attachments: HIVE-9428.1-spark.patch, HIVE-9428.2-spark.patch, 
> HIVE-9428.3-spark.patch
>
>
> The Future being done doesn't necessarily mean the job is successful. We should 
> rely on SparkJobInfo to get the job status whenever it's available.
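
A minimal sketch of that idea for the local client, assuming a {{JavaSparkContext}} 
and the submitted action's job id are at hand (names are illustrative, this is not 
the actual LocalSparkJobStatus code):

{code}
// Illustration only: prefer the status tracker over Future completion.
import org.apache.spark.JobExecutionStatus;
import org.apache.spark.SparkJobInfo;
import org.apache.spark.api.java.JavaFutureAction;
import org.apache.spark.api.java.JavaSparkContext;

public class LocalStateSketch {

  /** Ask the status tracker first; only fall back to the future when no job info exists. */
  public static JobExecutionStatus getState(JavaSparkContext sc,
                                            JavaFutureAction<?> future,
                                            int jobId) {
    SparkJobInfo info = sc.statusTracker().getJobInfo(jobId);
    if (info != null) {
      return info.status();  // SUCCEEDED, FAILED, RUNNING or UNKNOWN, as Spark sees it
    }
    // A done future does not by itself prove the job succeeded.
    return future.isDone() ? JobExecutionStatus.UNKNOWN : JobExecutionStatus.RUNNING;
  }
}
{code}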



