[jira] [Commented] (HIVE-19878) Hive On Spark support AM shut down when there is no job submit

2018-08-05 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-19878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16569405#comment-16569405
 ] 

Hive QA commented on HIVE-19878:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12927758/HIVE-19878.1.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 59 failed/errored test(s), 14050 tests 
executed
*Failed tests:*
{noformat}
TestMiniSparkOnYarnCliDriver - did not produce a TEST-*.xml file (likely timed 
out) (batchId=186)

[infer_bucket_sort_reducers_power_two.q,list_bucket_dml_10.q,orc_merge9.q,leftsemijoin_mr.q,bucket6.q,bucketmapjoin7.q,uber_reduce.q,empty_dir_in_table.q,vector_outer_join2.q,spark_explain_groupbyshuffle.q,spark_dynamic_partition_pruning.q,spark_combine_equivalent_work.q,orc_merge1.q,spark_use_op_stats.q,orc_merge_diff_fs.q,quotedid_smb.q,truncate_column_buckets.q,spark_vectorized_dynamic_partition_pruning.q,spark_in_process_launcher.q,orc_merge3.q]
TestMiniSparkOnYarnCliDriver - did not produce a TEST-*.xml file (likely timed 
out) (batchId=187)

[infer_bucket_sort_num_buckets.q,gen_udf_example_add10.q,spark_explainuser_1.q,spark_use_ts_stats_for_mapjoin.q,orc_merge6.q,orc_merge5.q,bucketmapjoin6.q,spark_opt_shuffle_serde.q,temp_table_external.q,spark_dynamic_partition_pruning_6.q,dynamic_rdd_cache.q,auto_sortmerge_join_16.q,vector_outer_join3.q,spark_dynamic_partition_pruning_7.q,schemeAuthority.q,parallel_orderby.q,vector_outer_join1.q,load_hdfs_file_with_space_in_the_name.q,spark_dynamic_partition_pruning_recursive_mapjoin.q,spark_dynamic_partition_pruning_mapjoin_only.q]
TestMiniSparkOnYarnCliDriver - did not produce a TEST-*.xml file (likely timed 
out) (batchId=188)

[insert_overwrite_directory2.q,spark_dynamic_partition_pruning_4.q,import_exported_table.q,vector_outer_join0.q,bucket4.q,orc_merge4.q,infer_bucket_sort_merge.q,orc_merge_incompat1.q,root_dir_external_table.q,constprog_partitioner.q,constprog_semijoin.q,external_table_with_space_in_location_path.q,spark_constprog_dpp.q,spark_dynamic_partition_pruning_3.q,load_fs2.q,infer_bucket_sort_map_operators.q,spark_dynamic_partition_pruning_2.q,vector_inner_join.q,spark_multi_insert_parallel_orderby.q,remote_script.q]
TestMiniSparkOnYarnCliDriver - did not produce a TEST-*.xml file (likely timed 
out) (batchId=189)

[scriptfile1.q,vector_outer_join5.q,file_with_header_footer.q,input16_cc.q,bucket5.q,orc_merge2.q,reduce_deduplicate.q,schemeAuthority2.q,spark_dynamic_partition_pruning_5.q,orc_merge8.q,orc_merge_incompat2.q,infer_bucket_sort_bucketed_table.q,vector_outer_join4.q,disable_merge_for_bucketing.q,orc_merge7.q]
TestSparkCliDriver - did not produce a TEST-*.xml file (likely timed out) 
(batchId=108)

[bucketmapjoin4.q,bucket_map_join_spark4.q,union21.q,groupby2_noskew.q,timestamp_2.q,date_join1.q,mergejoins.q,smb_mapjoin_11.q,auto_sortmerge_join_3.q,mapjoin_test_outer.q,vectorization_9.q,merge2.q,groupby6_noskew.q,auto_join_without_localtask.q,multi_join_union.q]
TestSparkCliDriver - did not produce a TEST-*.xml file (likely timed out) 
(batchId=109)

[join_cond_pushdown_unqual4.q,union_remove_7.q,join13.q,join_vc.q,groupby_cube1.q,parquet_vectorization_2.q,bucket_map_join_spark2.q,sample3.q,smb_mapjoin_19.q,union23.q,union.q,union31.q,cbo_udf_udaf.q,ptf_decimal.q,bucketmapjoin2.q]
TestSparkCliDriver - did not produce a TEST-*.xml file (likely timed out) 
(batchId=110)

[parallel_join1.q,union27.q,union12.q,groupby7_map_multi_single_reducer.q,varchar_join1.q,join7.q,join_reorder4.q,skewjoinopt2.q,bucketsortoptimize_insert_2.q,smb_mapjoin_17.q,script_env_var1.q,groupby7_map.q,bucketsortoptimize_insert_8.q,stats16.q,union20.q]
TestSparkCliDriver - did not produce a TEST-*.xml file (likely timed out) 
(batchId=111)

[ptf_general_queries.q,auto_join_reordering_values.q,sample2.q,join1.q,decimal_join.q,mapjoin_subquery2.q,join32_lessSize.q,mapjoin1.q,skewjoinopt18.q,union_remove_18.q,join25.q,groupby3.q,groupby9.q,bucketsortoptimize_insert_6.q,ctas.q]
TestSparkCliDriver - did not produce a TEST-*.xml file (likely timed out) 
(batchId=112)

[groupby_map_ppr.q,nullgroup4_multi_distinct.q,join_rc.q,union14.q,order2.q,smb_mapjoin_12.q,vector_cast_constant.q,union_remove_4.q,parquet_vectorization_1.q,auto_join11.q,udaf_collect_set.q,vectorization_12.q,groupby_sort_skew_1_23.q,smb_mapjoin_25.q,skewjoinopt12.q]
TestSparkCliDriver - did not produce a TEST-*.xml file (likely timed out) 
(batchId=113)

[skewjoinopt15.q,auto_join18.q,list_bucket_dml_2.q,input1_limit.q,load_dyn_part3.q,union_remove_14.q,auto_sortmerge_join_14.q,auto_sortmerge_join_15.q,union10.q,bucket_map_join_tez2.q,groupby5_map_skew.q,load_dyn_part7.q,join_reorder.q,bucketmapjoin8.q,union34.q]
TestSparkCliDriver - did not produce a TEST-*.xml file (likely timed out)
{noformat}

[jira] [Commented] (HIVE-19878) Hive On Spark support AM shut down when there is no job submit

2018-08-04 Thread Hive QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-19878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16569393#comment-16569393
 ] 

Hive QA commented on HIVE-19878:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
41s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
27s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
33s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
30s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
38s{color} | {color:blue} common in master has 64 extant Findbugs warnings. 
{color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 
27s{color} | {color:blue} spark-client in master has 10 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
27s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
18s{color} | {color:red} common: The patch generated 3 new + 424 unchanged - 0 
fixed = 427 total (was 424) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
12s{color} | {color:red} spark-client: The patch generated 5 new + 34 unchanged 
- 1 fixed = 39 total (was 35) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
37s{color} | {color:red} spark-client generated 2 new + 10 unchanged - 0 fixed 
= 12 total (was 10) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 16m 26s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:spark-client |
|  |  Boxing/unboxing to parse a primitive new 
org.apache.hive.spark.client.RemoteDriver(String[])  At RemoteDriver.java:new 
org.apache.hive.spark.client.RemoteDriver(String[])  At RemoteDriver.java:[line 
174] |
|  |  Inconsistent synchronization of 
org.apache.hive.spark.client.RemoteDriver$ClientListener.lastDAGCompletionTime; 
locked 66% of time  Unsynchronized access at RemoteDriver.java:66% of time  
Unsynchronized access at RemoteDriver.java:[line 560] |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-13046/dev-support/hive-personality.sh
 |
| git revision | master / df5caa0 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-13046/yetus/diff-checkstyle-common.txt
 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-13046/yetus/diff-checkstyle-spark-client.txt
 |
| findbugs | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-13046/yetus/new-findbugs-spark-client.html
 |
| modules | C: common spark-client U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-13046/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.
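The two new FindBugs findings above are common patterns. The following is a minimal, hypothetical sketch of both patterns and their idiomatic fixes; it is illustrative only and is not the actual RemoteDriver code from the patch:

```java
// Illustrative sketch of the two FindBugs findings reported above.
// All names here are hypothetical, not taken from RemoteDriver.java.
public class FindbugsSketch {

    // Finding 1: "Boxing/unboxing to parse a primitive".
    // Integer.valueOf(s) creates a boxed Integer that is immediately
    // unboxed back to int, which FindBugs flags as wasteful.
    public static int parsePortBoxed(String s) {
        return Integer.valueOf(s);   // flagged: needless box/unbox
    }

    public static int parsePortPrimitive(String s) {
        return Integer.parseInt(s);  // fix: parse straight to the primitive
    }

    // Finding 2: "Inconsistent synchronization" - a field written while
    // holding a lock but read without one. Declaring it volatile makes
    // the unsynchronized reads safe (visibility is guaranteed).
    private volatile long lastDAGCompletionTime;

    public synchronized void onJobEnd() {
        lastDAGCompletionTime = System.currentTimeMillis();
    }

    public long idleMillis() {
        // safe to read without the monitor because the field is volatile
        return System.currentTimeMillis() - lastDAGCompletionTime;
    }

    public static void main(String[] args) {
        System.out.println(parsePortPrimitive("174")); // prints 174
    }
}
```

Whether `volatile` is the right fix depends on whether the field is ever updated with a read-modify-write; if so, the reads must take the same lock instead.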




[jira] [Commented] (HIVE-19878) Hive On Spark support AM shut down when there is no job submit

2018-07-09 Thread Aihua Xu (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-19878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16537343#comment-16537343
 ] 

Aihua Xu commented on HIVE-19878:
-

That's right. You also need to mark the issue as "Patch Available".

> Hive On Spark support AM shut down when there is no job submit
> --
>
> Key: HIVE-19878
> URL: https://issues.apache.org/jira/browse/HIVE-19878
> Project: Hive
>  Issue Type: New Feature
>  Components: Spark
>Reporter: Song Jun
>Assignee: Song Jun
>Priority: Minor
> Attachments: HIVE-19878.1.patch
>
>
> The Application Master of Hive on Spark stays alive on YARN as long as the
> Hive client does not exit (for example, an open session in HiveServer2),
> which occupies a lot of resources. We should let the AM shut down when no
> more jobs are submitted.
> Tez already uses the parameter `tez.session.am.dag.submit.timeout.secs` to
> control when the DAGAppMaster on YARN shuts down, so Spark should do the
> same.
>  
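For illustration, an idle-timeout watchdog along the lines the description proposes could be sketched as below. All names here (IdleWatchdog, jobEnded, the polling approach) are hypothetical and are not taken from the HIVE-19878 patch:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Hypothetical sketch: trigger a shutdown action once no job has
// finished for longer than a configurable idle timeout.
public class IdleWatchdog {
    private volatile long lastJobEndMs;     // volatile: read by the timer thread
    private final long idleTimeoutMs;
    private final Runnable shutdown;
    private final ScheduledExecutorService timer =
            Executors.newSingleThreadScheduledExecutor();

    public IdleWatchdog(long idleTimeoutMs, Runnable shutdown) {
        this.idleTimeoutMs = idleTimeoutMs;
        this.shutdown = shutdown;
        this.lastJobEndMs = System.currentTimeMillis();
    }

    /** Call from the job-end listener to reset the idle clock. */
    public void jobEnded() {
        lastJobEndMs = System.currentTimeMillis();
    }

    /** True once no job has finished for longer than the timeout. */
    public boolean isIdle(long nowMs) {
        return nowMs - lastJobEndMs > idleTimeoutMs;
    }

    /** Poll periodically; run the shutdown action once idle. */
    public void start() {
        timer.scheduleAtFixedRate(() -> {
            if (isIdle(System.currentTimeMillis())) {
                timer.shutdown();
                shutdown.run();  // e.g. stop the SparkContext so YARN frees the AM
            }
        }, idleTimeoutMs, idleTimeoutMs, TimeUnit.MILLISECONDS);
    }
}
```

A real implementation would read the timeout from a configuration property analogous to `tez.session.am.dag.submit.timeout.secs` and hook the shutdown into the remote driver's lifecycle.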



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19878) Hive On Spark support AM shut down when there is no job submit

2018-07-09 Thread Sahil Takiar (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-19878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16537328#comment-16537328
 ] 

Sahil Takiar commented on HIVE-19878:
-

[~windpiger] you also need to mark your patch as "Patch Available" - you can 
follow the directions here: 
https://cwiki.apache.org/confluence/display/Hive/HiveDeveloperFAQ#HiveDeveloperFAQ-HowdoIrunprecommittestsonapatch?



[jira] [Commented] (HIVE-19878) Hive On Spark support AM shut down when there is no job submit

2018-07-09 Thread Aihua Xu (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-19878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16537322#comment-16537322
 ] 

Aihua Xu commented on HIVE-19878:
-

[~windpiger] I guess you need to rebase your patch and reattach it. The build 
probably failed while applying the patch. Sorry for the late reply; I didn't 
see the message.



[jira] [Commented] (HIVE-19878) Hive On Spark support AM shut down when there is no job submit

2018-06-14 Thread Song Jun (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-19878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16513424#comment-16513424
 ] 

Song Jun commented on HIVE-19878:
-

[~aihuaxu] How can I trigger the test robot? Thanks!



[jira] [Commented] (HIVE-19878) Hive On Spark support AM shut down when there is no job submit

2018-06-13 Thread Song Jun (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-19878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16511894#comment-16511894
 ] 

Song Jun commented on HIVE-19878:
-

[~aihuaxu] Thanks! I have renamed the patch.



[jira] [Commented] (HIVE-19878) Hive On Spark support AM shut down when there is no job submit

2018-06-13 Thread Aihua Xu (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-19878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16511635#comment-16511635
 ] 

Aihua Xu commented on HIVE-19878:
-

[~windpiger] I checked the patch and it's promising. Do you want to contribute 
the patch? If so, can you assign the issue to yourself and attach the patch 
with the name HIVE-19878.1.patch to trigger the pre-commit build? Thanks.

> Hive On Spark support AM shut down when there is no job submit
> --
>
> Key: HIVE-19878
> URL: https://issues.apache.org/jira/browse/HIVE-19878
> Project: Hive
>  Issue Type: New Feature
>  Components: Spark
>Reporter: Song Jun
>Priority: Minor
> Attachments: HIVE-19878.patch.1
>
>
> The Application Master of Hive on Spark stays alive on YARN as long as the
> Hive client does not exit (for example, an open session in HiveServer2),
> which occupies a lot of resources. We should let the AM shut down when no
> more jobs are submitted.
> Tez already uses the parameter `tez.session.am.dag.submit.timeout.secs` to
> control when the DAGAppMaster on YARN shuts down, so Spark should do the
> same.
>  





[jira] [Commented] (HIVE-19878) Hive On Spark support AM shut down when there is no job submit

2018-06-13 Thread Sahil Takiar (JIRA)


[ 
https://issues.apache.org/jira/browse/HIVE-19878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16511174#comment-16511174
 ] 

Sahil Takiar commented on HIVE-19878:
-

[~aihuaxu] this looks similar to HIVE-14162, could you take a look?
