[jira] [Assigned] (HIVE-19147) Fix TestTezPerfCliDriver

2018-04-10 Thread Zoltan Haindrich (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19147?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zoltan Haindrich reassigned HIVE-19147:
---


> Fix TestTezPerfCliDriver
> 
>
> Key: HIVE-19147
> URL: https://issues.apache.org/jira/browse/HIVE-19147
> Project: Hive
>  Issue Type: Bug
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
>
> It seems the baked metastore dump is missing the CAT_NAME field added by a 
> recent metastore change.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HIVE-19148) derby/MV_CREATION_METADATA misses CAT_NAME column

2018-04-10 Thread Zoltan Haindrich (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19148?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zoltan Haindrich reassigned HIVE-19148:
---


> derby/MV_CREATION_METADATA misses CAT_NAME column
> -
>
> Key: HIVE-19148
> URL: https://issues.apache.org/jira/browse/HIVE-19148
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
>
> It seems like the upgrade patches for Derby are missing the CAT_NAME column 
> for the MV_CREATION_METADATA table.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19148) derby/MV_CREATION_METADATA misses CAT_NAME column

2018-04-10 Thread Zoltan Haindrich (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19148?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zoltan Haindrich updated HIVE-19148:

Attachment: HIVE-19148.01.patch

> derby/MV_CREATION_METADATA misses CAT_NAME column
> -
>
> Key: HIVE-19148
> URL: https://issues.apache.org/jira/browse/HIVE-19148
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
> Attachments: HIVE-19148.01.patch
>
>
> It seems like the upgrade patches for Derby are missing the CAT_NAME column 
> for the MV_CREATION_METADATA table.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19148) derby/MV_CREATION_METADATA misses CAT_NAME column

2018-04-10 Thread Zoltan Haindrich (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19148?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16431818#comment-16431818
 ] 

Zoltan Haindrich commented on HIVE-19148:
-

I was looking into perf driver problems when I noticed this...
[~alangates] is this small upgrade script missing? ...or is it covered somewhere 
and I just missed it? :)


> derby/MV_CREATION_METADATA misses CAT_NAME column
> -
>
> Key: HIVE-19148
> URL: https://issues.apache.org/jira/browse/HIVE-19148
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
> Attachments: HIVE-19148.01.patch
>
>
> It seems like the upgrade patches for Derby are missing the CAT_NAME column 
> for the MV_CREATION_METADATA table.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-15944) The order of cols is error in ColumnPrunerReduceSinkProc because of sort operator

2018-04-10 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-15944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16431839#comment-16431839
 ] 

Hive QA commented on HIVE-15944:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12856787/HIVE-15944.8.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 93 failed/errored test(s), 13653 tests 
executed
*Failed tests:*
{noformat}
TestBeeLineDriver - did not produce a TEST-*.xml file (likely timed out) 
(batchId=252)
TestDbNotificationListener - did not produce a TEST-*.xml file (likely timed 
out) (batchId=246)
TestDummy - did not produce a TEST-*.xml file (likely timed out) (batchId=252)
TestHCatHiveCompatibility - did not produce a TEST-*.xml file (likely timed 
out) (batchId=246)
TestMiniDruidCliDriver - did not produce a TEST-*.xml file (likely timed out) 
(batchId=252)
TestMiniDruidKafkaCliDriver - did not produce a TEST-*.xml file (likely timed 
out) (batchId=252)
TestNegativeCliDriver - did not produce a TEST-*.xml file (likely timed out) 
(batchId=95)

[nopart_insert.q,insert_into_with_schema.q,input41.q,having1.q,create_table_failure3.q,default_constraint_invalid_default_value.q,database_drop_not_empty_restrict.q,windowing_after_orderby.q,orderbysortby.q,subquery_select_distinct2.q,authorization_uri_alterpart_loc.q,udf_last_day_error_1.q,constraint_duplicate_name.q,create_table_failure4.q,alter_tableprops_external_with_notnull_constraint.q,semijoin5.q,udf_format_number_wrong4.q,deletejar.q,exim_11_nonpart_noncompat_sorting.q,show_tables_bad_db2.q,drop_func_nonexistent.q,nopart_load.q,alter_table_non_partitioned_table_cascade.q,check_constraint_subquery.q,load_wrong_fileformat.q,check_constraint_udtf.q,lockneg_try_db_lock_conflict.q,udf_field_wrong_args_len.q,create_table_failure2.q,create_with_fk_constraints_enforced.q,groupby2_map_skew_multi_distinct.q,authorization_update_noupdatepriv.q,show_columns2.q,authorization_insert_noselectpriv.q,orc_replace_columns3_acid.q,compare_double_bigint.q,authorization_set_nonexistent_conf.q,alter_rename_partition_failure3.q,split_sample_wrong_format2.q,create_with_fk_pk_same_tab.q,compare_double_bigint_2.q,authorization_show_roles_no_admin.q,materialized_view_authorization_rebuild_no_grant.q,unionLimit.q,authorization_revoke_table_fail2.q,authorization_insert_noinspriv.q,duplicate_insert3.q,authorization_desc_table_nosel.q,stats_noscan_non_native.q,orc_change_serde_acid.q,create_or_replace_view7.q,exim_07_nonpart_noncompat_ifof.q,create_with_unique_constraints_enforced.q,udf_concat_ws_wrong2.q,fileformat_bad_class.q,merge_negative_2.q,exim_15_part_nonpart.q,authorization_not_owner_drop_view.q,external1.q,authorization_uri_insert.q,create_with_fk_wrong_ref.q,columnstats_tbllvl_incorrect_column.q,authorization_show_parts_nosel.q,authorization_not_owner_drop_tab.q,external2.q,authorization_deletejar.q,temp_table_create_like_partitions.q,udf_greatest_error_1.q,ptf_negative_AggrFuncsWithNoGBYNoPartDef.q,alter_view_as_select_not_exist.q,touch1.q,groupby3_map_skew_multi_distinct.q,insert_into_notnull_constraint.q,exchange_partition_neg_partition_missing.q,groupby_cube_multi_gby.q,columnstats_tbllvl.q,drop_invalid_constraint2.q,alter_table_add_partition.q,update_not_acid.q,archive5.q,alter_table_constraint_invalid_pk_col.q,ivyDownload.q,udf_instr_wrong_type.q,bad_sample_clause.q,authorization_not_owner_drop_tab2.q,authorization_alter_db_owner.q,show_columns1.q,orc_type_promotion3.q,create_view_failure8.q,strict_join.q,udf_add_months_error_1.q,groupby_cube2.q,groupby_cube1.q,groupby_rollup1.q,genericFileFormat.q,invalid_cast_from_binary_4.q,drop_invalid_constraint1.q,serde_regex.q,show_partitions1.q,check_constraint_nonboolean_expr.q,invalid_cast_from_binary_6.q,create_with_multi_pk_constraint.q,udf_field_wrong_type.q,groupby_grouping_sets4.q,groupby_grouping_sets3.q,insertsel_fail.q,udf_locate_wrong_type.q,orc_type_promotion1_acid.q,set_table_property.q,create_or_replace_view2.q,groupby_grouping_sets2.q,alter_view_failure.q,distinct_windowing_failure1.q,invalid_t_alter2.q,alter_table_constraint_invalid_fk_col1.q,invalid_varchar_length_2.q,authorization_show_grant_otheruser_alltabs.q,subquery_windowing_corr.q,compact_non_acid_table.q,authorization_view_4.q,authorization_disallow_transform.q,materialized_view_authorization_rebuild_other.q,authorization_fail_4.q,dbtxnmgr_nodblock.q,set_hiveconf_internal_variable1.q,input_part0_neg.q,udf_printf_wrong3.q,load_orc_negative2.q,druid_buckets.q,archive2.q,authorization_addjar.q,invalid_sum_syntax.q,insert_into_with_schema1.q,udf_add_months_error_2.q,dyn_part_max_per_n
ode.q,authorization_revoke_table_fail1.q,udf_printf_wrong2.q,archive_multi3.q,udf_printf_wrong1.q,subquery_subquery_chain.q,authorization_view_disable_cbo_4.q,no_matching_udf.q,create_view_failure7.q,drop_native_udf.q,truncate_col

[jira] [Commented] (HIVE-18811) Fix desc table, column comments are not displayed

2018-04-10 Thread tartarus (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18811?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16431840#comment-16431840
 ] 

tartarus commented on HIVE-18811:
-

[~vgarg]   Thank you for your reply!  3.1.0 is ok!

 

> Fix desc table, column comments are not displayed
> -
>
> Key: HIVE-18811
> URL: https://issues.apache.org/jira/browse/HIVE-18811
> Project: Hive
>  Issue Type: Bug
>  Components: CLI
>Affects Versions: 1.2.1, 2.3.2
> Environment: CentOS 6.5
> Hive-1.2.1
> Hive-3.0.0
>Reporter: tartarus
>Assignee: tartarus
>Priority: Major
>  Labels: patch
> Fix For: 3.1.0
>
> Attachments: HIVE_18811.2.patch, HIVE_18811.patch, changes, changes.2
>
>
> When a column comment contains \t,
> e.g.: CREATE TABLE `zhangmang_test`(`name` string COMMENT 'name\tzm');
> then executing: desc zhangmang_test
> returns: name                string              name
> Because \t is the field separator in the output, we should escape it.
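A fix in that spirit would escape the separator characters before the row is formatted. A minimal, self-contained sketch in plain Java, not the attached patch; the class and method names are made up for illustration:

{code}
// Minimal sketch: escape characters that collide with the text-format column
// separators before printing DESCRIBE output. Hypothetical helper names.
public final class CommentEscaper {
  private CommentEscaper() {}

  /** Escapes tabs and newlines so a comment stays in a single output column. */
  public static String escape(String comment) {
    if (comment == null) {
      return null;
    }
    return comment.replace("\t", "\\t").replace("\n", "\\n");
  }

  public static void main(String[] args) {
    // "name\tzm" would otherwise be split at the tab by the output formatter.
    System.out.println(escape("name\tzm"));   // prints: name\tzm
  }
}
{code}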



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18946) Fix columnstats merge NPE

2018-04-10 Thread Laszlo Bodor (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18946?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16431841#comment-16431841
 ] 

Laszlo Bodor commented on HIVE-18946:
-

[~ashutoshc]: I've already created HIVE-19131 for that. The logic in that 
function was already broken, so the fix could introduce other differences 
(q.out changes, not sure).

> Fix columnstats merge NPE
> -
>
> Key: HIVE-18946
> URL: https://issues.apache.org/jira/browse/HIVE-18946
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Zoltan Haindrich
>Assignee: Laszlo Bodor
>Priority: Major
> Attachments: HIVE-18946.01.patch, HIVE-18946.02.patch
>
>
> After analyzing an empty table, a subsequent insert into it may lead to an NPE...
> {code}
> 2018-03-13T06:54:22,503 ERROR [df3fb505-e0bc-4595-a874-b735dab8dff6 main] 
> metastore.RetryingHMSHandler: java.lang.NullPointerException
> at 
> org.apache.hadoop.hive.metastore.api.Decimal.compareTo(Decimal.java:318)
> at 
> org.apache.hadoop.hive.metastore.columnstats.merge.DecimalColumnStatsMerger.merge(DecimalColumnStatsMerger.java:35)
> at 
> org.apache.hadoop.hive.metastore.utils.MetaStoreUtils.mergeColStats(MetaStoreUtils.java:778)
> at 
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.set_aggr_stats_for(HiveMetaStore.java:6934)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:147)
> at 
> org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:108)
> at com.sun.proxy.$Proxy55.set_aggr_stats_for(Unknown Source)
> [...]
> {code}
> reproduce
> {code}
> set hive.stats.autogather=true;
> set hive.explain.user=true;
> drop table if exists testdeci2;
> create table testdeci2(
> id int,
> amount decimal(10,3),
> sales_tax decimal(10,3),
> item string)
> stored as orc location '/tmp/testdeci2'
> TBLPROPERTIES ("transactional"="false")
> ;
> analyze table testdeci2 compute statistics for columns;
> insert into table testdeci2 
> values(1,12.123,12345.123,'desk1'),(2,123.123,1234.123,'desk2');
> {code}
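For context, the stack trace above shows DecimalColumnStatsMerger.merge calling Decimal.compareTo, presumably because one side's low/high bound is absent after analyzing the empty table. A null-tolerant merge of the bounds avoids that call; this is a minimal sketch with simplified types (BigDecimal instead of the metastore Decimal), not the attached patch:

{code}
import java.math.BigDecimal;

// Minimal sketch of a null-tolerant merge of decimal column-stat bounds.
public final class NullSafeBoundsMerge {
  private NullSafeBoundsMerge() {}

  /** Returns the smaller non-null value, or null if both are null. */
  static BigDecimal mergeLow(BigDecimal oldLow, BigDecimal newLow) {
    if (oldLow == null) return newLow;
    if (newLow == null) return oldLow;
    return oldLow.compareTo(newLow) <= 0 ? oldLow : newLow;
  }

  /** Returns the larger non-null value, or null if both are null. */
  static BigDecimal mergeHigh(BigDecimal oldHigh, BigDecimal newHigh) {
    if (oldHigh == null) return newHigh;
    if (newHigh == null) return oldHigh;
    return oldHigh.compareTo(newHigh) >= 0 ? oldHigh : newHigh;
  }

  public static void main(String[] args) {
    // Stats from the empty table have no bounds (null); merging must not throw.
    System.out.println(mergeLow(null, new BigDecimal("12.123")));      // 12.123
    System.out.println(mergeHigh(new BigDecimal("12345.123"), null));  // 12345.123
  }
}
{code}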



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-17033) Miss the jar when create slider package for llap

2018-04-10 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17033?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16431842#comment-16431842
 ] 

Hive QA commented on HIVE-17033:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12875699/HIVE-17033.001.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/10108/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/10108/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-10108/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hiveptest/working/scratch/source-prep.sh' failed with exit 
status 1 and output '+ date '+%Y-%m-%d %T.%3N'
2018-04-10 07:25:19.826
+ [[ -n /usr/lib/jvm/java-8-openjdk-amd64 ]]
+ export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ export 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'MAVEN_OPTS=-Xmx1g '
+ MAVEN_OPTS='-Xmx1g '
+ cd /data/hiveptest/working/
+ tee /data/hiveptest/logs/PreCommit-HIVE-Build-10108/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ git = \s\v\n ]]
+ [[ git = \g\i\t ]]
+ [[ -z master ]]
+ [[ -d apache-github-source-source ]]
+ [[ ! -d apache-github-source-source/.git ]]
+ [[ ! -d apache-github-source-source ]]
+ date '+%Y-%m-%d %T.%3N'
2018-04-10 07:25:19.861
+ cd apache-github-source-source
+ git fetch origin
+ git reset --hard HEAD
HEAD is now at dcd9b59 HIVE-19146 : Delete dangling q.out
+ git clean -f -d
+ git checkout master
Already on 'master'
Your branch is up-to-date with 'origin/master'.
+ git reset --hard origin/master
HEAD is now at dcd9b59 HIVE-19146 : Delete dangling q.out
+ git merge --ff-only origin/master
Already up-to-date.
+ date '+%Y-%m-%d %T.%3N'
2018-04-10 07:25:29.366
+ rm -rf ../yetus_PreCommit-HIVE-Build-10108
+ mkdir ../yetus_PreCommit-HIVE-Build-10108
+ git gc
+ cp -R . ../yetus_PreCommit-HIVE-Build-10108
+ mkdir /data/hiveptest/logs/PreCommit-HIVE-Build-10108/yetus
+ patchCommandPath=/data/hiveptest/working/scratch/smart-apply-patch.sh
+ patchFilePath=/data/hiveptest/working/scratch/build.patch
+ [[ -f /data/hiveptest/working/scratch/build.patch ]]
+ chmod +x /data/hiveptest/working/scratch/smart-apply-patch.sh
+ /data/hiveptest/working/scratch/smart-apply-patch.sh 
/data/hiveptest/working/scratch/build.patch
error: 
a/llap-server/src/java/org/apache/hadoop/hive/llap/cli/LlapServiceDriver.java: 
does not exist in index
error: patch failed: 
llap-server/src/java/org/apache/hadoop/hive/llap/cli/LlapServiceDriver.java:397
Falling back to three-way merge...
Applied patch to 
'llap-server/src/java/org/apache/hadoop/hive/llap/cli/LlapServiceDriver.java' 
with conflicts.
Going to apply patch with: git apply -p1
error: patch failed: 
llap-server/src/java/org/apache/hadoop/hive/llap/cli/LlapServiceDriver.java:397
Falling back to three-way merge...
Applied patch to 
'llap-server/src/java/org/apache/hadoop/hive/llap/cli/LlapServiceDriver.java' 
with conflicts.
U llap-server/src/java/org/apache/hadoop/hive/llap/cli/LlapServiceDriver.java
+ exit 1
'
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12875699 - PreCommit-HIVE-Build

> Miss the jar when create slider package for llap
> 
>
> Key: HIVE-17033
> URL: https://issues.apache.org/jira/browse/HIVE-17033
> Project: Hive
>  Issue Type: Bug
>  Components: llap
>Affects Versions: 3.0.0
>Reporter: Colin Ma
>Assignee: Colin Ma
>Priority: Major
> Fix For: 3.1.0
>
> Attachments: HIVE-17033.001.patch
>
>
> When creating the slider package for LLAP, the jar for log4j-1.2-api is 
> missing. The root cause is that org.apache.log4j.NDC is used to locate the 
> log4j-1.2-api jar, but this class also exists in the log4j jar, so the 
> log4j-1.2-api jar won't be included.
> As a result, log4j-1.2-api-2.6.2.jar can't be found in llap-2.2.0-S.zip.
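The lookup pattern the description refers to is roughly the following. A minimal sketch, not the actual LlapServiceDriver code; it assumes some log4j artifact is on the classpath when run:

{code}
import java.net.URL;
import java.security.CodeSource;

// Minimal sketch of the "find the jar that provides class X" pattern: the result
// is whichever jar the classloader happened to load the class from, so a class
// name that exists in two jars can only ever point at one of them.
public final class JarFromClass {
  private JarFromClass() {}

  /** Returns the jar (or directory) a class was loaded from, or null if unknown. */
  static URL jarOf(Class<?> clazz) {
    CodeSource source = clazz.getProtectionDomain().getCodeSource();
    return source == null ? null : source.getLocation();
  }

  public static void main(String[] args) throws Exception {
    // org.apache.log4j.NDC exists in both log4j and log4j-1.2-api, so this
    // resolves to whichever of the two is found first on the classpath.
    System.out.println(jarOf(Class.forName("org.apache.log4j.NDC")));
  }
}
{code}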



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-17502) Reuse of default session should not throw an exception in LLAP w/ Tez

2018-04-10 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16431843#comment-16431843
 ] 

Hive QA commented on HIVE-17502:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12890993/HIVE-17502.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/10109/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/10109/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-10109/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hiveptest/working/scratch/source-prep.sh' failed with exit 
status 1 and output '+ date '+%Y-%m-%d %T.%3N'
2018-04-10 07:28:05.554
+ [[ -n /usr/lib/jvm/java-8-openjdk-amd64 ]]
+ export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ export 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'MAVEN_OPTS=-Xmx1g '
+ MAVEN_OPTS='-Xmx1g '
+ cd /data/hiveptest/working/
+ tee /data/hiveptest/logs/PreCommit-HIVE-Build-10109/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ git = \s\v\n ]]
+ [[ git = \g\i\t ]]
+ [[ -z master ]]
+ [[ -d apache-github-source-source ]]
+ [[ ! -d apache-github-source-source/.git ]]
+ [[ ! -d apache-github-source-source ]]
+ date '+%Y-%m-%d %T.%3N'
2018-04-10 07:28:05.557
+ cd apache-github-source-source
+ git fetch origin
+ git reset --hard HEAD
HEAD is now at dcd9b59 HIVE-19146 : Delete dangling q.out
+ git clean -f -d
+ git checkout master
Already on 'master'
Your branch is up-to-date with 'origin/master'.
+ git reset --hard origin/master
HEAD is now at dcd9b59 HIVE-19146 : Delete dangling q.out
+ git merge --ff-only origin/master
Already up-to-date.
+ date '+%Y-%m-%d %T.%3N'
2018-04-10 07:28:06.110
+ rm -rf ../yetus_PreCommit-HIVE-Build-10109
+ mkdir ../yetus_PreCommit-HIVE-Build-10109
+ git gc
+ cp -R . ../yetus_PreCommit-HIVE-Build-10109
+ mkdir /data/hiveptest/logs/PreCommit-HIVE-Build-10109/yetus
+ patchCommandPath=/data/hiveptest/working/scratch/smart-apply-patch.sh
+ patchFilePath=/data/hiveptest/working/scratch/build.patch
+ [[ -f /data/hiveptest/working/scratch/build.patch ]]
+ chmod +x /data/hiveptest/working/scratch/smart-apply-patch.sh
+ /data/hiveptest/working/scratch/smart-apply-patch.sh 
/data/hiveptest/working/scratch/build.patch
error: patch failed: 
common/src/java/org/apache/hadoop/hive/conf/HiveConf.java:2425
Falling back to three-way merge...
Applied patch to 'common/src/java/org/apache/hadoop/hive/conf/HiveConf.java' 
with conflicts.
Going to apply patch with: git apply -p0
error: patch failed: 
common/src/java/org/apache/hadoop/hive/conf/HiveConf.java:2425
Falling back to three-way merge...
Applied patch to 'common/src/java/org/apache/hadoop/hive/conf/HiveConf.java' 
with conflicts.
U common/src/java/org/apache/hadoop/hive/conf/HiveConf.java
+ exit 1
'
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12890993 - PreCommit-HIVE-Build

> Reuse of default session should not throw an exception in LLAP w/ Tez
> -
>
> Key: HIVE-17502
> URL: https://issues.apache.org/jira/browse/HIVE-17502
> Project: Hive
>  Issue Type: Bug
>  Components: llap, Tez
>Affects Versions: 2.1.1, 2.2.0
> Environment: HDP 2.6.1.0-129, Hue 4
>Reporter: Thai Bui
>Assignee: Thai Bui
>Priority: Major
> Fix For: 3.1.0
>
> Attachments: HIVE-17502.patch
>
>
> Hive2 w/ LLAP on Tez doesn't allow a currently used, default session to be 
> skipped mostly because of this line 
> https://github.com/apache/hive/blob/master/ql/src/java/org/apache/hadoop/hive/ql/exec/tez/TezSessionPoolManager.java#L365.
> However, some clients such as Hue 4 allow multiple sessions to be used per 
> user. Under this configuration, a Thrift client will send a request to either 
> reuse or open a new session. The reuse request could include the session id 
> of a currently used snippet being executed in Hue, which causes HS2 to throw 
> an exception:
> {noformat}
> 2017-09-10T17:51:36,548 INFO  [Thread-89]: tez.TezSessionPoolManager 
> (TezSessionPoolManager.java:canWorkWithSameSession(512)) - The current user: 
> hive, session user: hive
> 2017-09-10T17:51:36,549 ERROR [Thread-89]: exec.Task 
> (TezTask.java:execute(232)) - Failed to execute tez graph.
> or

[jira] [Commented] (HIVE-14304) Beeline command will fail when entireLineAsCommand set to true

2018-04-10 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16431846#comment-16431846
 ] 

Hive QA commented on HIVE-14304:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12819253/HIVE-14304.1.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/10110/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/10110/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-10110/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hiveptest/working/scratch/source-prep.sh' failed with exit 
status 1 and output '+ date '+%Y-%m-%d %T.%3N'
2018-04-10 07:29:10.466
+ [[ -n /usr/lib/jvm/java-8-openjdk-amd64 ]]
+ export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ export 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'MAVEN_OPTS=-Xmx1g '
+ MAVEN_OPTS='-Xmx1g '
+ cd /data/hiveptest/working/
+ tee /data/hiveptest/logs/PreCommit-HIVE-Build-10110/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ git = \s\v\n ]]
+ [[ git = \g\i\t ]]
+ [[ -z master ]]
+ [[ -d apache-github-source-source ]]
+ [[ ! -d apache-github-source-source/.git ]]
+ [[ ! -d apache-github-source-source ]]
+ date '+%Y-%m-%d %T.%3N'
2018-04-10 07:29:10.469
+ cd apache-github-source-source
+ git fetch origin
+ git reset --hard HEAD
HEAD is now at dcd9b59 HIVE-19146 : Delete dangling q.out
+ git clean -f -d
+ git checkout master
Already on 'master'
Your branch is up-to-date with 'origin/master'.
+ git reset --hard origin/master
HEAD is now at dcd9b59 HIVE-19146 : Delete dangling q.out
+ git merge --ff-only origin/master
Already up-to-date.
+ date '+%Y-%m-%d %T.%3N'
2018-04-10 07:29:10.988
+ rm -rf ../yetus_PreCommit-HIVE-Build-10110
+ mkdir ../yetus_PreCommit-HIVE-Build-10110
+ git gc
+ cp -R . ../yetus_PreCommit-HIVE-Build-10110
+ mkdir /data/hiveptest/logs/PreCommit-HIVE-Build-10110/yetus
+ patchCommandPath=/data/hiveptest/working/scratch/smart-apply-patch.sh
+ patchFilePath=/data/hiveptest/working/scratch/build.patch
+ [[ -f /data/hiveptest/working/scratch/build.patch ]]
+ chmod +x /data/hiveptest/working/scratch/smart-apply-patch.sh
+ /data/hiveptest/working/scratch/smart-apply-patch.sh 
/data/hiveptest/working/scratch/build.patch
error: patch failed: beeline/src/java/org/apache/hive/beeline/Commands.java:1130
Falling back to three-way merge...
Applied patch to 'beeline/src/java/org/apache/hive/beeline/Commands.java' with 
conflicts.
Going to apply patch with: git apply -p0
error: patch failed: beeline/src/java/org/apache/hive/beeline/Commands.java:1130
Falling back to three-way merge...
Applied patch to 'beeline/src/java/org/apache/hive/beeline/Commands.java' with 
conflicts.
U beeline/src/java/org/apache/hive/beeline/Commands.java
+ exit 1
'
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12819253 - PreCommit-HIVE-Build

> Beeline command will fail when entireLineAsCommand set to true
> --
>
> Key: HIVE-14304
> URL: https://issues.apache.org/jira/browse/HIVE-14304
> Project: Hive
>  Issue Type: Bug
>  Components: Beeline
>Affects Versions: 1.3.0, 2.2.0
>Reporter: Niklaus Xiao
>Assignee: Hari Sankar Sivarama Subramaniyan
>Priority: Major
> Fix For: 3.1.0
>
> Attachments: HIVE-14304.1.patch
>
>
> Use beeline
> {code}
> beeline --entireLineAsCommand=true
> {code}
> show tables fails:
> {code}
> 0: jdbc:hive2://189.39.151.44:21066/> show tables;
> Error: Error while compiling statement: FAILED: ParseException line 1:11 
> extraneous input ';' expecting EOF near '' (state=42000,code=4)
> {code}
> We should remove the trailing semi-colon.
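A change in that direction would trim the semicolon before the line is handed to the driver. A minimal sketch with a made-up helper name, not the Beeline Commands.java change itself:

{code}
// Minimal sketch: when the whole line is treated as one command, strip a
// trailing semicolon before passing the text on. Hypothetical helper.
public final class TrailingSemicolon {
  private TrailingSemicolon() {}

  static String stripTrailingSemicolon(String line) {
    String trimmed = line.trim();
    return trimmed.endsWith(";")
        ? trimmed.substring(0, trimmed.length() - 1)
        : trimmed;
  }

  public static void main(String[] args) {
    System.out.println(stripTrailingSemicolon("show tables;")); // show tables
  }
}
{code}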



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19075) Fix NPE when trying to drop or get DB with null name

2018-04-10 Thread Marta Kuczora (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16431845#comment-16431845
 ] 

Marta Kuczora commented on HIVE-19075:
--

Thanks a lot [~pvary] for committing the patch.

> Fix NPE when trying to drop or get DB with null name
> 
>
> Key: HIVE-19075
> URL: https://issues.apache.org/jira/browse/HIVE-19075
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Reporter: Marta Kuczora
>Assignee: Marta Kuczora
>Priority: Minor
> Fix For: 3.0.0
>
> Attachments: HIVE-19075.1.patch, HIVE-19075.2.patch, 
> HIVE-19075.3.patch
>
>
> The TestDatabases tests revealed that an NPE is thrown if the get_database_core 
> and drop_database_core methods are called with a null DB name. These NPEs could 
> be prevented with a simple null check; a MetaException with a proper error 
> message should be thrown instead.
> Example: an NPE is thrown in the following test cases:
>  * TestDatabases.testGetDatabaseNullName
>  * TestDatabases.testDropDatabaseNullName
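The guard the description asks for is essentially the following. A minimal, self-contained sketch; it declares a simplified stand-in for org.apache.hadoop.hive.metastore.api.MetaException so it compiles on its own:

{code}
// Minimal sketch: reject a null database name up front with a MetaException
// instead of letting an NPE surface deeper in the call.
public final class DbNameGuard {
  // Simplified stand-in for the real metastore MetaException type.
  static class MetaException extends Exception {
    MetaException(String msg) { super(msg); }
  }

  static void checkDbNameNotNull(String dbName) throws MetaException {
    if (dbName == null) {
      throw new MetaException("Database name cannot be null.");
    }
  }

  public static void main(String[] args) {
    try {
      checkDbNameNotNull(null);
    } catch (MetaException e) {
      System.out.println(e.getMessage()); // Database name cannot be null.
    }
  }
}
{code}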



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19119) Fix the TestAppendPartitions tests which are failing in the pre-commit runs

2018-04-10 Thread Marta Kuczora (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19119?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16431844#comment-16431844
 ] 

Marta Kuczora commented on HIVE-19119:
--

Thanks a lot [~alangates] for committing the patch.

> Fix the TestAppendPartitions tests which are failing in the pre-commit runs
> ---
>
> Key: HIVE-19119
> URL: https://issues.apache.org/jira/browse/HIVE-19119
> Project: Hive
>  Issue Type: Bug
>  Components: Test
>Reporter: Marta Kuczora
>Assignee: Marta Kuczora
>Priority: Minor
> Fix For: 3.0.0
>
> Attachments: HIVE-19119.1.patch
>
>
> The test got fixed in 
> [HIVE-19060|https://issues.apache.org/jira/browse/HIVE-19060], but the fix 
> got overwritten by another commit, so the testAppendPartitionNullPartValues 
> and testAppendPartitionEmptyPartValues test cases are failing again.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-14925) MSCK repair table hang while running with multi threading enabled

2018-04-10 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14925?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16431848#comment-16431848
 ] 

Hive QA commented on HIVE-14925:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12832969/HIVE-14925.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/10111/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/10111/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-10111/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hiveptest/working/scratch/source-prep.sh' failed with exit 
status 1 and output '+ date '+%Y-%m-%d %T.%3N'
2018-04-10 07:30:17.238
+ [[ -n /usr/lib/jvm/java-8-openjdk-amd64 ]]
+ export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ export 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'MAVEN_OPTS=-Xmx1g '
+ MAVEN_OPTS='-Xmx1g '
+ cd /data/hiveptest/working/
+ tee /data/hiveptest/logs/PreCommit-HIVE-Build-10111/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ git = \s\v\n ]]
+ [[ git = \g\i\t ]]
+ [[ -z master ]]
+ [[ -d apache-github-source-source ]]
+ [[ ! -d apache-github-source-source/.git ]]
+ [[ ! -d apache-github-source-source ]]
+ date '+%Y-%m-%d %T.%3N'
2018-04-10 07:30:17.240
+ cd apache-github-source-source
+ git fetch origin
+ git reset --hard HEAD
HEAD is now at dcd9b59 HIVE-19146 : Delete dangling q.out
+ git clean -f -d
+ git checkout master
Already on 'master'
Your branch is up-to-date with 'origin/master'.
+ git reset --hard origin/master
HEAD is now at dcd9b59 HIVE-19146 : Delete dangling q.out
+ git merge --ff-only origin/master
Already up-to-date.
+ date '+%Y-%m-%d %T.%3N'
2018-04-10 07:30:17.774
+ rm -rf ../yetus_PreCommit-HIVE-Build-10111
+ mkdir ../yetus_PreCommit-HIVE-Build-10111
+ git gc
+ cp -R . ../yetus_PreCommit-HIVE-Build-10111
+ mkdir /data/hiveptest/logs/PreCommit-HIVE-Build-10111/yetus
+ patchCommandPath=/data/hiveptest/working/scratch/smart-apply-patch.sh
+ patchFilePath=/data/hiveptest/working/scratch/build.patch
+ [[ -f /data/hiveptest/working/scratch/build.patch ]]
+ chmod +x /data/hiveptest/working/scratch/smart-apply-patch.sh
+ /data/hiveptest/working/scratch/smart-apply-patch.sh 
/data/hiveptest/working/scratch/build.patch
error: 
a/ql/src/java/org/apache/hadoop/hive/ql/metadata/HiveMetaStoreChecker.java: 
does not exist in index
error: patch failed: 
ql/src/java/org/apache/hadoop/hive/ql/metadata/HiveMetaStoreChecker.java:426
Falling back to three-way merge...
Applied patch to 
'ql/src/java/org/apache/hadoop/hive/ql/metadata/HiveMetaStoreChecker.java' with 
conflicts.
Going to apply patch with: git apply -p1
error: patch failed: 
ql/src/java/org/apache/hadoop/hive/ql/metadata/HiveMetaStoreChecker.java:426
Falling back to three-way merge...
Applied patch to 
'ql/src/java/org/apache/hadoop/hive/ql/metadata/HiveMetaStoreChecker.java' with 
conflicts.
U ql/src/java/org/apache/hadoop/hive/ql/metadata/HiveMetaStoreChecker.java
+ exit 1
'
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12832969 - PreCommit-HIVE-Build

> MSCK repair table hang while running with multi threading enabled
> -
>
> Key: HIVE-14925
> URL: https://issues.apache.org/jira/browse/HIVE-14925
> Project: Hive
>  Issue Type: Bug
>  Components: CLI
>Affects Versions: 2.2.0
>Reporter: Ratheesh Kamoor
>Assignee: Ratheesh Kamoor
>Priority: Critical
> Fix For: 3.1.0
>
> Attachments: HIVE-14925.patch
>
>
> MSCK REPAIR TABLE hangs while running with multi-threading enabled 
> (default). I think it is because of a major design flaw in how the thread pool 
> is implemented in the HiveMetaStoreChecker class / checkPartitionDirs method. 
> This method has a thread pool to which it submits Callables, but each Callable 
> makes a recursive call to checkPartitionDirs again. This code will hang when 
> the number of directories is larger than the thread pool size.
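For contrast, a traversal in which workers never block on tasks they submitted themselves cannot exhaust the pool; only the coordinating thread waits, and only on one whole level of directories at a time. A minimal sketch of that idea in plain Java, not the actual HiveMetaStoreChecker fix:

{code}
import java.io.File;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Minimal sketch of a deadlock-free alternative: worker tasks only list one
// directory and return its subdirectories; they never submit new tasks and
// wait on them, so a fixed-size pool cannot deadlock on nested waits.
public final class LevelWiseDirWalk {
  public static void main(String[] args) throws Exception {
    File root = new File(args.length > 0 ? args[0] : ".");
    ExecutorService pool = Executors.newFixedThreadPool(4);
    List<File> currentLevel = Collections.singletonList(root);
    while (!currentLevel.isEmpty()) {
      List<Callable<List<File>>> tasks = new ArrayList<>();
      for (File dir : currentLevel) {
        tasks.add(() -> {
          // In MSCK this is where a single partition directory would be checked.
          File[] subDirs = dir.listFiles(File::isDirectory);
          return subDirs == null ? Collections.<File>emptyList() : Arrays.asList(subDirs);
        });
      }
      // Only the coordinating thread blocks, and only on one whole level at a time.
      List<File> nextLevel = new ArrayList<>();
      for (Future<List<File>> result : pool.invokeAll(tasks)) {
        nextLevel.addAll(result.get());
      }
      currentLevel = nextLevel;
    }
    pool.shutdown();
  }
}
{code}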



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-15222) replace org.json usage in ExplainTask/TezTask related classes with some alternative

2018-04-10 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-15222?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16431851#comment-16431851
 ] 

Hive QA commented on HIVE-15222:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12851538/HIVE-15222.3.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/10112/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/10112/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-10112/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hiveptest/working/scratch/source-prep.sh' failed with exit 
status 1 and output '+ date '+%Y-%m-%d %T.%3N'
2018-04-10 07:31:23.001
+ [[ -n /usr/lib/jvm/java-8-openjdk-amd64 ]]
+ export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ export 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'MAVEN_OPTS=-Xmx1g '
+ MAVEN_OPTS='-Xmx1g '
+ cd /data/hiveptest/working/
+ tee /data/hiveptest/logs/PreCommit-HIVE-Build-10112/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ git = \s\v\n ]]
+ [[ git = \g\i\t ]]
+ [[ -z master ]]
+ [[ -d apache-github-source-source ]]
+ [[ ! -d apache-github-source-source/.git ]]
+ [[ ! -d apache-github-source-source ]]
+ date '+%Y-%m-%d %T.%3N'
2018-04-10 07:31:23.003
+ cd apache-github-source-source
+ git fetch origin
+ git reset --hard HEAD
HEAD is now at dcd9b59 HIVE-19146 : Delete dangling q.out
+ git clean -f -d
+ git checkout master
Already on 'master'
Your branch is up-to-date with 'origin/master'.
+ git reset --hard origin/master
HEAD is now at dcd9b59 HIVE-19146 : Delete dangling q.out
+ git merge --ff-only origin/master
Already up-to-date.
+ date '+%Y-%m-%d %T.%3N'
2018-04-10 07:31:23.483
+ rm -rf ../yetus_PreCommit-HIVE-Build-10112
+ mkdir ../yetus_PreCommit-HIVE-Build-10112
+ git gc
+ cp -R . ../yetus_PreCommit-HIVE-Build-10112
+ mkdir /data/hiveptest/logs/PreCommit-HIVE-Build-10112/yetus
+ patchCommandPath=/data/hiveptest/working/scratch/smart-apply-patch.sh
+ patchFilePath=/data/hiveptest/working/scratch/build.patch
+ [[ -f /data/hiveptest/working/scratch/build.patch ]]
+ chmod +x /data/hiveptest/working/scratch/smart-apply-patch.sh
+ /data/hiveptest/working/scratch/smart-apply-patch.sh 
/data/hiveptest/working/scratch/build.patch
error: common/src/java/org/apache/hadoop/hive/common/jsonexplain/tez/Op.java: 
does not exist in index
error: 
common/src/java/org/apache/hadoop/hive/common/jsonexplain/tez/Stage.java: does 
not exist in index
error: patch failed: 
common/src/java/org/apache/hadoop/hive/common/jsonexplain/tez/TezJsonParser.java:27
Falling back to three-way merge...
Applied patch to 
'common/src/java/org/apache/hadoop/hive/common/jsonexplain/tez/TezJsonParser.java'
 with conflicts.
error: 
common/src/java/org/apache/hadoop/hive/common/jsonexplain/tez/Vertex.java: does 
not exist in index
error: patch failed: 
ql/src/java/org/apache/hadoop/hive/ql/exec/ExplainTask.java:20
Falling back to three-way merge...
Applied patch to 'ql/src/java/org/apache/hadoop/hive/ql/exec/ExplainTask.java' 
with conflicts.
error: patch failed: ql/src/java/org/apache/hadoop/hive/ql/hooks/ATSHook.java:32
Falling back to three-way merge...
Applied patch to 'ql/src/java/org/apache/hadoop/hive/ql/hooks/ATSHook.java' 
with conflicts.
error: src/java/org/apache/hadoop/hive/common/jsonexplain/JsonParser.java: does 
not exist in index
error: src/java/org/apache/hadoop/hive/common/jsonexplain/tez/Op.java: does not 
exist in index
error: src/java/org/apache/hadoop/hive/common/jsonexplain/tez/Stage.java: does 
not exist in index
error: 
src/java/org/apache/hadoop/hive/common/jsonexplain/tez/TezJsonParser.java: does 
not exist in index
error: src/java/org/apache/hadoop/hive/common/jsonexplain/tez/Vertex.java: does 
not exist in index
error: src/java/org/apache/hadoop/hive/ql/exec/ExplainTask.java: does not exist 
in index
error: src/java/org/apache/hadoop/hive/ql/hooks/ATSHook.java: does not exist in 
index
error: java/org/apache/hadoop/hive/common/jsonexplain/JsonParser.java: does not 
exist in index
error: java/org/apache/hadoop/hive/common/jsonexplain/tez/Op.java: does not 
exist in index
error: java/org/apache/hadoop/hive/common/jsonexplain/tez/Stage.java: does not 
exist in index
error: java/org/apache/hadoop/hive/common/jsonexplain/tez/TezJsonParser.java: 
does not exist in index
error: java/org/apache/hadoop/hive/common/jsonexpla

[jira] [Commented] (HIVE-15945) Remove debug parameter in HADOOP_OPTS environment when start a new job local.

2018-04-10 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-15945?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16431883#comment-16431883
 ] 

Hive QA commented on HIVE-15945:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
23s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
16s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
43s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
7s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
45s{color} | {color:red} ql: The patch generated 1 new + 8 unchanged - 0 fixed 
= 9 total (was 8) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 16m 38s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-10113/dev-support/hive-personality.sh
 |
| git revision | master / dcd9b59 |
| Default Java | 1.8.0_111 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-10113/yetus/diff-checkstyle-ql.txt
 |
| modules | C: ql U: ql |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-10113/yetus.txt |
| Powered by | Apache Yetus   http://yetus.apache.org |


This message was automatically generated.



> Remove debug parameter in HADOOP_OPTS environment when start a new job local.
> -
>
> Key: HIVE-15945
> URL: https://issues.apache.org/jira/browse/HIVE-15945
> Project: Hive
>  Issue Type: Bug
>  Components: Query Processor
>Affects Versions: 2.2.0
>Reporter: wan kun
>Assignee: wan kun
>Priority: Minor
>  Labels: patch
> Fix For: 3.1.0
>
> Attachments: HIVE-15945.patch
>
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> When Hive starts a new job in a child VM, the debug parameter will be defined 
> twice.
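The summary suggests stripping the already-present debug agent option before the child VM is launched. A minimal sketch of that idea with a hypothetical helper, not the attached patch:

{code}
import java.util.Arrays;
import java.util.stream.Collectors;

// Minimal sketch: drop JDWP debug-agent options from an options string so the
// child VM does not get the parameter defined twice. Hypothetical helper.
public final class StripDebugOpts {
  static String removeDebugAgent(String opts) {
    return Arrays.stream(opts.trim().split("\\s+"))
        .filter(o -> !o.startsWith("-agentlib:jdwp") && !o.startsWith("-Xrunjdwp"))
        .collect(Collectors.joining(" "));
  }

  public static void main(String[] args) {
    String hadoopOpts = "-Xmx1g -agentlib:jdwp=transport=dt_socket,server=y,address=8000";
    System.out.println(removeDebugAgent(hadoopOpts)); // -Xmx1g
  }
}
{code}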



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19077) Handle duplicate ptests requests standing in queue at the same time

2018-04-10 Thread Peter Vary (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19077?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16431930#comment-16431930
 ] 

Peter Vary commented on HIVE-19077:
---

+1

> Handle duplicate ptests requests standing in queue at the same time
> ---
>
> Key: HIVE-19077
> URL: https://issues.apache.org/jira/browse/HIVE-19077
> Project: Hive
>  Issue Type: Improvement
>  Components: Testing Infrastructure
>Reporter: Adam Szita
>Assignee: Adam Szita
>Priority: Major
> Attachments: HIVE-19077.0.patch, HIVE-19077.1.patch
>
>
> I've been keeping an eye on our {{PreCommit-HIVE-Build}} job, and what I 
> noticed is that sometimes huge queues build up that contain jiras more than 
> once. (Yesterday I saw a queue of 40 with only 31 distinct jiras.)
> A simple scenario: I upload a patch, it gets queued for ptest (the queue is 
> already long), and 3 hours later I update it, re-upload and re-queue. The 
> current ptest infra seems to be smart enough to always deal with the latest 
> patch, so the same patch will be tested twice (about 3 hours apart), most 
> probably with the same result.
> I propose we do some deduplication: when ptest starts running the request for 
> Jira X, it can take a look at the current queue and see if X is there again. 
> If so, it can skip it for now; it will be picked up later anyway.
> In practice this means that if you reconsider your patch and update it, your 
> original place in the queue is gone (a penalty for changing it), but overall 
> it saves resources for the whole community.
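The proposed check is simple to express over a queue of issue keys. A minimal sketch with a made-up data model; the real infrastructure works on Jenkins build requests:

{code}
import java.util.ArrayDeque;
import java.util.Deque;

// Minimal sketch of the proposed de-duplication: a request taken from the head
// of the queue is skipped if the same issue is queued again later, so only the
// most recent request for an issue is actually tested.
public final class PTestQueueDedup {
  static String nextToTest(Deque<String> queue) {
    while (!queue.isEmpty()) {
      String issue = queue.poll();
      if (!queue.contains(issue)) {
        return issue;            // no later duplicate: run it now
      }
      // a newer request for the same issue is still queued; skip this one
    }
    return null;
  }

  public static void main(String[] args) {
    Deque<String> queue = new ArrayDeque<>();
    queue.add("HIVE-19077");
    queue.add("HIVE-19075");
    queue.add("HIVE-19077");     // patch re-uploaded, queued again
    System.out.println(nextToTest(queue)); // HIVE-19075
  }
}
{code}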



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-15945) Remove debug parameter in HADOOP_OPTS environment when start a new job local.

2018-04-10 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-15945?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16431969#comment-16431969
 ] 

Hive QA commented on HIVE-15945:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12853094/HIVE-15945.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 105 failed/errored test(s), 13652 tests 
executed
*Failed tests:*
{noformat}
TestBeeLineDriver - did not produce a TEST-*.xml file (likely timed out) 
(batchId=252)
TestDbNotificationListener - did not produce a TEST-*.xml file (likely timed 
out) (batchId=246)
TestDummy - did not produce a TEST-*.xml file (likely timed out) (batchId=252)
TestHCatHiveCompatibility - did not produce a TEST-*.xml file (likely timed 
out) (batchId=246)
TestMiniDruidCliDriver - did not produce a TEST-*.xml file (likely timed out) 
(batchId=252)
TestMiniDruidKafkaCliDriver - did not produce a TEST-*.xml file (likely timed 
out) (batchId=252)
TestNegativeCliDriver - did not produce a TEST-*.xml file (likely timed out) 
(batchId=96)

[udf_invalid.q,authorization_uri_export.q,default_constraint_complex_default_value.q,druid_datasource2.q,check_constraint_max_length.q,view_update.q,default_partition_name.q,authorization_public_create.q,load_wrong_fileformat_rc_seq.q,default_constraint_invalid_type.q,altern1.q,describe_xpath1.q,drop_view_failure2.q,temp_table_rename.q,invalid_select_column_with_subquery.q,udf_trunc_error1.q,insert_view_failure.q,dbtxnmgr_nodbunlock.q,authorization_show_columns.q,cte_recursion.q,merge_constraint_notnull.q,load_part_nospec.q,clusterbyorderby.q,orc_type_promotion2.q,ctas_noperm_loc.q,udf_min.q,udf_instr_wrong_args_len.q,invalid_create_tbl2.q,part_col_complex_type.q,authorization_drop_db_empty.q,smb_mapjoin_14.q,subquery_scalar_multi_rows.q,alter_partition_coltype_2columns.q,subquery_corr_in_agg.q,insert_overwrite_notnull_constraint.q,authorization_show_grant_otheruser_wtab.q,regex_col_groupby.q,ptf_negative_DuplicateWindowAlias.q,exim_22_export_authfail.q,udf_likeany_wrong1.q,groupby_key.q,ambiguous_col.q,groupby3_multi_distinct.q,authorization_alter_drop_ptn.q,invalid_cast_from_binary_5.q,show_create_table_does_not_exist.q,invalid_select_column.q,exim_20_managed_location_over_existing.q,interval_3.q,authorization_compile.q,join35.q,udf_concat_ws_wrong3.q,create_or_replace_view8.q,create_external_with_notnull_constraint.q,split_sample_out_of_range.q,materialized_view_no_transactional_rewrite.q,authorization_show_grant_otherrole.q,create_with_constraints_duplicate_name.q,invalid_stddev_samp_syntax.q,authorization_view_disable_cbo_7.q,autolocal1.q,avro_non_nullable_union.q,load_orc_negative_part.q,drop_view_failure1.q,columnstats_partlvl_invalid_values_autogather.q,exim_13_nonnative_import.q,alter_table_wrong_regex.q,udf_next_day_error_2.q,authorization_select.q,udf_trunc_error2.q,authorization_view_7.q,udf_format_number_wrong5.q,touch2.q,orc_type_promotion1.q,lateral_view_alias.q,show_tables_bad_db1.q,unset_table_property.q,alter_non_native.q,nvl_mismatch_type.q,load_orc_negative3.q,authorization_create_role_no_admin.q,invalid_distinct1.q,authorization_grant_server.q,orc_type_promotion3_acid.q,hms_using_serde_alter_table_update_columns.q,show_tables_bad1.q,macro_unused_parameter.q,drop_invalid_constraint3.q,drop_partition_filter_failure.q,char_pad_convert_fail3.q,exim_23_import_exist_authfail.q,drop_invalid_constraint4.q,authorization_create_macro1.q,archive1.q,subquery_multiple_cols_in_select.q,change_hive_hdfs_session_path.q,udf_trunc_error3.q,invalid_variance_syntax.q,authorization_truncate_2.q,invalid_avg_syntax.q,invalid_select_column_with_tablename.q,mm_truncate_cols.q,groupby_grouping_sets1.q,druid_location.q,groupby2_multi_distinct.q,authorization_sba_drop_table.q,dynamic_partitions_with_whitelist.q,compare_string_bigint_2.q,udf_greatest_error_2.q,authorization_view_6.q,show_tablestatus.q,duplicate_alias_in_transform_schema.q,create_with_fk_uk_same_tab.q,udtf_not_supported3.q,alter_table_constraint_invalid_fk_col2.q,udtf_not_supported1.q,dbtxnmgr_notableunlock.q,ptf_negative_InvalidValueBoundary.q,alter_table_constraint_duplicate_pk.q,udf_printf_wrong4.q,create_view_failure9.q,udf_elt_wrong_type.q,selectDistinctStarNeg_1.q,invalid_mapjoin1.q,load_stored_as_dirs.q,input1.q,udf_sort_array_wrong1.q,invalid_distinct2.q,invalid_select_fn.q,authorization_role_grant_otherrole.q,archive4.q,load_nonpart_authfail.q,recursive_view.q,authorization_view_disable_cbo_1.q,desc_failure4.q,create_not_acid.q,udf_sort_array_wrong3.q,char_pad_convert_fail0.q,udf_map_values_arg_type.q,alter_view_failure6_2.
q,alter_partition_change_col_nonexist.q,update_non_acid_table.q,authorization_view_disable_cbo_5.q,ct_noperm_loc.q,interval_1.q,authorization_show_grant_otheruser_all.q,authorization_view_2.q,show_tables_bad2.q,groupby_rollup2.q,trunc

[jira] [Commented] (HIVE-17084) Turn on hive.stats.fetch.column.stats configuration flag

2018-04-10 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16431986#comment-16431986
 ] 

Hive QA commented on HIVE-17084:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
23s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
19s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
16s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
16s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 10m 56s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-10114/dev-support/hive-personality.sh
 |
| git revision | master / dcd9b59 |
| Default Java | 1.8.0_111 |
| modules | C: common U: common |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-10114/yetus.txt |
| Powered by | Apache Yetus   http://yetus.apache.org |


This message was automatically generated.



> Turn on hive.stats.fetch.column.stats configuration flag
> 
>
> Key: HIVE-17084
> URL: https://issues.apache.org/jira/browse/HIVE-17084
> Project: Hive
>  Issue Type: Task
>  Components: Statistics
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Major
> Fix For: 3.1.0
>
> Attachments: HIVE-17084.1.patch
>
>
> This flag is off by default and could result in bad plans due to missing 
> column statistics.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18839) Implement incremental rebuild for materialized views (only insert operations in source tables)

2018-04-10 Thread Jesus Camacho Rodriguez (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18839?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-18839:
---
   Resolution: Fixed
Fix Version/s: 3.0.0
   Status: Resolved  (was: Patch Available)

Pushed to master, branch-3. Thanks [~ashutoshc]!

> Implement incremental rebuild for materialized views (only insert operations 
> in source tables)
> --
>
> Key: HIVE-18839
> URL: https://issues.apache.org/jira/browse/HIVE-18839
> Project: Hive
>  Issue Type: Improvement
>  Components: Materialized views
>Affects Versions: 3.0.0
>Reporter: Jesus Camacho Rodriguez
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
>  Labels: TODOC3.0
> Fix For: 3.0.0
>
> Attachments: HIVE-18839.01.patch, HIVE-18839.02.patch, 
> HIVE-18839.03.patch, HIVE-18839.04.patch, HIVE-18839.patch
>
>
> The implementation will follow the current code path for full rebuild. 
> When the MV query plan is retrieved, if the MV contents are outdated because 
> there were insert operations in the source tables, we will introduce a filter 
> with a condition based on the stored value of ValidWriteIdLists. For instance, 
> {{WRITE_ID < high_txn_id AND WRITE_ID NOT IN (x, y, ...)}}. Then the 
> rewriting will do the rest of the work by creating a partial rewriting, where 
> the contents of the MV are read as well as the new contents from the source 
> tables.
> This mechanism will work not only for ALTER MV... REBUILD, but also for user 
> queries, which will be able to benefit from using outdated MVs to compute part 
> of the needed results.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19147) Fix TestTezPerfCliDriver

2018-04-10 Thread Zoltan Haindrich (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19147?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16431998#comment-16431998
 ] 

Zoltan Haindrich commented on HIVE-19147:
-

This patch will effectively undo the changes of HIVE-19128: those results were 
bad because the CBO was technically in a non-working condition.

> Fix TestTezPerfCliDriver
> 
>
> Key: HIVE-19147
> URL: https://issues.apache.org/jira/browse/HIVE-19147
> Project: Hive
>  Issue Type: Bug
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
>
> It seems the baked metastore dump is missing the CAT_NAME field added by a 
> recent metastore change.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19147) Fix TestTezPerfCliDriver

2018-04-10 Thread Zoltan Haindrich (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19147?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16432005#comment-16432005
 ] 

Zoltan Haindrich commented on HIVE-19147:
-

The artificial metastore data became invalid because a new column (CAT_NAME) was 
not handled.

I wanted to employ {{HiveSchemaTool}} to simply upgrade the schema to the most 
recent version (and put in place a probably longer-term fix for this source of 
problems); but it seems the dump contains only 2 tables... would a full database 
dump of a metastore schema be better... (or worse?)

> Fix TestTezPerfCliDriver
> 
>
> Key: HIVE-19147
> URL: https://issues.apache.org/jira/browse/HIVE-19147
> Project: Hive
>  Issue Type: Bug
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
>
> It seems the baked metastore dump is missing the CAT_NAME field added by a 
> recent metastore change.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19147) Fix PerfCliDrivers: Tpcds30T missed CAT_NAME change

2018-04-10 Thread Zoltan Haindrich (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19147?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zoltan Haindrich updated HIVE-19147:

Attachment: HIVE-19147.01.patch

> Fix PerfCliDrivers: Tpcds30T missed CAT_NAME change
> ---
>
> Key: HIVE-19147
> URL: https://issues.apache.org/jira/browse/HIVE-19147
> Project: Hive
>  Issue Type: Bug
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
> Attachments: HIVE-19147.01.patch
>
>
> It seems the baked metastore dump is missing the CAT_NAME field added by a 
> recent metastore change.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19147) Fix PerfCliDrivers: Tpcds30T missed CAT_NAME change

2018-04-10 Thread Zoltan Haindrich (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19147?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zoltan Haindrich updated HIVE-19147:

Summary: Fix PerfCliDrivers: Tpcds30T missed CAT_NAME change  (was: Fix 
TestTezPerfCliDriver)

> Fix PerfCliDrivers: Tpcds30T missed CAT_NAME change
> ---
>
> Key: HIVE-19147
> URL: https://issues.apache.org/jira/browse/HIVE-19147
> Project: Hive
>  Issue Type: Bug
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
> Attachments: HIVE-19147.01.patch
>
>
> it seems the baked metastore dump misses the CAT_NAME field added by some 
> recent metastore change



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19147) Fix PerfCliDrivers: Tpcds30T missed CAT_NAME change

2018-04-10 Thread Zoltan Haindrich (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19147?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16432014#comment-16432014
 ] 

Zoltan Haindrich commented on HIVE-19147:
-

I've moved this part out of QTestUtil; earlier I made a local experiment in 
which I used it from hive-jmh to performance-test compilation time; although 
that was only an experiment, I will probably continue with it later.

I've removed the log-and-go exception handling during the setup of the fake 
metastore, since that could also cause invalid perf results to appear.

I hoped that the reviewboard would pick up that I've moved the method from 
QTestUtil to the new file... unfortunately it did not...
but the RB is at: https://reviews.apache.org/r/66525/

> Fix PerfCliDrivers: Tpcds30T missed CAT_NAME change
> ---
>
> Key: HIVE-19147
> URL: https://issues.apache.org/jira/browse/HIVE-19147
> Project: Hive
>  Issue Type: Bug
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
> Attachments: HIVE-19147.01.patch
>
>
> it seems the baked metastore dump misses the CAT_NAME field added by some 
> recent metastore change



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19147) Fix PerfCliDrivers: Tpcds30T missed CAT_NAME change

2018-04-10 Thread Zoltan Haindrich (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19147?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zoltan Haindrich updated HIVE-19147:

Status: Patch Available  (was: Open)

> Fix PerfCliDrivers: Tpcds30T missed CAT_NAME change
> ---
>
> Key: HIVE-19147
> URL: https://issues.apache.org/jira/browse/HIVE-19147
> Project: Hive
>  Issue Type: Bug
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
> Attachments: HIVE-19147.01.patch
>
>
> it seems the baked metastore dump misses the CAT_NAME field added by some 
> recent metastore change



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18988) Support bootstrap replication of ACID tables

2018-04-10 Thread Sankar Hariappan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sankar Hariappan updated HIVE-18988:

Status: Open  (was: Patch Available)

> Support bootstrap replication of ACID tables
> 
>
> Key: HIVE-18988
> URL: https://issues.apache.org/jira/browse/HIVE-18988
> Project: Hive
>  Issue Type: Sub-task
>  Components: HiveServer2, repl
>Affects Versions: 3.0.0
>Reporter: Sankar Hariappan
>Assignee: Sankar Hariappan
>Priority: Major
>  Labels: ACID, DR, pull-request-available, replication
> Fix For: 3.1.0
>
> Attachments: HIVE-18988.01.patch, HIVE-18988.02.patch, 
> HIVE-18988.03.patch
>
>
> Bootstrapping of ACID tables needs special handling to replicate a stable 
> state of data.
>  - If the ACID feature is enabled, then perform the bootstrap dump for ACID 
> tables within a read txn.
>  -> Dump table/partition metadata.
>  -> Get the list of valid data files for a table using the same logic as a 
> read txn does.
>  -> Dump the latest ValidWriteIdList as per the current read txn.
>  - Find the valid last replication state such that it points to the event ID 
> of the open_txn event of the oldest on-going txn.
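
A hedged sketch of the user-facing flow this affects (database names and the dump 
path are made up; the snapshotting described above happens inside the dump, not in 
the commands themselves):

{code:sql}
-- Bootstrap dump of a database containing ACID tables: data files and the
-- ValidWriteIdList are expected to be captured as of a single read snapshot.
REPL DUMP acid_src;
REPL LOAD acid_tgt FROM '/tmp/repl/bootstrap_dump';
{code}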



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18988) Support bootstrap replication of ACID tables

2018-04-10 Thread Sankar Hariappan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sankar Hariappan updated HIVE-18988:

Attachment: HIVE-18988.03.patch

> Support bootstrap replication of ACID tables
> 
>
> Key: HIVE-18988
> URL: https://issues.apache.org/jira/browse/HIVE-18988
> Project: Hive
>  Issue Type: Sub-task
>  Components: HiveServer2, repl
>Affects Versions: 3.0.0
>Reporter: Sankar Hariappan
>Assignee: Sankar Hariappan
>Priority: Major
>  Labels: ACID, DR, pull-request-available, replication
> Fix For: 3.1.0
>
> Attachments: HIVE-18988.01.patch, HIVE-18988.02.patch, 
> HIVE-18988.03.patch
>
>
> Bootstrapping of ACID tables needs special handling to replicate a stable 
> state of data.
>  - If the ACID feature is enabled, then perform the bootstrap dump for ACID 
> tables within a read txn.
>  -> Dump table/partition metadata.
>  -> Get the list of valid data files for a table using the same logic as a 
> read txn does.
>  -> Dump the latest ValidWriteIdList as per the current read txn.
>  - Find the valid last replication state such that it points to the event ID 
> of the open_txn event of the oldest on-going txn.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18988) Support bootstrap replication of ACID tables

2018-04-10 Thread Sankar Hariappan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sankar Hariappan updated HIVE-18988:

Status: Patch Available  (was: Open)

Added 03.patch after rebasing with master.

> Support bootstrap replication of ACID tables
> 
>
> Key: HIVE-18988
> URL: https://issues.apache.org/jira/browse/HIVE-18988
> Project: Hive
>  Issue Type: Sub-task
>  Components: HiveServer2, repl
>Affects Versions: 3.0.0
>Reporter: Sankar Hariappan
>Assignee: Sankar Hariappan
>Priority: Major
>  Labels: ACID, DR, pull-request-available, replication
> Fix For: 3.1.0
>
> Attachments: HIVE-18988.01.patch, HIVE-18988.02.patch, 
> HIVE-18988.03.patch
>
>
> Bootstrapping of ACID tables needs special handling to replicate a stable 
> state of data.
>  - If the ACID feature is enabled, then perform the bootstrap dump for ACID 
> tables within a read txn.
>  -> Dump table/partition metadata.
>  -> Get the list of valid data files for a table using the same logic as a 
> read txn does.
>  -> Dump the latest ValidWriteIdList as per the current read txn.
>  - Find the valid last replication state such that it points to the event ID 
> of the open_txn event of the oldest on-going txn.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (HIVE-18946) Fix columnstats merge NPE

2018-04-10 Thread Laszlo Bodor (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18946?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16431841#comment-16431841
 ] 

Laszlo Bodor edited comment on HIVE-18946 at 4/10/18 10:53 AM:
---

[~ashutoshc] : i've already created HIVE-19131 for that. The logic was already 
bad in that function, so the compareTo fix could introduce other differences 
(q.outs, not sure).


was (Author: abstractdog):
[~ashutoshc] : i've already created HIVE-19131 for that. The logic was already 
bad in that function, so the fix could introduce other differences (q.outs, not 
sure).

> Fix columnstats merge NPE
> -
>
> Key: HIVE-18946
> URL: https://issues.apache.org/jira/browse/HIVE-18946
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Zoltan Haindrich
>Assignee: Laszlo Bodor
>Priority: Major
> Attachments: HIVE-18946.01.patch, HIVE-18946.02.patch
>
>
> analyzing an empty table may lead to an NPE when inserting into it...
> {code}
> 2018-03-13T06:54:22,503 ERROR [df3fb505-e0bc-4595-a874-b735dab8dff6 main] 
> metastore.RetryingHMSHandler: java.lang.NullPointerException
> at 
> org.apache.hadoop.hive.metastore.api.Decimal.compareTo(Decimal.java:318)
> at 
> org.apache.hadoop.hive.metastore.columnstats.merge.DecimalColumnStatsMerger.merge(DecimalColumnStatsMerger.java:35)
> at 
> org.apache.hadoop.hive.metastore.utils.MetaStoreUtils.mergeColStats(MetaStoreUtils.java:778)
> at 
> org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.set_aggr_stats_for(HiveMetaStore.java:6934)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at 
> org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:147)
> at 
> org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:108)
> at com.sun.proxy.$Proxy55.set_aggr_stats_for(Unknown Source)
> [...]
> {code}
> reproduce
> {code}
> set hive.stats.autogather=true;
> set hive.explain.user=true;
> drop table if exists testdeci2;
> create table testdeci2(
> id int,
> amount decimal(10,3),
> sales_tax decimal(10,3),
> item string)
> stored as orc location '/tmp/testdeci2'
> TBLPROPERTIES ("transactional"="false")
> ;
> analyze table testdeci2 compute statistics for columns;
> insert into table testdeci2 
> values(1,12.123,12345.123,'desk1'),(2,123.123,1234.123,'desk2');
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-17025) HPL/SQL: hplsql.conn.convert.hiveconn seems to default to false, contrary to docs

2018-04-10 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16432066#comment-16432066
 ] 

Hive QA commented on HIVE-17025:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  9s{color} 
| {color:red} 
/data/hiveptest/logs/PreCommit-HIVE-Build-10115/patches/PreCommit-HIVE-Build-10115.patch
 does not apply to master. Rebase required? Wrong Branch? See 
http://cwiki.apache.org/confluence/display/Hive/HowToContribute for help. 
{color} |
\\
\\
|| Subsystem || Report/Notes ||
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-10115/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> HPL/SQL: hplsql.conn.convert.hiveconn seems to default to false, contrary to 
> docs
> -
>
> Key: HIVE-17025
> URL: https://issues.apache.org/jira/browse/HIVE-17025
> Project: Hive
>  Issue Type: Bug
>  Components: hpl/sql
>Reporter: Carter Shanklin
>Assignee: Dmitry Tolpeko
>Priority: Major
> Fix For: 3.1.0
>
> Attachments: HIVE-17025.1.patch
>
>
> This bug is part of a series of issues and surprising behavior I encountered 
> writing a reporting script that would aggregate values and give rows 
> different classifications based on the aggregate. Addressing some or all 
> of these issues would make HPL/SQL more accessible to newcomers.
> Example from the docs is as follows:
> CREATE TABLE dept (
>   deptno NUMBER(2,0),
>   dname  NUMBER(14),
>   locVARCHAR2(13),
>   CONSTRAINT pk_dept PRIMARY KEY (deptno)
> );
> With this config:
> 
>   
> hplsql.conn.default
> hiveconn
>   
>   
> hplsql.conn.hiveconn
> org.apache.hive.jdbc.HiveDriver;jdbc:hive2://
>   
> 
> I get this error:
> Caused by: java.lang.RuntimeException: 
> org.apache.hadoop.hive.ql.parse.ParseException:line 2:9 cannot recognize 
> input near 'NUMBER' '(' '2' in column type
> With this config:
> 
>   
> hplsql.conn.default
> hiveconn
>   
>   
> hplsql.conn.hiveconn
> org.apache.hive.jdbc.HiveDriver;jdbc:hive2://
>   
>   
> hplsql.conn.convert.hiveconn
> true
>   
> 
> the example works.
> Version = 3.0.0-SNAPSHOT r71f52d8ad512904b3f2c4f04fe39a33f2834f1f2



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Work started] (HIVE-19130) NPE is thrown when REPL LOAD applied drop partition event.

2018-04-10 Thread Sankar Hariappan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HIVE-19130 started by Sankar Hariappan.
---
> NPE is thrown when REPL LOAD applied drop partition event.
> --
>
> Key: HIVE-19130
> URL: https://issues.apache.org/jira/browse/HIVE-19130
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2, repl
>Affects Versions: 3.0.0
>Reporter: Sankar Hariappan
>Assignee: Sankar Hariappan
>Priority: Major
>  Labels: DR, Replication
> Fix For: 3.1.0
>
>
> During incremental replication, if we split the events batch as follows, then 
> the REPL LOAD on second batch throws NPE.
> Batch-1: CREATE_TABLE(t1) -> ADD_PARTITION(t1.p1) -> DROP_PARTITION (t1.p1)
> Batch-2: DROP_TABLE(t1) ->  CREATE_TABLE(t1) -> ADD_PARTITION(t1.p1) -> 
> DROP_PARTITION (t1.p1)
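
For context, batches like the above are produced by limiting the number of events 
per incremental dump; a hedged sketch of the driving commands (database names, 
event ids, and dump paths are made up):

{code:sql}
-- Batch-1: CREATE_TABLE, ADD_PARTITION, DROP_PARTITION
REPL DUMP repl_src FROM 100 LIMIT 3;
REPL LOAD repl_tgt FROM '/tmp/repl/dump_batch1';

-- Batch-2: starts with DROP_TABLE followed by re-creating the same table;
-- this is the batch whose REPL LOAD hits the NPE described above.
REPL DUMP repl_src FROM 103 LIMIT 4;
REPL LOAD repl_tgt FROM '/tmp/repl/dump_batch2';
{code}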



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-12192) Hive should carry out timestamp computations in UTC

2018-04-10 Thread Jesus Camacho Rodriguez (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-12192?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jesus Camacho Rodriguez updated HIVE-12192:
---
Attachment: HIVE-12192.02.patch

> Hive should carry out timestamp computations in UTC
> ---
>
> Key: HIVE-12192
> URL: https://issues.apache.org/jira/browse/HIVE-12192
> Project: Hive
>  Issue Type: Sub-task
>  Components: Hive
>Reporter: Ryan Blue
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
>  Labels: timestamp
> Attachments: HIVE-12192.01.patch, HIVE-12192.02.patch, 
> HIVE-12192.patch
>
>
> Hive currently uses the "local" time of a java.sql.Timestamp to represent the 
> SQL data type TIMESTAMP WITHOUT TIME ZONE. The purpose is to be able to use 
> {{Timestamp#getYear()}} and similar methods to implement SQL functions like 
> {{year}}.
> When the SQL session's time zone is a DST zone, such as America/Los_Angeles 
> that alternates between PST and PDT, there are times that cannot be 
> represented because the effective zone skips them.
> {code}
> hive> select TIMESTAMP '2015-03-08 02:10:00.101';
> 2015-03-08 03:10:00.101
> {code}
> Using UTC instead of the SQL session time zone as the underlying zone for a 
> java.sql.Timestamp avoids this bug, while still returning correct values for 
> {{getYear}} etc. Using UTC as the convenience representation (timestamp 
> without time zone has no real zone) would make timestamp calculations more 
> consistent and avoid similar problems in the future.
> Notably, this would break the {{unix_timestamp}} UDF that specifies the 
> result is with respect to ["the default timezone and default 
> locale"|https://cwiki.apache.org/confluence/display/Hive/LanguageManual+UDF#LanguageManualUDF-DateFunctions].
>  That function would need to be updated to use the 
> {{System.getProperty("user.timezone")}} zone.
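
To make the unix_timestamp caveat in the last paragraph concrete, a hedged example 
(the returned value depends on the session's zone, so no specific number is shown):

{code:sql}
-- Today: the string is interpreted in the JVM's local time zone.
-- Under the proposal: the function would read the zone from user.timezone
-- explicitly, so its results stay tied to the "default timezone" contract.
SELECT unix_timestamp('2015-03-08 01:10:00');
{code}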



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HIVE-19150) Add rule to push in condition condition into a related disjunctive expression

2018-04-10 Thread Zoltan Haindrich (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19150?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zoltan Haindrich reassigned HIVE-19150:
---


> Add rule to push in condition condition into a related disjunctive expression
> -
>
> Key: HIVE-19150
> URL: https://issues.apache.org/jira/browse/HIVE-19150
> Project: Hive
>  Issue Type: Improvement
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
>
> simplify expressions like {a = 1 && (a=1 || a=2)} to {a=1}
> the conditions to apply the rule will be:
> * the AND condition contains a comparison (c) and an OR (o)
> * o and c reference only 1 variable
> see HIVE-19097 for more info
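
A hedged repro sketch of the simplification target (table and column names follow 
the repro in HIVE-19097 and are otherwise assumptions):

{code:sql}
-- Expectation: the redundant disjunction collapses, i.e.
--   b = 2 AND (b = 2 OR b = 3)   ==>   b = 2
-- which should also make the row-count estimate match the plain b = 2 case.
EXPLAIN SELECT count(*) FROM t8 WHERE b = 2 AND (b = 2 OR b = 3);
{code}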



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19097) related equals and in operators may cause inaccurate stats estimations

2018-04-10 Thread Zoltan Haindrich (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19097?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zoltan Haindrich updated HIVE-19097:

Status: Open  (was: Patch Available)

> related equals and in operators may cause inaccurate stats estimations
> --
>
> Key: HIVE-19097
> URL: https://issues.apache.org/jira/browse/HIVE-19097
> Project: Hive
>  Issue Type: Bug
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
> Attachments: HIVE-19097.01.patch, HIVE-19097.partial.patch
>
>
> tpcds#74 is optimized in a way that for date_dim the condition contains IN 
> and = for the same column
> {code:java}
> | Map Operator Tree: |
> | TableScan  |
> |   alias: date_dim  |
> |   filterExpr: (((d_year) IN (2001, 2002) and (d_year = 
> 2002) and d_date_sk is not null) or ((d_year) IN (2001, 2002) and (d_year = 
> 2001) and d_date_sk is not null)) (type: boolean) |
> |   Statistics: Num rows: 73049 Data size: 876588 Basic 
> stats: COMPLETE Column stats: COMPLETE |
> |   Filter Operator  |
> | predicate: ((d_year) IN (2001, 2002) and (d_year = 
> 2002) and d_date_sk is not null) (type: boolean) |
> | Statistics: Num rows: 4 Data size: 48 Basic stats: 
> COMPLETE Column stats: COMPLETE |
> {code}
> the "real" row count will be 365
> for separate {{IN}} and {{=}} the estimation is very good; but if both are 
> present it becomes (very) underestimated.
> {code:java}
> set hive.query.results.cache.enabled=false;
> drop table if exists t1;
> drop table if exists t8;
> create table t1 (a integer,b integer);
> create table t8 like t1;
> insert into t1 values (1,1),(2,2),(3,3),(4,4),(5,5);
> insert into t8
> select * from t1 union all select * from t1 union all select * from t1 union 
> all select * from t1 union all
> select * from t1 union all select * from t1 union all select * from t1 union 
> all select * from t1
> ;
> analyze table t1 compute statistics for columns;
> analyze table t8 compute statistics for columns;
> explain analyze select sum(a) from t8 where b in (2,3) group by b;
> explain analyze select sum(a) from t8 where b=2 group by b;
> explain analyze select sum(a) from t1 where b in (2,3) and b=2 group by b;
> explain analyze select sum(a) from t8 where b in (2,3) and b=2 group by b;
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19097) related equals and in operators may cause inaccurate stats estimations

2018-04-10 Thread Zoltan Haindrich (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19097?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zoltan Haindrich updated HIVE-19097:

Attachment: HIVE-19097.partial.patch

> related equals and in operators may cause inaccurate stats estimations
> --
>
> Key: HIVE-19097
> URL: https://issues.apache.org/jira/browse/HIVE-19097
> Project: Hive
>  Issue Type: Bug
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
> Attachments: HIVE-19097.01.patch, HIVE-19097.partial.patch
>
>
> tpcds#74 is optimized in a way that for date_dim the condition contains IN 
> and = for the same column
> {code:java}
> | Map Operator Tree: |
> | TableScan  |
> |   alias: date_dim  |
> |   filterExpr: (((d_year) IN (2001, 2002) and (d_year = 
> 2002) and d_date_sk is not null) or ((d_year) IN (2001, 2002) and (d_year = 
> 2001) and d_date_sk is not null)) (type: boolean) |
> |   Statistics: Num rows: 73049 Data size: 876588 Basic 
> stats: COMPLETE Column stats: COMPLETE |
> |   Filter Operator  |
> | predicate: ((d_year) IN (2001, 2002) and (d_year = 
> 2002) and d_date_sk is not null) (type: boolean) |
> | Statistics: Num rows: 4 Data size: 48 Basic stats: 
> COMPLETE Column stats: COMPLETE |
> {code}
> the "real" row count will be 365
> for separate {{IN}} and {{=}} the estimation is very good; but if both are 
> present it becomes (very) underestimated.
> {code:java}
> set hive.query.results.cache.enabled=false;
> drop table if exists t1;
> drop table if exists t8;
> create table t1 (a integer,b integer);
> create table t8 like t1;
> insert into t1 values (1,1),(2,2),(3,3),(4,4),(5,5);
> insert into t8
> select * from t1 union all select * from t1 union all select * from t1 union 
> all select * from t1 union all
> select * from t1 union all select * from t1 union all select * from t1 union 
> all select * from t1
> ;
> analyze table t1 compute statistics for columns;
> analyze table t8 compute statistics for columns;
> explain analyze select sum(a) from t8 where b in (2,3) group by b;
> explain analyze select sum(a) from t8 where b=2 group by b;
> explain analyze select sum(a) from t1 where b in (2,3) and b=2 group by b;
> explain analyze select sum(a) from t8 where b in (2,3) and b=2 group by b;
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Work stopped] (HIVE-19130) NPE is thrown when REPL LOAD applied drop partition event.

2018-04-10 Thread Sankar Hariappan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HIVE-19130 stopped by Sankar Hariappan.
---
> NPE is thrown when REPL LOAD applied drop partition event.
> --
>
> Key: HIVE-19130
> URL: https://issues.apache.org/jira/browse/HIVE-19130
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2, repl
>Affects Versions: 3.0.0
>Reporter: Sankar Hariappan
>Assignee: Sankar Hariappan
>Priority: Major
>  Labels: DR, Replication
> Fix For: 3.1.0
>
>
> During incremental replication, if we split the events batch as follows, then 
> the REPL LOAD on second batch throws NPE.
> Batch-1: CREATE_TABLE(t1) -> ADD_PARTITION(t1.p1) -> DROP_PARTITION (t1.p1)
> Batch-2: DROP_TABLE(t1) ->  CREATE_TABLE(t1) -> ADD_PARTITION(t1.p1) -> 
> DROP_PARTITION (t1.p1)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19130) NPE is thrown when REPL LOAD applied drop partition event.

2018-04-10 Thread Sankar Hariappan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sankar Hariappan updated HIVE-19130:

Attachment: HIVE-19130.01.patch

> NPE is thrown when REPL LOAD applied drop partition event.
> --
>
> Key: HIVE-19130
> URL: https://issues.apache.org/jira/browse/HIVE-19130
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2, repl
>Affects Versions: 3.0.0
>Reporter: Sankar Hariappan
>Assignee: Sankar Hariappan
>Priority: Major
>  Labels: DR, Replication
> Fix For: 3.1.0
>
> Attachments: HIVE-19130.01.patch
>
>
> During incremental replication, if we split the events batch as follows, then 
> the REPL LOAD on second batch throws NPE.
> Batch-1: CREATE_TABLE(t1) -> ADD_PARTITION(t1.p1) -> DROP_PARTITION (t1.p1)
> Batch-2: DROP_TABLE(t1) ->  CREATE_TABLE(t1) -> ADD_PARTITION(t1.p1) -> 
> DROP_PARTITION (t1.p1)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19130) NPE is thrown when REPL LOAD applied drop partition event.

2018-04-10 Thread Sankar Hariappan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sankar Hariappan updated HIVE-19130:

Status: Patch Available  (was: Open)

Attached 01.patch with the bug fixed as follows:
 * Event type is passed to ImportSemanticAnalyzer.
 * If event type is create_table and if table exists, then we will add Drop 
table task before creating the table.

Request [~thejas] to please review it!

> NPE is thrown when REPL LOAD applied drop partition event.
> --
>
> Key: HIVE-19130
> URL: https://issues.apache.org/jira/browse/HIVE-19130
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2, repl
>Affects Versions: 3.0.0
>Reporter: Sankar Hariappan
>Assignee: Sankar Hariappan
>Priority: Major
>  Labels: DR, Replication
> Fix For: 3.1.0
>
> Attachments: HIVE-19130.01.patch
>
>
> During incremental replication, if we split the events batch as follows, then 
> the REPL LOAD on second batch throws NPE.
> Batch-1: CREATE_TABLE(t1) -> ADD_PARTITION(t1.p1) -> DROP_PARTITION (t1.p1)
> Batch-2: DROP_TABLE(t1) ->  CREATE_TABLE(t1) -> ADD_PARTITION(t1.p1) -> 
> DROP_PARTITION (t1.p1)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19009) Retain and use runtime statistics thru out a session

2018-04-10 Thread Zoltan Haindrich (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19009?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zoltan Haindrich updated HIVE-19009:

Attachment: HIVE-19009.02.patch

> Retain and use runtime statistics thru out a session
> 
>
> Key: HIVE-19009
> URL: https://issues.apache.org/jira/browse/HIVE-19009
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
> Attachments: HIVE-19009.01.patch, HIVE-19009.02.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (HIVE-19130) NPE is thrown when REPL LOAD applied drop partition event.

2018-04-10 Thread Sankar Hariappan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16432174#comment-16432174
 ] 

Sankar Hariappan edited comment on HIVE-19130 at 4/10/18 12:23 PM:
---

Attached 01.patch with the big fixed as
 * Event type is passed to ImportSemanticAnalyzer.
 * If event type is create_table and if table exists, then we will add Drop 
table task before creating the table.

Request [~thejas], [~daijy] to please review it!


was (Author: sankarh):
Attached 01.patch with the big fixed as
 * Event type is passed to ImportSemanticAnalyzer.
 * If event type is create_table and if table exists, then we will add Drop 
table task before creating the table.

Request [~thejas] to please review it!

> NPE is thrown when REPL LOAD applied drop partition event.
> --
>
> Key: HIVE-19130
> URL: https://issues.apache.org/jira/browse/HIVE-19130
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2, repl
>Affects Versions: 3.0.0
>Reporter: Sankar Hariappan
>Assignee: Sankar Hariappan
>Priority: Major
>  Labels: DR, Replication
> Fix For: 3.1.0
>
> Attachments: HIVE-19130.01.patch
>
>
> During incremental replication, if we split the events batch as follows, then 
> the REPL LOAD on second batch throws NPE.
> Batch-1: CREATE_TABLE(t1) -> ADD_PARTITION(t1.p1) -> DROP_PARTITION (t1.p1)
> Batch-2: DROP_TABLE(t1) ->  CREATE_TABLE(t1) -> ADD_PARTITION(t1.p1) -> 
> DROP_PARTITION (t1.p1)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18831) Differentiate errors that are thrown by Spark tasks

2018-04-10 Thread Rui Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16432179#comment-16432179
 ] 

Rui Li commented on HIVE-18831:
---

+1 pending test

> Differentiate errors that are thrown by Spark tasks
> ---
>
> Key: HIVE-18831
> URL: https://issues.apache.org/jira/browse/HIVE-18831
> Project: Hive
>  Issue Type: Sub-task
>  Components: Spark
>Reporter: Sahil Takiar
>Assignee: Sahil Takiar
>Priority: Major
> Attachments: HIVE-18831.1.patch, HIVE-18831.2.patch, 
> HIVE-18831.3.patch, HIVE-18831.4.patch, HIVE-18831.6.patch, 
> HIVE-18831.7.patch, HIVE-18831.8.WIP.patch, HIVE-18831.9.patch, 
> HIVE-18831.90.patch, HIVE-18831.91.patch
>
>
> We propagate exceptions from Spark task failures to the client well, but we 
> don't differentiate between errors from HS2 / RSC vs. errors thrown by 
> individual tasks.
> The main motivation is that when the client sees a propagated Spark exception it is 
> difficult to know what part of the execution threw the exception.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (HIVE-19130) NPE is thrown when REPL LOAD applied drop partition event.

2018-04-10 Thread Sankar Hariappan (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16432174#comment-16432174
 ] 

Sankar Hariappan edited comment on HIVE-19130 at 4/10/18 12:24 PM:
---

Attached 01.patch with the bug fixed as
 * Event type is passed to ImportSemanticAnalyzer.
 * If event type is create_table and if table exists, then we will add Drop 
table task before creating the table.

Request [~thejas], [~daijy] to please review it!


was (Author: sankarh):
Attached 01.patch with the big fixed as
 * Event type is passed to ImportSemanticAnalyzer.
 * If event type is create_table and if table exists, then we will add Drop 
table task before creating the table.

Request [~thejas], [~daijy] to please review it!

> NPE is thrown when REPL LOAD applied drop partition event.
> --
>
> Key: HIVE-19130
> URL: https://issues.apache.org/jira/browse/HIVE-19130
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2, repl
>Affects Versions: 3.0.0
>Reporter: Sankar Hariappan
>Assignee: Sankar Hariappan
>Priority: Major
>  Labels: DR, Replication
> Fix For: 3.1.0
>
> Attachments: HIVE-19130.01.patch
>
>
> During incremental replication, if we split the events batch as follows, then 
> the REPL LOAD on second batch throws NPE.
> Batch-1: CREATE_TABLE(t1) -> ADD_PARTITION(t1.p1) -> DROP_PARTITION (t1.p1)
> Batch-2: DROP_TABLE(t1) ->  CREATE_TABLE(t1) -> ADD_PARTITION(t1.p1) -> 
> DROP_PARTITION (t1.p1)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19009) Retain and use runtime statistics thru out a session

2018-04-10 Thread Zoltan Haindrich (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16432182#comment-16432182
 ] 

Zoltan Haindrich commented on HIVE-19009:
-

sure, it seems I forgot to add the link to the reviewboard earlier: 
https://reviews.apache.org/r/66402/

* reoptimization also works if the query is vectorized
* collection rules were too strict; there are cases where the same subtree is 
calculated multiple times within the same query
* added session-level storage

> Retain and use runtime statistics thru out a session
> 
>
> Key: HIVE-19009
> URL: https://issues.apache.org/jira/browse/HIVE-19009
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
> Attachments: HIVE-19009.01.patch, HIVE-19009.02.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-17025) HPL/SQL: hplsql.conn.convert.hiveconn seems to default to false, contrary to docs

2018-04-10 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16432188#comment-16432188
 ] 

Hive QA commented on HIVE-17025:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12884274/HIVE-17025.1.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 126 failed/errored test(s), 14041 tests 
executed
*Failed tests:*
{noformat}
TestBeeLineDriver - did not produce a TEST-*.xml file (likely timed out) 
(batchId=253)
TestDbNotificationListener - did not produce a TEST-*.xml file (likely timed 
out) (batchId=247)
TestDummy - did not produce a TEST-*.xml file (likely timed out) (batchId=253)
TestHCatHiveCompatibility - did not produce a TEST-*.xml file (likely timed 
out) (batchId=247)
TestMiniDruidCliDriver - did not produce a TEST-*.xml file (likely timed out) 
(batchId=253)
TestMiniDruidKafkaCliDriver - did not produce a TEST-*.xml file (likely timed 
out) (batchId=253)
TestNonCatCallsWithCatalog - did not produce a TEST-*.xml file (likely timed 
out) (batchId=217)
TestSequenceFileReadWrite - did not produce a TEST-*.xml file (likely timed 
out) (batchId=247)
TestTezPerfCliDriver - did not produce a TEST-*.xml file (likely timed out) 
(batchId=253)
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[acid_table_stats] 
(batchId=54)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucket_map_join_tez_empty]
 (batchId=159)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[default_constraint]
 (batchId=163)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[groupby_groupingset_bug]
 (batchId=174)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata]
 (batchId=169)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[mergejoin] 
(batchId=168)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] 
(batchId=163)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[tez_smb_1] 
(batchId=171)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[tez_smb_main]
 (batchId=160)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[update_access_time_non_current_db]
 (batchId=172)
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vectorized_dynamic_semijoin_reduction]
 (batchId=155)
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver[explainanalyze_5] 
(batchId=105)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.org.apache.hadoop.hive.cli.TestNegativeCliDriver
 (batchId=95)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.org.apache.hadoop.hive.cli.TestNegativeCliDriver
 (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[alter_notnull_constraint_violation]
 (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[default_constraint_invalid_default_value_type]
 (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[insert_into_acid_notnull]
 (batchId=95)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[insert_into_notnull_constraint]
 (batchId=95)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[insert_multi_into_notnull]
 (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[insert_overwrite_notnull_constraint]
 (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[parquet_alter_part_table_drop_columns]
 (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[smb_bucketmapjoin]
 (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[smb_mapjoin_14] 
(batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[sortmerge_mapjoin_mismatch_1]
 (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[stats_aggregator_error_1]
 (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[stats_aggregator_error_2]
 (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[stats_publisher_error_1]
 (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[stats_publisher_error_2]
 (batchId=95)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[subquery_corr_in_agg]
 (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[subquery_in_implicit_gby]
 (batchId=95)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[subquery_notin_implicit_gby]
 (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[truncate_bucketed_column]
 (batchId=96)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[truncate_column_list_bucketing]
 (batchId=95)
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[truncate_column_seqfile]
 (batchId=96)
org.apache.hadoop.hive.cl

[jira] [Commented] (HIVE-14246) Tez: disable auto-reducer parallelism when CUSTOM_EDGE is in place

2018-04-10 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14246?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16432192#comment-16432192
 ] 

Hive QA commented on HIVE-14246:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12818108/HIVE-14246.1.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/10116/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/10116/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-10116/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hiveptest/working/scratch/source-prep.sh' failed with exit 
status 1 and output '+ date '+%Y-%m-%d %T.%3N'
2018-04-10 12:33:58.931
+ [[ -n /usr/lib/jvm/java-8-openjdk-amd64 ]]
+ export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ export 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'MAVEN_OPTS=-Xmx1g '
+ MAVEN_OPTS='-Xmx1g '
+ cd /data/hiveptest/working/
+ tee /data/hiveptest/logs/PreCommit-HIVE-Build-10116/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ git = \s\v\n ]]
+ [[ git = \g\i\t ]]
+ [[ -z master ]]
+ [[ -d apache-github-source-source ]]
+ [[ ! -d apache-github-source-source/.git ]]
+ [[ ! -d apache-github-source-source ]]
+ date '+%Y-%m-%d %T.%3N'
2018-04-10 12:33:58.934
+ cd apache-github-source-source
+ git fetch origin
+ git reset --hard HEAD
HEAD is now at be42009 HIVE-18839: Implement incremental rebuild for 
materialized views (only insert operations in source tables) (Jesus Camacho 
Rodriguez, reviewed by Ashutosh Chauhan)
+ git clean -f -d
+ git checkout master
Already on 'master'
Your branch is up-to-date with 'origin/master'.
+ git reset --hard origin/master
HEAD is now at be42009 HIVE-18839: Implement incremental rebuild for 
materialized views (only insert operations in source tables) (Jesus Camacho 
Rodriguez, reviewed by Ashutosh Chauhan)
+ git merge --ff-only origin/master
Already up-to-date.
+ date '+%Y-%m-%d %T.%3N'
2018-04-10 12:34:02.299
+ rm -rf ../yetus_PreCommit-HIVE-Build-10116
+ mkdir ../yetus_PreCommit-HIVE-Build-10116
+ git gc
+ cp -R . ../yetus_PreCommit-HIVE-Build-10116
+ mkdir /data/hiveptest/logs/PreCommit-HIVE-Build-10116/yetus
+ patchCommandPath=/data/hiveptest/working/scratch/smart-apply-patch.sh
+ patchFilePath=/data/hiveptest/working/scratch/build.patch
+ [[ -f /data/hiveptest/working/scratch/build.patch ]]
+ chmod +x /data/hiveptest/working/scratch/smart-apply-patch.sh
+ /data/hiveptest/working/scratch/smart-apply-patch.sh 
/data/hiveptest/working/scratch/build.patch
error: patch failed: 
ql/src/java/org/apache/hadoop/hive/ql/parse/GenTezUtils.java:140
Falling back to three-way merge...
Applied patch to 'ql/src/java/org/apache/hadoop/hive/ql/parse/GenTezUtils.java' 
with conflicts.
Going to apply patch with: git apply -p0
/data/hiveptest/working/scratch/build.patch:10: trailing whitespace.
if (reduceWork.isAutoReduceParallelism() 
error: patch failed: 
ql/src/java/org/apache/hadoop/hive/ql/parse/GenTezUtils.java:140
Falling back to three-way merge...
Applied patch to 'ql/src/java/org/apache/hadoop/hive/ql/parse/GenTezUtils.java' 
with conflicts.
U ql/src/java/org/apache/hadoop/hive/ql/parse/GenTezUtils.java
warning: 1 line adds whitespace errors.
+ exit 1
'
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12818108 - PreCommit-HIVE-Build

> Tez: disable auto-reducer parallelism when CUSTOM_EDGE is in place
> --
>
> Key: HIVE-14246
> URL: https://issues.apache.org/jira/browse/HIVE-14246
> Project: Hive
>  Issue Type: Bug
>  Components: Tez
>Affects Versions: 2.2.0
>Reporter: Gopal V
>Assignee: Gopal V
>Priority: Minor
> Fix For: 3.1.0
>
> Attachments: HIVE-14246.1.patch
>
>
> The CUSTOM_SIMPLE_EDGE impl has differences between the size constraints of 
> either edge which cannot be represented by the ShuffleVertexManager presently.
> Reducing the width based on the hashtable build side vs the streaming probe 
> side has different consequences since there is no order of runtime between 
> them.
> Until the two parent vertices of the shuffle hash-join are related, this 
> feature causes massive inconsistency of performance across runs.
> For inner & semi joins, the hashtable side should hav

[jira] [Commented] (HIVE-16674) Hive metastore JVM dumps core

2018-04-10 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-16674?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16432195#comment-16432195
 ] 

Hive QA commented on HIVE-16674:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12869451/HIVE-16674.1.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/10117/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/10117/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-10117/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hiveptest/working/scratch/source-prep.sh' failed with exit 
status 1 and output '+ date '+%Y-%m-%d %T.%3N'
2018-04-10 12:36:57.794
+ [[ -n /usr/lib/jvm/java-8-openjdk-amd64 ]]
+ export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ export 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'MAVEN_OPTS=-Xmx1g '
+ MAVEN_OPTS='-Xmx1g '
+ cd /data/hiveptest/working/
+ tee /data/hiveptest/logs/PreCommit-HIVE-Build-10117/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ git = \s\v\n ]]
+ [[ git = \g\i\t ]]
+ [[ -z master ]]
+ [[ -d apache-github-source-source ]]
+ [[ ! -d apache-github-source-source/.git ]]
+ [[ ! -d apache-github-source-source ]]
+ date '+%Y-%m-%d %T.%3N'
2018-04-10 12:36:57.797
+ cd apache-github-source-source
+ git fetch origin
+ git reset --hard HEAD
HEAD is now at be42009 HIVE-18839: Implement incremental rebuild for 
materialized views (only insert operations in source tables) (Jesus Camacho 
Rodriguez, reviewed by Ashutosh Chauhan)
+ git clean -f -d
+ git checkout master
Already on 'master'
Your branch is up-to-date with 'origin/master'.
+ git reset --hard origin/master
HEAD is now at be42009 HIVE-18839: Implement incremental rebuild for 
materialized views (only insert operations in source tables) (Jesus Camacho 
Rodriguez, reviewed by Ashutosh Chauhan)
+ git merge --ff-only origin/master
Already up-to-date.
+ date '+%Y-%m-%d %T.%3N'
2018-04-10 12:36:58.310
+ rm -rf ../yetus_PreCommit-HIVE-Build-10117
+ mkdir ../yetus_PreCommit-HIVE-Build-10117
+ git gc
+ cp -R . ../yetus_PreCommit-HIVE-Build-10117
+ mkdir /data/hiveptest/logs/PreCommit-HIVE-Build-10117/yetus
+ patchCommandPath=/data/hiveptest/working/scratch/smart-apply-patch.sh
+ patchFilePath=/data/hiveptest/working/scratch/build.patch
+ [[ -f /data/hiveptest/working/scratch/build.patch ]]
+ chmod +x /data/hiveptest/working/scratch/smart-apply-patch.sh
+ /data/hiveptest/working/scratch/smart-apply-patch.sh 
/data/hiveptest/working/scratch/build.patch
error: 
a/metastore/src/java/org/apache/hadoop/hive/metastore/MetaStoreDirectSql.java: 
does not exist in index
error: a/metastore/src/java/org/apache/hadoop/hive/metastore/ObjectStore.java: 
does not exist in index
error: 
metastore/src/java/org/apache/hadoop/hive/metastore/MetaStoreDirectSql.java: 
does not exist in index
error: metastore/src/java/org/apache/hadoop/hive/metastore/ObjectStore.java: 
does not exist in index
error: src/java/org/apache/hadoop/hive/metastore/MetaStoreDirectSql.java: does 
not exist in index
error: src/java/org/apache/hadoop/hive/metastore/ObjectStore.java: does not 
exist in index
The patch does not appear to apply with p0, p1, or p2
+ exit 1
'
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12869451 - PreCommit-HIVE-Build

> Hive metastore JVM dumps core
> -
>
> Key: HIVE-16674
> URL: https://issues.apache.org/jira/browse/HIVE-16674
> Project: Hive
>  Issue Type: Bug
>Affects Versions: 1.2.1
> Environment: Hive-1.2.1
> Kerberos enabled cluster
>Reporter: Vlad Gudikov
>Assignee: Vlad Gudikov
>Priority: Blocker
> Fix For: 1.2.1, 3.1.0
>
> Attachments: HIVE-16674.1.patch, HIVE-16674.patch
>
>
> While trying to run a Hive query on 24 partitions of an external 
> table with a large number of partitions (4K+), I get an error
> {code}
>  - org.apache.thrift.transport.TSaslTransport$SaslParticipant.wrap(byte[], 
> int, int) @bci=27, line=568 (Compiled frame)
>  - org.apache.thrift.transport.TSaslTransport.flush() @bci=52, line=492 
> (Compiled frame)
>  - org.apache.thrift.transport.TSaslServerTransport.flush() @bci=1, line=41 
> (Compiled frame)
>  - org.apache.thrift.ProcessFunction.process(int, 
> org.apache.thrift.protocol.TPro

[jira] [Updated] (HIVE-19104) When test MetaStore is started with retry the instances should be independent

2018-04-10 Thread Peter Vary (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19104?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Vary updated HIVE-19104:
--
Attachment: HIVE-19104.3.patch

> When test MetaStore is started with retry the instances should be independent
> -
>
> Key: HIVE-19104
> URL: https://issues.apache.org/jira/browse/HIVE-19104
> Project: Hive
>  Issue Type: Improvement
>Reporter: Peter Vary
>Assignee: Peter Vary
>Priority: Major
> Attachments: HIVE-19104.2.patch, HIVE-19104.3.patch, HIVE-19104.patch
>
>
> When multiple MetaStore instances are started with 
> {{MetaStoreTestUtils.startMetaStoreWithRetry}}, they currently use the same 
> JDBC URL and warehouse directory. This can cause problems in the tests



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-13697) ListBucketing feature does not support uppercase string.

2018-04-10 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16432216#comment-16432216
 ] 

Hive QA commented on HIVE-13697:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
1s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
23s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
17s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
45s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
2s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
6s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 16m 54s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-10118/dev-support/hive-personality.sh
 |
| git revision | master / be42009 |
| Default Java | 1.8.0_111 |
| modules | C: ql U: ql |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-10118/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> ListBucketing feature does not support uppercase string.
> 
>
> Key: HIVE-13697
> URL: https://issues.apache.org/jira/browse/HIVE-13697
> Project: Hive
>  Issue Type: Bug
>  Components: Database/Schema
>Affects Versions: 1.2.1
> Environment: 1.2.1
>Reporter: Hao Zhu
>Assignee: Oleksiy Sayankin
>Priority: Critical
> Fix For: 3.1.0
>
> Attachments: HIVE-13697.1.patch
>
>
> This is the feature:
> https://cwiki.apache.org/confluence/display/Hive/ListBucketing
> 1. Good example:
> {code}
> CREATE TABLE testskew (id INT, a STRING)
> SKEWED BY (a) ON ('abc', 'xyz') STORED AS DIRECTORIES;
> set hive.mapred.supports.subdirectories=true;
> set mapred.input.dir.recursive=true;
>  INSERT OVERWRITE TABLE testskew 
>  SELECT 123,'abc' FROM dual
>  union all
>  SELECT 123,'xyz' FROM dual
>  union all
>  SELECT 123,'others' FROM dual;
> {code}
> {code}
> # hadoop fs -ls /user/hive/warehouse/testskew
> Found 3 items
> drwxrwxrwx   - mapr mapr  1 2016-05-05 14:56
> /user/hive/warehouse/testskew/HIVE_DEFAULT_LIST_BUCKETING_DIR_NAME
> drwxrwxrwx   - mapr mapr  1 2016-05-05 14:56
> /user/hive/warehouse/testskew/a=abc
> drwxrwxrwx   - mapr mapr  1 2016-05-05 14:56
> /user/hive/warehouse/testskew/a=xyz
> {code}
> This is good, because both "abc" and "xyz" directories got created.
> 2. Bad example -- This is the issue
> {code}
> CREATE TABLE testskew2 (id INT, a STRING)
> SKEWED BY (a) ON ('aus', 'US') STORED AS DIRECTORIES;
> set hive.mapred.supports.subdirectories=true;
> set mapred.input.dir.recursive=true;
>  INSERT OVERWRITE TABLE testskew2 
>  SELECT 123, 'aus' FROM dual
>  union all
>  SELECT 123, 'US' FROM dual
>  union all
>  SELECT 123, 'others' FRO

[jira] [Commented] (HIVE-17502) Reuse of default session should not throw an exception in LLAP w/ Tez

2018-04-10 Thread Thai Bui (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-17502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16432246#comment-16432246
 ] 

Thai Bui commented on HIVE-17502:
-

Hi [~vgarg], I would definitely want to get this patch onto 3.0.0 if possible. 
At Bazaarvoice, we currently maintain a separate branch to apply this patch, 
and pushing it upstream would remove that effort. We have also been testing 
3.0 in production since early this year (with Hadoop 3.0, but we will soon 
upgrade to 3.1). If you need help testing the branch, we are at the perfect 
stage for that.

Please let me know what I need to do to get this approved & merged to 3.0.0. 
Thanks!

cc: [~thejas]

> Reuse of default session should not throw an exception in LLAP w/ Tez
> -
>
> Key: HIVE-17502
> URL: https://issues.apache.org/jira/browse/HIVE-17502
> Project: Hive
>  Issue Type: Bug
>  Components: llap, Tez
>Affects Versions: 2.1.1, 2.2.0
> Environment: HDP 2.6.1.0-129, Hue 4
>Reporter: Thai Bui
>Assignee: Thai Bui
>Priority: Major
> Fix For: 3.1.0
>
> Attachments: HIVE-17502.patch
>
>
> Hive2 w/ LLAP on Tez doesn't allow a currently used, default session to be 
> skipped mostly because of this line 
> https://github.com/apache/hive/blob/master/ql/src/java/org/apache/hadoop/hive/ql/exec/tez/TezSessionPoolManager.java#L365.
> However, some clients such as Hue 4 allow multiple sessions to be used per 
> user. Under this configuration, a Thrift client will send a request to either 
> reuse or open a new session. The reuse request could include the session id 
> of a currently used snippet being executed in Hue, which causes HS2 to throw 
> an exception:
> {noformat}
> 2017-09-10T17:51:36,548 INFO  [Thread-89]: tez.TezSessionPoolManager 
> (TezSessionPoolManager.java:canWorkWithSameSession(512)) - The current user: 
> hive, session user: hive
> 2017-09-10T17:51:36,549 ERROR [Thread-89]: exec.Task 
> (TezTask.java:execute(232)) - Failed to execute tez graph.
> org.apache.hadoop.hive.ql.metadata.HiveException: The pool session 
> sessionId=5b61a578-6336-41c5-860d-9838166f97fe, queueName=llap, user=hive, 
> doAs=false, isOpen=true, isDefault=true, expires in 591015330ms should have 
> been returned to the pool
>   at 
> org.apache.hadoop.hive.ql.exec.tez.TezSessionPoolManager.canWorkWithSameSession(TezSessionPoolManager.java:534)
>  ~[hive-exec-2.1.0.2.6.1.0-129.jar:2.1.0.2.6.1.0-129]
>   at 
> org.apache.hadoop.hive.ql.exec.tez.TezSessionPoolManager.getSession(TezSessionPoolManager.java:544)
>  ~[hive-exec-2.1.0.2.6.1.0-129.jar:2.1.0.2.6.1.0-129]
>   at org.apache.hadoop.hive.ql.exec.tez.TezTask.execute(TezTask.java:147) 
> [hive-exec-2.1.0.2.6.1.0-129.jar:2.1.0.2.6.1.0-129]
>   at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:197) 
> [hive-exec-2.1.0.2.6.1.0-129.jar:2.1.0.2.6.1.0-129]
>   at 
> org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:100) 
> [hive-exec-2.1.0.2.6.1.0-129.jar:2.1.0.2.6.1.0-129]
>   at org.apache.hadoop.hive.ql.exec.TaskRunner.run(TaskRunner.java:79) 
> [hive-exec-2.1.0.2.6.1.0-129.jar:2.1.0.2.6.1.0-129]
> {noformat}
> Note that every query is issued as a single 'hive' user to share the LLAP 
> daemon pool, and a pre-determined number of AMs is initialized at setup 
> time. Thus, HS2 should allow new sessions from a Thrift client to be used out 
> of the pool, or an existing session to be skipped and an unused session from 
> the pool to be returned. The logic to throw an exception in 
> `canWorkWithSameSession` doesn't make sense to me.
> I have a solution to fix this issue in my local branch at 
> https://github.com/thaibui/hive/commit/078a521b9d0906fe6c0323b63e567f6eee2f3a70.
>  When applied, the log will become like so
> {noformat}
> 2017-09-10T09:15:33,578 INFO  [Thread-239]: tez.TezSessionPoolManager 
> (TezSessionPoolManager.java:canWorkWithSameSession(533)) - Skipping default 
> session sessionId=6638b1da-0f8a-405e-85f0-9586f484e6de, queueName=llap, 
> user=hive, doAs=false, isOpen=true, isDefault=true, expires in 591868732ms 
> since it is being used.
> {noformat}
> A test case is provided in my branch to demonstrate how it works. If possible 
> I would like this patch to be applied to version 2.1, 2.2 and master. Since 
> we are using 2.1 LLAP in production with Hue 4, this patch is critical to our 
> success.
> Alternatively, if this patch is too broad in scope, I propose adding an 
> option to allow "skipping of currently used default sessions". With this new 
> option default to "false", existing behavior won't change unless the option 
> is turned on.
> I will prepare an official patch if this change to master and/or the other 
> branches is acceptable. I'm not an 

[jira] [Updated] (HIVE-19130) NPE is thrown when REPL LOAD applied drop partition event.

2018-04-10 Thread ASF GitHub Bot (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HIVE-19130:
--
Labels: DR Replication pull-request-available  (was: DR Replication)

> NPE is thrown when REPL LOAD applied drop partition event.
> --
>
> Key: HIVE-19130
> URL: https://issues.apache.org/jira/browse/HIVE-19130
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2, repl
>Affects Versions: 3.0.0
>Reporter: Sankar Hariappan
>Assignee: Sankar Hariappan
>Priority: Major
>  Labels: DR, Replication, pull-request-available
> Fix For: 3.1.0
>
> Attachments: HIVE-19130.01.patch
>
>
> During incremental replication, if we split the events batch as follows, then 
> the REPL LOAD on the second batch throws an NPE.
> Batch-1: CREATE_TABLE(t1) -> ADD_PARTITION(t1.p1) -> DROP_PARTITION (t1.p1)
> Batch-2: DROP_TABLE(t1) ->  CREATE_TABLE(t1) -> ADD_PARTITION(t1.p1) -> 
> DROP_PARTITION (t1.p1)
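
A minimal illustrative sketch of the no-op guard this scenario calls for (not the actual Hive replication code; the plain map below stands in for the target metastore state):

{code:java}
import java.util.List;
import java.util.Map;

// Sketch only: models "apply a DROP_PARTITION event" against a map of
// table name -> partition specs, instead of the real replication handler.
public class DropPartitionNoopSketch {

  /** Drops a partition if the table still exists; otherwise treats the event as a no-op. */
  public static void applyDropPartition(Map<String, List<String>> tablesToPartitions,
                                        String qualifiedTableName,
                                        String partitionSpec) {
    List<String> partitions = tablesToPartitions.get(qualifiedTableName);
    if (partitions == null) {
      // The table was dropped (and possibly re-created) by an earlier event in
      // this batch: nothing to drop here, and no NPE from a null table lookup.
      return;
    }
    partitions.remove(partitionSpec);
  }
}
{code}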



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19130) NPE is thrown when REPL LOAD applied drop partition event.

2018-04-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16432251#comment-16432251
 ] 

ASF GitHub Bot commented on HIVE-19130:
---

GitHub user sankarh opened a pull request:

https://github.com/apache/hive/pull/332

HIVE-19130: NPE is thrown when REPL LOAD applied drop partition event.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/sankarh/hive HIVE-19130

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/hive/pull/332.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #332






> NPE is thrown when REPL LOAD applied drop partition event.
> --
>
> Key: HIVE-19130
> URL: https://issues.apache.org/jira/browse/HIVE-19130
> Project: Hive
>  Issue Type: Bug
>  Components: HiveServer2, repl
>Affects Versions: 3.0.0
>Reporter: Sankar Hariappan
>Assignee: Sankar Hariappan
>Priority: Major
>  Labels: DR, Replication, pull-request-available
> Fix For: 3.1.0
>
> Attachments: HIVE-19130.01.patch
>
>
> During incremental replication, if we split the events batch as follows, then 
> the REPL LOAD on the second batch throws an NPE.
> Batch-1: CREATE_TABLE(t1) -> ADD_PARTITION(t1.p1) -> DROP_PARTITION (t1.p1)
> Batch-2: DROP_TABLE(t1) ->  CREATE_TABLE(t1) -> ADD_PARTITION(t1.p1) -> 
> DROP_PARTITION (t1.p1)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-17502) Reuse of default session should not throw an exception in LLAP w/ Tez

2018-04-10 Thread Thai Bui (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17502?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thai Bui updated HIVE-17502:

Attachment: HIVE-17502.2.patch

> Reuse of default session should not throw an exception in LLAP w/ Tez
> -
>
> Key: HIVE-17502
> URL: https://issues.apache.org/jira/browse/HIVE-17502
> Project: Hive
>  Issue Type: Bug
>  Components: llap, Tez
>Affects Versions: 2.1.1, 2.2.0
> Environment: HDP 2.6.1.0-129, Hue 4
>Reporter: Thai Bui
>Assignee: Thai Bui
>Priority: Major
> Fix For: 3.1.0
>
> Attachments: HIVE-17502.2.patch, HIVE-17502.patch
>
>
> Hive2 w/ LLAP on Tez doesn't allow a currently used default session to be 
> skipped, mostly because of this line: 
> https://github.com/apache/hive/blob/master/ql/src/java/org/apache/hadoop/hive/ql/exec/tez/TezSessionPoolManager.java#L365.
> However, some clients, such as Hue 4, allow multiple sessions to be used per 
> user. Under this configuration, a Thrift client will send a request to either 
> reuse or open a new session. The reuse request could include the session id 
> of a currently used snippet being executed in Hue, which causes HS2 to throw 
> an exception:
> {noformat}
> 2017-09-10T17:51:36,548 INFO  [Thread-89]: tez.TezSessionPoolManager 
> (TezSessionPoolManager.java:canWorkWithSameSession(512)) - The current user: 
> hive, session user: hive
> 2017-09-10T17:51:36,549 ERROR [Thread-89]: exec.Task 
> (TezTask.java:execute(232)) - Failed to execute tez graph.
> org.apache.hadoop.hive.ql.metadata.HiveException: The pool session 
> sessionId=5b61a578-6336-41c5-860d-9838166f97fe, queueName=llap, user=hive, 
> doAs=false, isOpen=true, isDefault=true, expires in 591015330ms should have 
> been returned to the pool
>   at 
> org.apache.hadoop.hive.ql.exec.tez.TezSessionPoolManager.canWorkWithSameSession(TezSessionPoolManager.java:534)
>  ~[hive-exec-2.1.0.2.6.1.0-129.jar:2.1.0.2.6.1.0-129]
>   at 
> org.apache.hadoop.hive.ql.exec.tez.TezSessionPoolManager.getSession(TezSessionPoolManager.java:544)
>  ~[hive-exec-2.1.0.2.6.1.0-129.jar:2.1.0.2.6.1.0-129]
>   at org.apache.hadoop.hive.ql.exec.tez.TezTask.execute(TezTask.java:147) 
> [hive-exec-2.1.0.2.6.1.0-129.jar:2.1.0.2.6.1.0-129]
>   at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:197) 
> [hive-exec-2.1.0.2.6.1.0-129.jar:2.1.0.2.6.1.0-129]
>   at 
> org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:100) 
> [hive-exec-2.1.0.2.6.1.0-129.jar:2.1.0.2.6.1.0-129]
>   at org.apache.hadoop.hive.ql.exec.TaskRunner.run(TaskRunner.java:79) 
> [hive-exec-2.1.0.2.6.1.0-129.jar:2.1.0.2.6.1.0-129]
> {noformat}
> Note that every query is issued as a single 'hive' user to share the LLAP 
> daemon pool; a pre-determined number of AMs is initialized at setup 
> time. Thus, HS2 should allow new sessions from a Thrift client to be used out 
> of the pool, or an existing session to be skipped and an unused session from 
> the pool to be returned. The logic to throw an exception in the 
> `canWorkWithSameSession` method doesn't make sense to me.
> I have a solution to fix this issue in my local branch at 
> https://github.com/thaibui/hive/commit/078a521b9d0906fe6c0323b63e567f6eee2f3a70.
>  When applied, the log will look like this:
> {noformat}
> 2017-09-10T09:15:33,578 INFO  [Thread-239]: tez.TezSessionPoolManager 
> (TezSessionPoolManager.java:canWorkWithSameSession(533)) - Skipping default 
> session sessionId=6638b1da-0f8a-405e-85f0-9586f484e6de, queueName=llap, 
> user=hive, doAs=false, isOpen=true, isDefault=true, expires in 591868732ms 
> since it is being used.
> {noformat}
> A test case is provided in my branch to demonstrate how it works. If possible 
> I would like this patch to be applied to version 2.1, 2.2 and master. Since 
> we are using 2.1 LLAP in production with Hue 4, this patch is critical to our 
> success.
> Alternatively, if this patch is too broad in scope, I propose adding an 
> option to allow "skipping of currently used default sessions". With this new 
> option defaulting to "false", existing behavior won't change unless the option 
> is turned on.
> I will prepare an official patch if this change to master and/or the other 
> branches is acceptable. I'm not a contributor or committer; this will be my 
> first time contributing to Hive and the Apache foundation. Any early review 
> is greatly appreciated, thanks!
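
A minimal illustrative sketch of the "skip busy default sessions" idea described above, using simplified stand-in types rather than the real TezSessionPoolManager API:

{code:java}
import java.util.List;

// Simplified stand-in for the session pool; only the skip-instead-of-throw
// behavior is shown here.
public class SessionPoolSketch {

  static class PooledSession {
    final String id;
    volatile boolean inUse;
    PooledSession(String id) { this.id = id; }
  }

  /** Returns the first default session that is not currently in use, or null if all are busy. */
  public static PooledSession getIdleDefaultSession(List<PooledSession> defaultSessions) {
    for (PooledSession session : defaultSessions) {
      if (session.inUse) {
        // Instead of throwing ("... should have been returned to the pool"),
        // log and move on to the next pooled session.
        System.out.println("Skipping default session " + session.id + " since it is being used.");
        continue;
      }
      session.inUse = true;
      return session;
    }
    return null; // the caller may open a new, non-pooled session in this case
  }
}
{code}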



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-18696) The partition folders might not get cleaned up properly in the HiveMetaStore.add_partitions_core method if an exception occurs

2018-04-10 Thread Peter Vary (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-18696?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Vary updated HIVE-18696:
--
   Resolution: Fixed
Fix Version/s: 3.1.0
   Status: Resolved  (was: Patch Available)

Pushed to master.
Thanks for the patch [~kuczoram]!

> The partition folders might not get cleaned up properly in the 
> HiveMetaStore.add_partitions_core method if an exception occurs
> --
>
> Key: HIVE-18696
> URL: https://issues.apache.org/jira/browse/HIVE-18696
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Reporter: Marta Kuczora
>Assignee: Marta Kuczora
>Priority: Major
> Fix For: 3.1.0
>
> Attachments: HIVE-18696.1.patch, HIVE-18696.2.patch, 
> HIVE-18696.3.patch, HIVE-18696.4.patch, HIVE-18696.5.patch, HIVE-18696.6.patch
>
>
> When trying to add multiple partitions and one of them cannot be created 
> successfully, none of the partitions are created, but the folders might not 
> be cleaned up properly. See the test case "testAddPartitionsOneInvalid" in 
> the TestAddPartitions test.
> This is the problematic code in the HiveMetaStore.add_partitions_core method:
> {code:java}
> for (final Partition part : parts) {
>   if (!part.getTableName().equals(tblName) || 
> !part.getDbName().equals(dbName)) {
> throw new MetaException("Partition does not belong to target 
> table "
> + dbName + "." + tblName + ": " + part);
>   }
>   boolean shouldAdd = startAddPartition(ms, part, ifNotExists);
>   if (!shouldAdd) {
> existingParts.add(part);
> LOG.info("Not adding partition " + part + " as it already 
> exists");
> continue;
>   }
>   final UserGroupInformation ugi;
>   try {
> ugi = UserGroupInformation.getCurrentUser();
>   } catch (IOException e) {
> throw new RuntimeException(e);
>   }
>   partFutures.add(threadPool.submit(new Callable() {
> @Override
> public Partition call() throws Exception {
>   ugi.doAs(new PrivilegedExceptionAction() {
> @Override
> public Object run() throws Exception {
>   try {
> boolean madeDir = createLocationForAddedPartition(table, 
> part);
> if (addedPartitions.put(new PartValEqWrapper(part), 
> madeDir) != null) {
>   // Technically, for ifNotExists case, we could insert 
> one and discard the other
>   // because the first one now "exists", but it seems 
> better to report the problem
>   // upstream as such a command doesn't make sense.
>   throw new MetaException("Duplicate partitions in the 
> list: " + part);
> }
> initializeAddedPartition(table, part, madeDir);
>   } catch (MetaException e) {
> throw new IOException(e.getMessage(), e);
>   }
>   return null;
> }
>   });
>   return part;
> }
>   }));
> }
> {code}
> When going through the partitions, let's say the threads for the first two 
> partitions are successfully submitted to create the folders, but an exception 
> occurs for the third partition in the code before submitting the thread. (This 
> can happen if the partition has a different table or db name than the others, 
> or if it has an invalid value.)
>  In this case the execution will jump to the finally block, where the folders 
> in the "addedPartitions" map will be cleaned up. However, it can happen that 
> the threads for the first two partitions have not finished creating the 
> folders yet, so the map can be empty or it can contain only one of the 
> partitions.
> This issue also happens in the HiveMetastore.add_partitions_pspec_core 
> method, as this code part is the same as in the add_partitions_core method.
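
A minimal sketch of the ordering implied above, assuming the idea is to wait for every submitted folder-creation future before the cleanup inspects the map of added partitions. The types are stand-ins, not the real HiveMetaStore code:

{code:java}
import java.util.List;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.Future;

// Sketch only: shows "wait for the futures first, then clean up",
// not the actual HiveMetaStore.add_partitions_core implementation.
public class AddPartitionsCleanupSketch {

  public static void waitForFolderCreation(List<Future<?>> partFutures) {
    // Make sure every submitted folder-creation task has finished (or failed)
    // before any cleanup walks the map of created folders. Otherwise cleanup
    // can run while some folders are still being created and miss them.
    for (Future<?> future : partFutures) {
      try {
        future.get();
      } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
      } catch (ExecutionException e) {
        // Folder creation failed for this partition; folders that *were*
        // created are still recorded and will be removed by the cleanup.
      }
    }
  }
}
{code}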



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19076) Fix NPE and TApplicationException in function related HiveMetastore methods

2018-04-10 Thread Peter Vary (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19076?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Vary updated HIVE-19076:
--
   Resolution: Fixed
Fix Version/s: 3.1.0
   Status: Resolved  (was: Patch Available)

Pushed to master.
Thanks for the patch [~kuczoram]!

> Fix NPE and TApplicationException in function related HiveMetastore methods
> ---
>
> Key: HIVE-19076
> URL: https://issues.apache.org/jira/browse/HIVE-19076
> Project: Hive
>  Issue Type: Bug
>Reporter: Marta Kuczora
>Assignee: Marta Kuczora
>Priority: Minor
> Fix For: 3.1.0
>
> Attachments: HIVE-19076.1.patch, HIVE-19076.2.patch, 
> HIVE-19076.3.patch
>
>
> The TestFunctions tests revealed that an NPE is thrown in some cases. These NPEs 
> could be prevented with a simple null check, and a MetaException with a proper 
> error message should be thrown instead.
>  Example: an NPE is thrown in the following test cases
>  * testCreateFunctionNullFunctionName
>  * testCreateFunctionNullDatabaseName
>  * testCreateFunctionNullOwnerType
>  * testCreateFunctionNullFunctionType
>  * testGetFunctionNullDatabase
>  * testDropFunctionNullDatabase
>  * testDropFunctionNullFunctionName
>  * testAlterFunctionNullDatabase
>  * testAlterFunctionNullFunctionName
>  * testAlterFunctionNullFunction
>  * testAlterFunctionNullFunctionNameInNew
>  * testAlterFunctionNullDatabaseNameInNew
>  * testAlterFunctionNullOwnerTypeInNew
>  * testAlterFunctionNullFunctionTypeInNew
> Also there are some alter function tests where InvalidObjectException is 
> thrown with Embedded MetaStore, but TApplicationException is thrown with 
> Remote MetaStore. The reason is that the InvalidObjectException is not 
> defined for the alter_function method in the thrift interface, so we got the 
> TApplicationException when the InvalidObjectException was thrown. In these 
> cases the InvalidObjectException could be handled on the server side and 
> re-thrown as a MetaException.
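
A minimal sketch of the kind of null check meant here, assuming the thrift-generated Function getters; the surrounding class is illustrative, not the real HMSHandler:

{code:java}
import org.apache.hadoop.hive.metastore.api.Function;
import org.apache.hadoop.hive.metastore.api.MetaException;

// Illustrative guard only; the real create_function/alter_function methods
// live in HiveMetaStore.HMSHandler and do much more than this.
public class CreateFunctionGuardSketch {

  public static void validate(Function func) throws MetaException {
    if (func == null) {
      throw new MetaException("New function cannot be null");
    }
    if (func.getFunctionName() == null || func.getDbName() == null) {
      // Fail with a clear message instead of an NPE deeper in the call stack.
      throw new MetaException("Function name and database name cannot be null");
    }
    if (func.getOwnerType() == null || func.getFunctionType() == null) {
      throw new MetaException("Function owner type and function type cannot be null");
    }
  }
}
{code}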



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-13697) ListBucketing feature does not support uppercase string.

2018-04-10 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-13697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16432342#comment-16432342
 ] 

Hive QA commented on HIVE-13697:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12803696/HIVE-13697.1.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 33 failed/errored test(s), 13247 tests 
executed
*Failed tests:*
{noformat}
TestBeeLineDriver - did not produce a TEST-*.xml file (likely timed out) 
(batchId=253)
TestDbNotificationListener - did not produce a TEST-*.xml file (likely timed 
out) (batchId=247)
TestDummy - did not produce a TEST-*.xml file (likely timed out) (batchId=253)
TestHCatHiveCompatibility - did not produce a TEST-*.xml file (likely timed 
out) (batchId=247)
TestMiniDruidCliDriver - did not produce a TEST-*.xml file (likely timed out) 
(batchId=253)
TestMiniDruidKafkaCliDriver - did not produce a TEST-*.xml file (likely timed 
out) (batchId=253)
TestNegativeCliDriver - did not produce a TEST-*.xml file (likely timed out) 
(batchId=95)

[nopart_insert.q,insert_into_with_schema.q,input41.q,having1.q,create_table_failure3.q,default_constraint_invalid_default_value.q,database_drop_not_empty_restrict.q,windowing_after_orderby.q,orderbysortby.q,subquery_select_distinct2.q,authorization_uri_alterpart_loc.q,udf_last_day_error_1.q,constraint_duplicate_name.q,create_table_failure4.q,alter_tableprops_external_with_notnull_constraint.q,semijoin5.q,udf_format_number_wrong4.q,deletejar.q,exim_11_nonpart_noncompat_sorting.q,show_tables_bad_db2.q,drop_func_nonexistent.q,nopart_load.q,alter_table_non_partitioned_table_cascade.q,check_constraint_subquery.q,load_wrong_fileformat.q,check_constraint_udtf.q,lockneg_try_db_lock_conflict.q,udf_field_wrong_args_len.q,create_table_failure2.q,create_with_fk_constraints_enforced.q,groupby2_map_skew_multi_distinct.q,authorization_update_noupdatepriv.q,show_columns2.q,authorization_insert_noselectpriv.q,orc_replace_columns3_acid.q,compare_double_bigint.q,authorization_set_nonexistent_conf.q,alter_rename_partition_failure3.q,split_sample_wrong_format2.q,create_with_fk_pk_same_tab.q,compare_double_bigint_2.q,authorization_show_roles_no_admin.q,materialized_view_authorization_rebuild_no_grant.q,unionLimit.q,authorization_revoke_table_fail2.q,authorization_insert_noinspriv.q,duplicate_insert3.q,authorization_desc_table_nosel.q,stats_noscan_non_native.q,orc_change_serde_acid.q,create_or_replace_view7.q,exim_07_nonpart_noncompat_ifof.q,create_with_unique_constraints_enforced.q,udf_concat_ws_wrong2.q,fileformat_bad_class.q,merge_negative_2.q,exim_15_part_nonpart.q,authorization_not_owner_drop_view.q,external1.q,authorization_uri_insert.q,create_with_fk_wrong_ref.q,columnstats_tbllvl_incorrect_column.q,authorization_show_parts_nosel.q,authorization_not_owner_drop_tab.q,external2.q,authorization_deletejar.q,temp_table_create_like_partitions.q,udf_greatest_error_1.q,ptf_negative_AggrFuncsWithNoGBYNoPartDef.q,alter_view_as_select_not_exist.q,touch1.q,groupby3_map_skew_multi_distinct.q,insert_into_notnull_constraint.q,exchange_partition_neg_partition_missing.q,groupby_cube_multi_gby.q,columnstats_tbllvl.q,drop_invalid_constraint2.q,alter_table_add_partition.q,update_not_acid.q,archive5.q,alter_table_constraint_invalid_pk_col.q,ivyDownload.q,udf_instr_wrong_type.q,bad_sample_clause.q,authorization_not_owner_drop_tab2.q,authorization_alter_db_owner.q,show_columns1.q,orc_type_promotion3.q,create_view_failure8.q,strict_join.q,udf_add_months_error_1.q,groupby_cube2.q,groupby_cube1.q,groupby_rollup1.q,genericFileFormat.q,invalid_cast_from_binary_4.q,drop_invalid_constraint1.q,serde_regex.q,show_partitions1.q,check_constraint_nonboolean_expr.q,invalid_cast_from_binary_6.q,create_with_multi_pk_constraint.q,udf_field_wrong_type.q,groupby_grouping_sets4.q,groupby_grouping_sets3.q,insertsel_fail.q,udf_locate_wrong_type.q,orc_type_promotion1_acid.q,set_table_property.q,create_or_replace_view2.q,groupby_grouping_sets2.q,alter_view_failure.q,distinct_windowing_failure1.q,invalid_t_alter2.q,alter_table_constraint_invalid_fk_col1.q,invalid_varchar_length_2.q,authorization_show_grant_otheruser_alltabs.q,subquery_windowing_corr.q,compact_non_acid_table.q,authorization_view_4.q,authorization_disallow_transform.q,materialized_view_authorization_rebuild_other.q,authorization_fail_4.q,dbtxnmgr_nodblock.q,set_hiveconf_internal_variable1.q,input_part0_neg.q,udf_printf_wrong3.q,load_orc_negative2.q,druid_buckets.q,archive2.q,authorization_addjar.q,invalid_sum_syntax.q,insert_into_with_schema1.q,udf_add_months_error_2.q,dyn_part_max_per_n
ode.q,authorization_revoke_table_fail1.q,udf_printf_wrong2.q,archive_multi3.q,udf_printf_wrong1.q,subquery_subquery_chain.q,authorization_view_disable_cbo_4.q,no_matching_udf.q,create_view_failure7.q,drop_native_udf.q,truncate_column

[jira] [Commented] (HIVE-16944) schematool -dbType hive should give some more feedback/assistance

2018-04-10 Thread Carter Shanklin (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-16944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16432349#comment-16432349
 ] 

Carter Shanklin commented on HIVE-16944:


[~bharos92] I'm not working with Hive these days so I won't be able to repro 
the NPE. Feel free to close if it seems the NPE went away for whatever reason.

> schematool -dbType hive should give some more feedback/assistance
> -
>
> Key: HIVE-16944
> URL: https://issues.apache.org/jira/browse/HIVE-16944
> Project: Hive
>  Issue Type: Bug
>Reporter: Carter Shanklin
>Assignee: Bharathkrishna Guruvayoor Murali
>Priority: Major
> Attachments: HIVE-16944.1.patch
>
>
> Given the other ways schematool is used, the most obvious guess I would have 
> for initializing the Hive schema is:
> {code}
> schematool -metaDbType mysql -dbType hive -initSchema
> {code}
> Unfortunately that fails with this NPE:
> {code}
> Exception in thread "main" java.lang.NullPointerException
>   at 
> org.apache.hadoop.hive.metastore.tools.HiveSchemaHelper.getDbCommandParser(HiveSchemaHelper.java:570)
>   at 
> org.apache.hadoop.hive.metastore.tools.HiveSchemaHelper.getDbCommandParser(HiveSchemaHelper.java:564)
>   at 
> org.apache.hadoop.hive.metastore.tools.HiveSchemaHelper.getDbCommandParser(HiveSchemaHelper.java:560)
>   at 
> org.apache.hadoop.hive.metastore.tools.HiveSchemaHelper$HiveCommandParser.(HiveSchemaHelper.java:373)
>   at 
> org.apache.hadoop.hive.metastore.tools.HiveSchemaHelper.getDbCommandParser(HiveSchemaHelper.java:573)
>   at 
> org.apache.hive.beeline.HiveSchemaTool.getDbCommandParser(HiveSchemaTool.java:165)
>   at 
> org.apache.hive.beeline.HiveSchemaTool.(HiveSchemaTool.java:101)
>   at org.apache.hive.beeline.HiveSchemaTool.(HiveSchemaTool.java:90)
>   at org.apache.hive.beeline.HiveSchemaTool.main(HiveSchemaTool.java:1166)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at org.apache.hadoop.util.RunJar.run(RunJar.java:233)
>   at org.apache.hadoop.util.RunJar.main(RunJar.java:148)
> {code}
> Two additional arguments are needed:
> -url jdbc:hive2://localhost:1/default -driver 
> org.apache.hive.jdbc.HiveDriver
> If the user does not supply these for dbType hive, schematool should detect 
> and error out appropriately, plus give an example of what it's looking for.
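
A rough sketch of the fail-fast validation being asked for; the class and method names are hypothetical, not the real HiveSchemaTool code:

{code:java}
// Hypothetical validation sketch; "dbType", "url" and "driver" mirror the CLI
// options mentioned above, but this class is not the real HiveSchemaTool.
public class SchemaToolArgCheckSketch {

  public static void checkHiveDbType(String dbType, String url, String driver) {
    if (!"hive".equalsIgnoreCase(dbType)) {
      return;
    }
    if (url == null || driver == null) {
      // Fail fast with guidance instead of an NPE later in HiveSchemaHelper.
      throw new IllegalArgumentException(
          "-dbType hive also requires -url and -driver, for example: "
              + "-url jdbc:hive2://<host>:<port>/default "
              + "-driver org.apache.hive.jdbc.HiveDriver");
    }
  }
}
{code}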



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18265) desc formatted/extended or show create table can not fully display the result when field or table comment contains tab character

2018-04-10 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18265?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16432391#comment-16432391
 ] 

Hive QA commented on HIVE-18265:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
34s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
14s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
50s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
3s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
16s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
58s{color} | {color:red} ql: The patch generated 3 new + 894 unchanged - 0 
fixed = 897 total (was 894) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 17m 14s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-10119/dev-support/hive-personality.sh
 |
| git revision | master / 820db60 |
| Default Java | 1.8.0_111 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-10119/yetus/diff-checkstyle-ql.txt
 |
| modules | C: ql U: ql |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-10119/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> desc formatted/extended or show create table can not fully display the result 
> when field or table comment contains tab character
> 
>
> Key: HIVE-18265
> URL: https://issues.apache.org/jira/browse/HIVE-18265
> Project: Hive
>  Issue Type: Bug
>  Components: CLI
>Affects Versions: 1.2.1, 3.0.0
>Reporter: Hui Huang
>Assignee: Hui Huang
>Priority: Major
> Fix For: 3.1.0
>
> Attachments: HIVE-18265.1.patch, HIVE-18265.patch
>
>
> Here are some examples:
> create table test_comment (id1 string comment 'full_\tname1', id2 string 
> comment 'full_\tname2', id3 string comment 'full_\tname3') stored as textfile;
> When executing `show create table test_comment`, we can see the following 
> content in the console:
> {quote}
> createtab_stmt
> CREATE TABLE `test_comment`(
>   `id1` string COMMENT 'full_
>   `id2` string COMMENT 'full_
>   `id3` string COMMENT 'full_
> ROW FORMAT SERDE
>   'org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe'
> STORED AS INPUTFORMAT
>   'org.apache.hadoop.mapred.TextInputFormat'
> OUTPUTFORMAT
>   'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'
> LOCATION
>   'hdfs://xxx/user/huanghui/warehouse/huanghuitest.db/test_comment'
> TBLPROPERTIES (
>   'transient_lastDdlTime'='1513095570')
> {quote}
> And the output of `desc formatted table ` is affected in a similar way:
> {quote}
> col_name  data_type   comment
> \# col_name   data_type   comment
> id1   string   
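
A small sketch of one possible fix, assuming the approach is to escape tab and newline characters in comments before printing them; this is not the attached patch:

{code:java}
// Sketch of comment escaping; not the actual HIVE-18265 patch.
public class CommentEscapeSketch {

  /** Replaces raw tab/newline characters so 'show create table' output stays on one line. */
  public static String escapeComment(String comment) {
    if (comment == null) {
      return null;
    }
    return comment.replace("\t", "\\t").replace("\n", "\\n");
  }

  public static void main(String[] args) {
    // Prints: full_\tname1
    System.out.println(escapeComment("full_\tname1"));
  }
}
{code}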

[jira] [Assigned] (HIVE-19151) Update expected result for some TestNegativeCliDriver tests

2018-04-10 Thread Alan Gates (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19151?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Gates reassigned HIVE-19151:
-


> Update expected result for some TestNegativeCliDriver tests
> ---
>
> Key: HIVE-19151
> URL: https://issues.apache.org/jira/browse/HIVE-19151
> Project: Hive
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 3.0.0
>Reporter: Alan Gates
>Assignee: Alan Gates
>Priority: Blocker
> Fix For: 3.0.0
>
>
> Several TestNegativeCliDriver tests are failing because error messages have 
> changed.  They need their expected results updated.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19148) derby/MV_CREATION_METADATA misses CAT_NAME column

2018-04-10 Thread Alan Gates (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19148?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16432465#comment-16432465
 ] 

Alan Gates commented on HIVE-19148:
---

You are correct that this was missed in the metastore/scripts/upgrade/derby 
upgrade files.  However, now that HIVE-18775 is in, this shouldn't matter as 
schematool should now be properly pulling the install and upgrade scripts from 
standalone-metastore/src/main/sql/derby instead.

> derby/MV_CREATION_METADATA misses CAT_NAME column
> -
>
> Key: HIVE-19148
> URL: https://issues.apache.org/jira/browse/HIVE-19148
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
> Attachments: HIVE-19148.01.patch
>
>
> It seems like the upgrade patches for derby miss the CAT_NAME column for the table 
> MV_CREATION_METADATA



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (HIVE-19151) Update expected result for some TestNegativeCliDriver tests

2018-04-10 Thread Alan Gates (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19151?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Gates resolved HIVE-19151.
---
Resolution: Duplicate

> Update expected result for some TestNegativeCliDriver tests
> ---
>
> Key: HIVE-19151
> URL: https://issues.apache.org/jira/browse/HIVE-19151
> Project: Hive
>  Issue Type: Bug
>  Components: Tests
>Affects Versions: 3.0.0
>Reporter: Alan Gates
>Assignee: Alan Gates
>Priority: Blocker
> Fix For: 3.0.0
>
>
> Several TestNegativeCliDriver tests are failing because error messages have 
> changed.  They need their expected results updated.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18265) desc formatted/extended or show create table can not fully display the result when field or table comment contains tab character

2018-04-10 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18265?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16432524#comment-16432524
 ] 

Hive QA commented on HIVE-18265:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12902065/HIVE-18265.1.patch

{color:green}SUCCESS:{color} +1 due to 4 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 66 failed/errored test(s), 13601 tests 
executed
*Failed tests:*
{noformat}
TestBeeLineDriver - did not produce a TEST-*.xml file (likely timed out) 
(batchId=253)
TestCopyUtils - did not produce a TEST-*.xml file (likely timed out) 
(batchId=231)
TestDbNotificationListener - did not produce a TEST-*.xml file (likely timed 
out) (batchId=247)
TestDummy - did not produce a TEST-*.xml file (likely timed out) (batchId=253)
TestHCatHiveCompatibility - did not produce a TEST-*.xml file (likely timed 
out) (batchId=247)
TestMiniDruidCliDriver - did not produce a TEST-*.xml file (likely timed out) 
(batchId=253)
TestMiniDruidKafkaCliDriver - did not produce a TEST-*.xml file (likely timed 
out) (batchId=253)
TestNegativeCliDriver - did not produce a TEST-*.xml file (likely timed out) 
(batchId=96)

[udf_invalid.q,authorization_uri_export.q,default_constraint_complex_default_value.q,druid_datasource2.q,check_constraint_max_length.q,view_update.q,default_partition_name.q,create_table_failure5.q,authorization_public_create.q,load_wrong_fileformat_rc_seq.q,default_constraint_invalid_type.q,altern1.q,describe_xpath1.q,drop_view_failure2.q,temp_table_rename.q,invalid_select_column_with_subquery.q,udf_trunc_error1.q,insert_view_failure.q,dbtxnmgr_nodbunlock.q,authorization_show_columns.q,cte_recursion.q,merge_constraint_notnull.q,load_part_nospec.q,clusterbyorderby.q,orc_type_promotion2.q,ctas_noperm_loc.q,udf_min.q,udf_instr_wrong_args_len.q,invalid_create_tbl2.q,part_col_complex_type.q,authorization_drop_db_empty.q,smb_mapjoin_14.q,subquery_scalar_multi_rows.q,alter_partition_coltype_2columns.q,subquery_corr_in_agg.q,insert_overwrite_notnull_constraint.q,authorization_show_grant_otheruser_wtab.q,regex_col_groupby.q,ptf_negative_DuplicateWindowAlias.q,exim_22_export_authfail.q,authorization_insert_noinspriv.q,udf_likeany_wrong1.q,groupby_key.q,ambiguous_col.q,groupby3_multi_distinct.q,authorization_alter_drop_ptn.q,invalid_cast_from_binary_5.q,show_create_table_does_not_exist.q,invalid_select_column.q,exim_20_managed_location_over_existing.q,interval_3.q,authorization_compile.q,join35.q,udf_concat_ws_wrong3.q,create_or_replace_view8.q,create_external_with_notnull_constraint.q,split_sample_out_of_range.q,materialized_view_no_transactional_rewrite.q,authorization_show_grant_otherrole.q,create_with_constraints_duplicate_name.q,invalid_stddev_samp_syntax.q,authorization_view_disable_cbo_7.q,autolocal1.q,avro_non_nullable_union.q,load_orc_negative_part.q,drop_view_failure1.q,columnstats_partlvl_invalid_values_autogather.q,exim_13_nonnative_import.q,alter_table_wrong_regex.q,udf_next_day_error_2.q,authorization_select.q,udf_trunc_error2.q,authorization_view_7.q,udf_format_number_wrong5.q,touch2.q,orc_type_promotion1.q,lateral_view_alias.q,show_tables_bad_db1.q,unset_table_property.q,alter_non_native.q,nvl_mismatch_type.q,load_orc_negative3.q,invalid_distinct1.q,authorization_grant_server.q,orc_type_promotion3_acid.q,hms_using_serde_alter_table_update_columns.q,show_tables_bad1.q,macro_unused_parameter.q,drop_invalid_constraint3.q,drop_partition_filter_failure.q,char_pad_convert_fail3.q,exim_23_import_exist_authfail.q,drop_invalid_constraint4.q,authorization_create_macro1.q,archive1.q,subquery_multiple_cols_in_select.q,change_hive_hdfs_session_path.q,udf_trunc_error3.q,invalid_variance_syntax.q,authorization_truncate_2.q,invalid_avg_syntax.q,invalid_select_column_with_tablename.q,mm_truncate_cols.q,groupby_grouping_sets1.q,druid_location.q,groupby2_multi_distinct.q,authorization_sba_drop_table.q,dynamic_partitions_with_whitelist.q,compare_string_bigint_2.q,udf_greatest_error_2.q,authorization_view_6.q,show_tablestatus.q,duplicate_alias_in_transform_schema.q,create_with_fk_uk_same_tab.q,udtf_not_supported3.q,alter_table_constraint_invalid_fk_col2.q,udtf_not_supported1.q,dbtxnmgr_notableunlock.q,ptf_negative_InvalidValueBoundary.q,alter_table_constraint_duplicate_pk.q,udf_printf_wrong4.q,create_view_failure9.q,udf_elt_wrong_type.q,selectDistinctStarNeg_1.q,invalid_mapjoin1.q,load_stored_as_dirs.q,input1.q,udf_sort_array_wrong1.q,invalid_distinct2.q,invalid_select_fn.q,authorization_role_grant_otherrole.q,archive4.q,load_nonpart_authfail.q,recursive_view.q,authorization_view_disable_cbo_1.q,desc_failure4.q,create_not_acid.q,udf_sort_array_wrong3.q,char_pad_convert_fail0.q,udf_map_values_arg_type.q,al
ter_view_failure6_2.q,alter_partition_change_col_nonexist.q,update_non_acid_table.q,authorization_view_disable_cbo_5.q,ct_noperm_loc.q,interval_1.

[jira] [Updated] (HIVE-17824) msck repair table should drop the missing partitions from metastore

2018-04-10 Thread Janaki Lahorani (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-17824?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Janaki Lahorani updated HIVE-17824:
---
Attachment: HIVE-17824.3.patch

> msck repair table should drop the missing partitions from metastore
> ---
>
> Key: HIVE-17824
> URL: https://issues.apache.org/jira/browse/HIVE-17824
> Project: Hive
>  Issue Type: Improvement
>Reporter: Vihang Karajgaonkar
>Assignee: Janaki Lahorani
>Priority: Major
> Attachments: HIVE-17824.1.patch, HIVE-17824.2.patch, 
> HIVE-17824.3.patch
>
>
> {{msck repair table }} is often used in environments where the new 
> partitions are loaded as directories on HDFS or S3 and users want to create 
> the missing partitions in bulk. However, currently it only supports addition 
> of missing partitions. If there are any partitions which are present in the 
> metastore but not on the FileSystem, it should also delete them so that it 
> truly repairs the table metadata.
> We should be careful not to break backwards compatibility, so we should 
> introduce a new config or keyword to add support for deleting unnecessary 
> partitions from the metastore. This way users who want the old behavior can 
> easily turn it off. 
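
A minimal sketch of the set difference involved (partitions known to the metastore minus directories that still exist on the filesystem), guarded by an opt-in flag as suggested above; names are illustrative, not the real HiveMetaStoreChecker code:

{code:java}
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Illustrative only; the real implementation lives elsewhere in the msck code path.
public class MsckDropMissingSketch {

  /** Returns partitions present in the metastore but absent on the filesystem. */
  public static Set<String> partitionsToDrop(List<String> metastorePartitions,
                                             Set<String> filesystemPartitionDirs,
                                             boolean dropMissingEnabled) {
    Set<String> toDrop = new HashSet<>();
    if (!dropMissingEnabled) {
      return toDrop; // old behavior: only add missing partitions, never drop
    }
    for (String partition : metastorePartitions) {
      if (!filesystemPartitionDirs.contains(partition)) {
        toDrop.add(partition);
      }
    }
    return toDrop;
  }
}
{code}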



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18279) Incorrect condition in StatsOpimizer

2018-04-10 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18279?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16432550#comment-16432550
 ] 

Hive QA commented on HIVE-18279:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
1s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
19s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
17s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
46s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
5s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 16m 53s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-10120/dev-support/hive-personality.sh
 |
| git revision | master / 644932d |
| Default Java | 1.8.0_111 |
| modules | C: ql U: ql |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-10120/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Incorrect condition in StatsOpimizer
> 
>
> Key: HIVE-18279
> URL: https://issues.apache.org/jira/browse/HIVE-18279
> Project: Hive
>  Issue Type: Bug
>  Components: Statistics
>Reporter: Oleksiy Sayankin
>Assignee: Oleksiy Sayankin
>Priority: Major
> Fix For: 3.1.0
>
> Attachments: HIVE-18279.1.patch
>
>
> At the moment {{StatsOpimizer}} has the following code
> {code}
> if (rowCnt == null) {
>   // if rowCnt < 1 than its either empty table or table on which 
> stats are not
>   //  computed We assume the worse and don't attempt to optimize.
>   Logger.debug("Table doesn't have up to date stats " + 
> tbl.getTableName());
>   rowCnt = null;
> }
> {code}
> in the method {{private Long getRowCnt()}}. The condition 
> {code}
> if (rowCnt == null) {
> {code}
> should be changed to 
> {code}
> if (rowCnt == null || rowCnt == 0) {
> {code}
> because a value of 0 also means that table stats may not have been computed.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19089) Create/Replicate Allocate write-id event

2018-04-10 Thread mahesh kumar behera (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19089?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mahesh kumar behera updated HIVE-19089:
---
Attachment: HIVE-19089.05.patch

> Create/Replicate Allocate write-id event
> 
>
> Key: HIVE-19089
> URL: https://issues.apache.org/jira/browse/HIVE-19089
> Project: Hive
>  Issue Type: Sub-task
>  Components: repl, Transactions
>Affects Versions: 3.0.0
>Reporter: mahesh kumar behera
>Assignee: mahesh kumar behera
>Priority: Major
>  Labels: ACID, DR, replication
> Fix For: 3.1.0
>
> Attachments: HIVE-19089.01.patch, HIVE-19089.02.patch, 
> HIVE-19089.03.patch, HIVE-19089.04.patch, HIVE-19089.05.patch
>
>
> *EVENT_ALLOCATE_WRITE_ID*
> *Source Warehouse:*
>  * Create new event type EVENT_ALLOCATE_WRITE_ID with related message format 
> etc.
>  * Capture this event when a table write ID is allocated from the sequence table 
> by an ACID operation.
>  * Repl dump should read this event from EventNotificationTable and dump the 
> message.
> *Target Warehouse:*
>  * Repl load should read the event from the dump and get the message.
>  * Validate whether the source txn ID from the event is in the source-target txn 
> ID map. If it is not, just no-op the event.
>  * If valid, then allocate the table write ID from the sequence table.
> *Extend the listener notify event API to add two new parameters, dbconn and 
> sqlgenerator, to add the events to the notification_log table within the same 
> transaction* 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HIVE-19153) Update golden files for few tests

2018-04-10 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19153?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan reassigned HIVE-19153:
---


> Update golden files for few tests
> -
>
> Key: HIVE-19153
> URL: https://issues.apache.org/jira/browse/HIVE-19153
> Project: Hive
>  Issue Type: Test
>  Components: Test
>Reporter: Ashutosh Chauhan
>Assignee: Ashutosh Chauhan
>Priority: Major
>
> Some golden file updates which were missed since many tests were failing.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19153) Update golden files for few tests

2018-04-10 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19153?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan updated HIVE-19153:

Attachment: HIVE-19153.patch

> Update golden files for few tests
> -
>
> Key: HIVE-19153
> URL: https://issues.apache.org/jira/browse/HIVE-19153
> Project: Hive
>  Issue Type: Test
>  Components: Test
>Reporter: Ashutosh Chauhan
>Assignee: Ashutosh Chauhan
>Priority: Major
> Attachments: HIVE-19153.patch
>
>
> Some golden file updates which were missed since many tests were failing.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (HIVE-19153) Update golden files for few tests

2018-04-10 Thread Ashutosh Chauhan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19153?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashutosh Chauhan resolved HIVE-19153.
-
   Resolution: Fixed
Fix Version/s: 3.1.0

Pushed to master.

[~djaiswal] The plan for tez_smb_1.q has changed. AFAICT it has stopped doing an SMB 
join, which was the intention of the test. You may want to take a look.

> Update golden files for few tests
> -
>
> Key: HIVE-19153
> URL: https://issues.apache.org/jira/browse/HIVE-19153
> Project: Hive
>  Issue Type: Test
>  Components: Test
>Reporter: Ashutosh Chauhan
>Assignee: Ashutosh Chauhan
>Priority: Major
> Fix For: 3.1.0
>
> Attachments: HIVE-19153.patch
>
>
> Some golden file updates which were missed since many tests were failing.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18279) Incorrect condition in StatsOpimizer

2018-04-10 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18279?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16432654#comment-16432654
 ] 

Hive QA commented on HIVE-18279:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12902099/HIVE-18279.1.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 152 failed/errored test(s), 13642 tests 
executed
*Failed tests:*
{noformat}
TestBeeLineDriver - did not produce a TEST-*.xml file (likely timed out) 
(batchId=253)
TestDbNotificationListener - did not produce a TEST-*.xml file (likely timed 
out) (batchId=247)
TestDummy - did not produce a TEST-*.xml file (likely timed out) (batchId=253)
TestHCatHiveCompatibility - did not produce a TEST-*.xml file (likely timed 
out) (batchId=247)
TestMiniDruidCliDriver - did not produce a TEST-*.xml file (likely timed out) 
(batchId=253)
TestMiniDruidKafkaCliDriver - did not produce a TEST-*.xml file (likely timed 
out) (batchId=253)
TestNegativeCliDriver - did not produce a TEST-*.xml file (likely timed out) 
(batchId=95)

[nopart_insert.q,insert_into_with_schema.q,input41.q,having1.q,create_table_failure3.q,default_constraint_invalid_default_value.q,database_drop_not_empty_restrict.q,windowing_after_orderby.q,orderbysortby.q,subquery_select_distinct2.q,authorization_uri_alterpart_loc.q,udf_last_day_error_1.q,constraint_duplicate_name.q,create_table_failure4.q,alter_tableprops_external_with_notnull_constraint.q,semijoin5.q,udf_format_number_wrong4.q,deletejar.q,exim_11_nonpart_noncompat_sorting.q,show_tables_bad_db2.q,drop_func_nonexistent.q,nopart_load.q,alter_table_non_partitioned_table_cascade.q,check_constraint_subquery.q,load_wrong_fileformat.q,check_constraint_udtf.q,lockneg_try_db_lock_conflict.q,udf_field_wrong_args_len.q,create_table_failure2.q,create_with_fk_constraints_enforced.q,groupby2_map_skew_multi_distinct.q,authorization_update_noupdatepriv.q,show_columns2.q,authorization_insert_noselectpriv.q,orc_replace_columns3_acid.q,compare_double_bigint.q,authorization_set_nonexistent_conf.q,alter_rename_partition_failure3.q,split_sample_wrong_format2.q,create_with_fk_pk_same_tab.q,compare_double_bigint_2.q,authorization_show_roles_no_admin.q,materialized_view_authorization_rebuild_no_grant.q,unionLimit.q,authorization_revoke_table_fail2.q,authorization_insert_noinspriv.q,duplicate_insert3.q,authorization_desc_table_nosel.q,stats_noscan_non_native.q,orc_change_serde_acid.q,create_or_replace_view7.q,exim_07_nonpart_noncompat_ifof.q,create_with_unique_constraints_enforced.q,udf_concat_ws_wrong2.q,fileformat_bad_class.q,merge_negative_2.q,exim_15_part_nonpart.q,authorization_not_owner_drop_view.q,external1.q,authorization_uri_insert.q,create_with_fk_wrong_ref.q,columnstats_tbllvl_incorrect_column.q,authorization_show_parts_nosel.q,authorization_not_owner_drop_tab.q,external2.q,authorization_deletejar.q,temp_table_create_like_partitions.q,udf_greatest_error_1.q,ptf_negative_AggrFuncsWithNoGBYNoPartDef.q,alter_view_as_select_not_exist.q,touch1.q,groupby3_map_skew_multi_distinct.q,insert_into_notnull_constraint.q,exchange_partition_neg_partition_missing.q,groupby_cube_multi_gby.q,columnstats_tbllvl.q,drop_invalid_constraint2.q,alter_table_add_partition.q,update_not_acid.q,archive5.q,alter_table_constraint_invalid_pk_col.q,ivyDownload.q,udf_instr_wrong_type.q,bad_sample_clause.q,authorization_not_owner_drop_tab2.q,authorization_alter_db_owner.q,show_columns1.q,orc_type_promotion3.q,create_view_failure8.q,strict_join.q,udf_add_months_error_1.q,groupby_cube2.q,groupby_cube1.q,groupby_rollup1.q,genericFileFormat.q,invalid_cast_from_binary_4.q,drop_invalid_constraint1.q,serde_regex.q,show_partitions1.q,check_constraint_nonboolean_expr.q,invalid_cast_from_binary_6.q,create_with_multi_pk_constraint.q,udf_field_wrong_type.q,groupby_grouping_sets4.q,groupby_grouping_sets3.q,insertsel_fail.q,udf_locate_wrong_type.q,orc_type_promotion1_acid.q,set_table_property.q,create_or_replace_view2.q,groupby_grouping_sets2.q,alter_view_failure.q,distinct_windowing_failure1.q,invalid_t_alter2.q,alter_table_constraint_invalid_fk_col1.q,invalid_varchar_length_2.q,authorization_show_grant_otheruser_alltabs.q,subquery_windowing_corr.q,compact_non_acid_table.q,authorization_view_4.q,authorization_disallow_transform.q,materialized_view_authorization_rebuild_other.q,authorization_fail_4.q,dbtxnmgr_nodblock.q,set_hiveconf_internal_variable1.q,input_part0_neg.q,udf_printf_wrong3.q,load_orc_negative2.q,druid_buckets.q,archive2.q,authorization_addjar.q,invalid_sum_syntax.q,insert_into_with_schema1.q,udf_add_months_error_2.q,dyn_part_max_per_n
ode.q,authorization_revoke_table_fail1.q,udf_printf_wrong2.q,archive_multi3.q,udf_printf_wrong1.q,subquery_subquery_chain.q,authorization_view_disable_cbo_4.q,no_matching_udf.q,create_view_failure7.q,drop_native_udf.q,truncate_colum

[jira] [Updated] (HIVE-19147) Fix PerfCliDrivers: Tpcds30T missed CAT_NAME change

2018-04-10 Thread Zoltan Haindrich (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19147?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zoltan Haindrich updated HIVE-19147:

Issue Type: Sub-task  (was: Bug)
Parent: HIVE-19142

> Fix PerfCliDrivers: Tpcds30T missed CAT_NAME change
> ---
>
> Key: HIVE-19147
> URL: https://issues.apache.org/jira/browse/HIVE-19147
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Zoltan Haindrich
>Assignee: Zoltan Haindrich
>Priority: Major
> Attachments: HIVE-19147.01.patch
>
>
> it seems the baked metastore dump misses the CAT_NAME field added by some 
> recent metastore change



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19144) TestSparkCliDriver:subquery_scalar - golden file needs to be udpated

2018-04-10 Thread Vineet Garg (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19144?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-19144:
---
Fix Version/s: (was: 3.1.0)
   3.0.0

> TestSparkCliDriver:subquery_scalar - golden file needs to be udpated
> 
>
> Key: HIVE-19144
> URL: https://issues.apache.org/jira/browse/HIVE-19144
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HIVE-19144.1.patch
>
>
> Looks like HIVE-18979 missed the update



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19144) TestSparkCliDriver:subquery_scalar - golden file needs to be udpated

2018-04-10 Thread Vineet Garg (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19144?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16432681#comment-16432681
 ] 

Vineet Garg commented on HIVE-19144:


Pushed to branch-3 as well

> TestSparkCliDriver:subquery_scalar - golden file needs to be udpated
> 
>
> Key: HIVE-19144
> URL: https://issues.apache.org/jira/browse/HIVE-19144
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HIVE-19144.1.patch
>
>
> Looks like HIVE-18979 missed the update



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-16041) HCatalog doesn't delete temp _SCRATCH dir when job failed

2018-04-10 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-16041?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16432696#comment-16432696
 ] 

Hive QA commented on HIVE-16041:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
1s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
18s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
26s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
15s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
14s{color} | {color:red} hcatalog/core: The patch generated 2 new + 50 
unchanged - 0 fixed = 52 total (was 50) {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch 1 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 11m  4s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-10121/dev-support/hive-personality.sh
 |
| git revision | master / 4b8c754 |
| Default Java | 1.8.0_111 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-10121/yetus/diff-checkstyle-hcatalog_core.txt
 |
| whitespace | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-10121/yetus/whitespace-tabs.txt
 |
| modules | C: hcatalog/core U: hcatalog/core |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-10121/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> HCatalog doesn't delete temp  _SCRATCH dir when job failed
> --
>
> Key: HIVE-16041
> URL: https://issues.apache.org/jira/browse/HIVE-16041
> Project: Hive
>  Issue Type: Bug
>  Components: HCatalog
>Affects Versions: 2.2.0
>Reporter: yunfei liu
>Assignee: yunfei liu
>Priority: Major
> Fix For: 3.1.0
>
> Attachments: HIVE-16041.1.patch, HIVE-16041.2.patch
>
>
> When we use HCatOutputFormat to write to an external partitioned table, a 
> temp dir (which starts with "_SCRATCH") will appear under the table path if the 
> job fails. 
> {quote}
> drwxr-xr-x   - yun hdfs  0 2017-02-27 01:45 
> /tmp/hive/_SCRATCH0.31946356159329714
> drwxr-xr-x   - yun hdfs  0 2017-02-27 01:51 
> /tmp/hive/_SCRATCH0.31946356159329714/c1=1
> drwxr-xr-x   - yun hdfs  0 2017-02-27 00:57 /tmp/hive/c1=1
> drwxr-xr-x   - yun hdfs  0 2017-02-27 01:28 /tmp/hive/c1=1/c2=2
> -rw-r--r--   3 yun hdfs 12 2017-02-27 00:57 
> /tmp/hive/c1=1/c2=2/part-r-0
> -rw-r--r--   3 yun hdfs 12 2017-02-27 01:28 
> /tmp/hive/c1=1/c2=2/part-r-0_a_1
> {quote}
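
A minimal sketch of the cleanup being requested, using the plain Hadoop FileSystem API; hooking it into the job-abort path is an assumption here, and this is not the actual HCatalog committer code:

{code:java}
import java.io.IOException;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Sketch only: shows the filesystem cleanup, not the real HCatalog committer.
public class ScratchDirCleanupSketch {

  /** Deletes leftover _SCRATCH* directories under the table path after a failed job. */
  public static void deleteScratchDirs(FileSystem fs, Path tablePath) throws IOException {
    for (FileStatus status : fs.listStatus(tablePath)) {
      if (status.isDirectory() && status.getPath().getName().startsWith("_SCRATCH")) {
        fs.delete(status.getPath(), true); // recursive delete of the temp dir
      }
    }
  }
}
{code}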



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HIVE-19154) Poll notification events to invalidate the results cache

2018-04-10 Thread Jason Dere (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19154?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Dere reassigned HIVE-19154:
-


> Poll notification events to invalidate the results cache
> 
>
> Key: HIVE-19154
> URL: https://issues.apache.org/jira/browse/HIVE-19154
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Jason Dere
>Assignee: Jason Dere
>Priority: Major
>
> Related to the work for HIVE-18609. HIVE-18609 will only invalidate entries 
> in the cache if that query is looked up again, which could potentially leave a 
> lot of undetected invalid entries in the cache taking up space, which could 
> cause other entries to be evicted. To remove these entries in a more timely 
> fashion, add a background thread that periodically checks the notification 
> events for updates to the tables used in the results cache.
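
A generic sketch of the proposed polling loop; the interval, event source, and invalidation callback are placeholders, not the actual results cache implementation:

{code:java}
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.function.LongConsumer;
import java.util.function.LongSupplier;

// Illustrative polling loop; not the real query results cache code.
public class CacheInvalidationPollerSketch {

  private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
  private long lastSeenEventId = 0;

  /**
   * @param currentMaxEventId supplies the latest notification event id from the metastore
   * @param invalidateSince   invalidates cache entries affected by events after the given id
   */
  public void start(LongSupplier currentMaxEventId, LongConsumer invalidateSince, long periodSeconds) {
    scheduler.scheduleWithFixedDelay(() -> {
      long latest = currentMaxEventId.getAsLong();
      if (latest > lastSeenEventId) {
        // New events arrived since the last poll: check which cached results
        // read from tables touched by those events and mark them invalid.
        invalidateSince.accept(lastSeenEventId);
        lastSeenEventId = latest;
      }
    }, periodSeconds, periodSeconds, TimeUnit.SECONDS);
  }

  public void stop() {
    scheduler.shutdownNow();
  }
}
{code}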



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19089) Create/Replicate Allocate write-id event

2018-04-10 Thread mahesh kumar behera (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19089?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

mahesh kumar behera updated HIVE-19089:
---
Attachment: HIVE-19089.06.patch

> Create/Replicate Allocate write-id event
> 
>
> Key: HIVE-19089
> URL: https://issues.apache.org/jira/browse/HIVE-19089
> Project: Hive
>  Issue Type: Sub-task
>  Components: repl, Transactions
>Affects Versions: 3.0.0
>Reporter: mahesh kumar behera
>Assignee: mahesh kumar behera
>Priority: Major
>  Labels: ACID, DR, replication
> Fix For: 3.1.0
>
> Attachments: HIVE-19089.01.patch, HIVE-19089.02.patch, 
> HIVE-19089.03.patch, HIVE-19089.04.patch, HIVE-19089.05.patch, 
> HIVE-19089.06.patch
>
>
> *EVENT_ALLOCATE_WRITE_ID*
> *Source Warehouse:*
>  * Create a new event type EVENT_ALLOCATE_WRITE_ID with the related message 
> format etc.
>  * Capture this event when a table write ID is allocated from the sequence 
> table by an ACID operation.
>  * Repl dump should read this event from EventNotificationTable and dump the 
> message.
> *Target Warehouse:*
>  * Repl load should read the event from the dump and get the message.
>  * Validate that the source txn ID from the event is present in the 
> source-target txn ID map. If it is not there, just no-op the event.
>  * If valid, allocate a table write ID from the sequence table.
> *Extend the listener notify event API with two new parameters, dbconn and 
> sqlgenerator, so that the events are added to the notification_log table 
> within the same transaction.* 
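
A compressed sketch of the target-side handling described above, with a plain 
{{Map}} standing in for the source-to-target txn ID mapping; all names here are 
hypothetical, not the actual patch:

{code:java}
import java.util.Map;

/** Illustration only: replay an allocate-write-id event on the target warehouse. */
public final class AllocWriteIdReplayer {

  /** Returns true if a write ID was allocated, false if the event was a no-op. */
  public static boolean replay(Map<Long, Long> sourceToTargetTxnId,
                               long sourceTxnId, String dbName, String tableName) {
    Long targetTxnId = sourceToTargetTxnId.get(sourceTxnId);
    if (targetTxnId == null) {
      // Source txn is unknown on the target: no-op the event, as described above.
      return false;
    }
    allocateTableWriteId(targetTxnId, dbName, tableName);
    return true;
  }

  /** Stub: would allocate from the metastore write-id sequence in a real implementation. */
  private static void allocateTableWriteId(long txnId, String db, String table) { }
}
{code}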



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19154) Poll notification events to invalidate the results cache

2018-04-10 Thread Jason Dere (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19154?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Dere updated HIVE-19154:
--
Attachment: HIVE-19154.1.patch

> Poll notification events to invalidate the results cache
> 
>
> Key: HIVE-19154
> URL: https://issues.apache.org/jira/browse/HIVE-19154
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Jason Dere
>Assignee: Jason Dere
>Priority: Major
> Attachments: HIVE-19154.1.patch
>
>
> Related to the work for HIVE-18609. HIVE-18609 will only invalidate entries 
> in the cache if that query is looked up again, which could potentially leave a 
> lot of undetected invalid entries in the cache taking up space and causing 
> other entries to be evicted. To remove these entries in a more timely fashion, 
> add a background thread that periodically checks the notification events for 
> updates to the tables used in the results cache.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19154) Poll notification events to invalidate the results cache

2018-04-10 Thread Jason Dere (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19154?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Dere updated HIVE-19154:
--
Status: Patch Available  (was: Open)

> Poll notification events to invalidate the results cache
> 
>
> Key: HIVE-19154
> URL: https://issues.apache.org/jira/browse/HIVE-19154
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Jason Dere
>Assignee: Jason Dere
>Priority: Major
> Attachments: HIVE-19154.1.patch
>
>
> Related to the work for HIVE-18609. HIVE-18609 will only invalidate entries 
> in the cache if that query is looked up again, which could potentially leave a 
> lot of undetected invalid entries in the cache taking up space and causing 
> other entries to be evicted. To remove these entries in a more timely fashion, 
> add a background thread that periodically checks the notification events for 
> updates to the tables used in the results cache.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19154) Poll notification events to invalidate the results cache

2018-04-10 Thread Jason Dere (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16432800#comment-16432800
 ] 

Jason Dere commented on HIVE-19154:
---

RB at https://reviews.apache.org/r/66533/
[~thejas] [~gopalv] do you mind taking a look?

> Poll notification events to invalidate the results cache
> 
>
> Key: HIVE-19154
> URL: https://issues.apache.org/jira/browse/HIVE-19154
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Jason Dere
>Assignee: Jason Dere
>Priority: Major
> Attachments: HIVE-19154.1.patch
>
>
> Related to the work for HIVE-18609. HIVE-18609 will only invalidate entries 
> in the cache if that query is looked up again, which could potentially leave a 
> lot of undetected invalid entries in the cache taking up space and causing 
> other entries to be evicted. To remove these entries in a more timely fashion, 
> add a background thread that periodically checks the notification events for 
> updates to the tables used in the results cache.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-16041) HCatalog doesn't delete temp _SCRATCH dir when job failed

2018-04-10 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-16041?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16432813#comment-16432813
 ] 

Hive QA commented on HIVE-16041:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12854802/HIVE-16041.2.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 30 failed/errored test(s), 13260 tests 
executed
*Failed tests:*
{noformat}
TestBeeLineDriver - did not produce a TEST-*.xml file (likely timed out) 
(batchId=253)
TestDbNotificationListener - did not produce a TEST-*.xml file (likely timed 
out) (batchId=247)
TestDummy - did not produce a TEST-*.xml file (likely timed out) (batchId=253)
TestHCatHiveCompatibility - did not produce a TEST-*.xml file (likely timed 
out) (batchId=247)
TestHs2HooksWithMiniKdc - did not produce a TEST-*.xml file (likely timed out) 
(batchId=254)
TestMiniDruidCliDriver - did not produce a TEST-*.xml file (likely timed out) 
(batchId=253)
TestMiniDruidKafkaCliDriver - did not produce a TEST-*.xml file (likely timed 
out) (batchId=253)
TestNegativeCliDriver - did not produce a TEST-*.xml file (likely timed out) 
(batchId=95)

[nopart_insert.q,insert_into_with_schema.q,input41.q,having1.q,create_table_failure3.q,default_constraint_invalid_default_value.q,database_drop_not_empty_restrict.q,windowing_after_orderby.q,orderbysortby.q,subquery_select_distinct2.q,authorization_uri_alterpart_loc.q,udf_last_day_error_1.q,constraint_duplicate_name.q,create_table_failure4.q,alter_tableprops_external_with_notnull_constraint.q,semijoin5.q,udf_format_number_wrong4.q,deletejar.q,exim_11_nonpart_noncompat_sorting.q,show_tables_bad_db2.q,drop_func_nonexistent.q,nopart_load.q,alter_table_non_partitioned_table_cascade.q,check_constraint_subquery.q,load_wrong_fileformat.q,check_constraint_udtf.q,lockneg_try_db_lock_conflict.q,udf_field_wrong_args_len.q,create_table_failure2.q,create_with_fk_constraints_enforced.q,groupby2_map_skew_multi_distinct.q,authorization_update_noupdatepriv.q,show_columns2.q,authorization_insert_noselectpriv.q,orc_replace_columns3_acid.q,compare_double_bigint.q,authorization_set_nonexistent_conf.q,alter_rename_partition_failure3.q,split_sample_wrong_format2.q,create_with_fk_pk_same_tab.q,compare_double_bigint_2.q,authorization_show_roles_no_admin.q,materialized_view_authorization_rebuild_no_grant.q,unionLimit.q,authorization_revoke_table_fail2.q,authorization_insert_noinspriv.q,duplicate_insert3.q,authorization_desc_table_nosel.q,stats_noscan_non_native.q,orc_change_serde_acid.q,create_or_replace_view7.q,exim_07_nonpart_noncompat_ifof.q,create_with_unique_constraints_enforced.q,udf_concat_ws_wrong2.q,fileformat_bad_class.q,merge_negative_2.q,exim_15_part_nonpart.q,authorization_not_owner_drop_view.q,external1.q,authorization_uri_insert.q,create_with_fk_wrong_ref.q,columnstats_tbllvl_incorrect_column.q,authorization_show_parts_nosel.q,authorization_not_owner_drop_tab.q,external2.q,authorization_deletejar.q,temp_table_create_like_partitions.q,udf_greatest_error_1.q,ptf_negative_AggrFuncsWithNoGBYNoPartDef.q,alter_view_as_select_not_exist.q,touch1.q,groupby3_map_skew_multi_distinct.q,insert_into_notnull_constraint.q,exchange_partition_neg_partition_missing.q,groupby_cube_multi_gby.q,columnstats_tbllvl.q,drop_invalid_constraint2.q,alter_table_add_partition.q,update_not_acid.q,archive5.q,alter_table_constraint_invalid_pk_col.q,ivyDownload.q,udf_instr_wrong_type.q,bad_sample_clause.q,authorization_not_owner_drop_tab2.q,authorization_alter_db_owner.q,show_columns1.q,orc_type_promotion3.q,create_view_failure8.q,strict_join.q,udf_add_months_error_1.q,groupby_cube2.q,groupby_cube1.q,groupby_rollup1.q,genericFileFormat.q,invalid_cast_from_binary_4.q,drop_invalid_constraint1.q,serde_regex.q,show_partitions1.q,check_constraint_nonboolean_expr.q,invalid_cast_from_binary_6.q,create_with_multi_pk_constraint.q,udf_field_wrong_type.q,groupby_grouping_sets4.q,groupby_grouping_sets3.q,insertsel_fail.q,udf_locate_wrong_type.q,orc_type_promotion1_acid.q,set_table_property.q,create_or_replace_view2.q,groupby_grouping_sets2.q,alter_view_failure.q,distinct_windowing_failure1.q,invalid_t_alter2.q,alter_table_constraint_invalid_fk_col1.q,invalid_varchar_length_2.q,authorization_show_grant_otheruser_alltabs.q,subquery_windowing_corr.q,compact_non_acid_table.q,authorization_view_4.q,authorization_disallow_transform.q,materialized_view_authorization_rebuild_other.q,authorization_fail_4.q,dbtxnmgr_nodblock.q,set_hiveconf_internal_variable1.q,input_part0_neg.q,udf_printf_wrong3.q,load_orc_negative2.q,druid_buckets.q,archive2.q,authorization_addjar.q,invalid_sum_syntax.q,insert_into_with_schema1.q,udf_add_months_error_2.q,dyn_part_max_per_n
ode.q,authorization_revoke_table_fail1.q,udf_printf_wrong2.q,archive_multi3.q,udf_printf_wrong1.q,subquery_subquery_chain.q,authorization_v

[jira] [Commented] (HIVE-14696) Hive Query Fail with MetaException(message:org.datanucleus.exceptions.NucleusDataStoreException: Size request failed

2018-04-10 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16432815#comment-16432815
 ] 

Hive QA commented on HIVE-14696:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12826858/HIVE-14696.1.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/10122/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/10122/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-10122/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hiveptest/working/scratch/source-prep.sh' failed with exit 
status 1 and output '+ date '+%Y-%m-%d %T.%3N'
2018-04-10 19:29:36.961
+ [[ -n /usr/lib/jvm/java-8-openjdk-amd64 ]]
+ export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
+ export 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ 
PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'MAVEN_OPTS=-Xmx1g '
+ MAVEN_OPTS='-Xmx1g '
+ cd /data/hiveptest/working/
+ tee /data/hiveptest/logs/PreCommit-HIVE-Build-10122/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ git = \s\v\n ]]
+ [[ git = \g\i\t ]]
+ [[ -z master ]]
+ [[ -d apache-github-source-source ]]
+ [[ ! -d apache-github-source-source/.git ]]
+ [[ ! -d apache-github-source-source ]]
+ date '+%Y-%m-%d %T.%3N'
2018-04-10 19:29:36.963
+ cd apache-github-source-source
+ git fetch origin
From https://github.com/apache/hive
   c695c70..8b2d514  branch-3   -> origin/branch-3
+ git reset --hard HEAD
HEAD is now at 4b8c754 HIVE-19153 : Update golden files for few tests
+ git clean -f -d
+ git checkout master
Already on 'master'
Your branch is up-to-date with 'origin/master'.
+ git reset --hard origin/master
HEAD is now at 4b8c754 HIVE-19153 : Update golden files for few tests
+ git merge --ff-only origin/master
Already up-to-date.
+ date '+%Y-%m-%d %T.%3N'
2018-04-10 19:29:40.649
+ rm -rf ../yetus_PreCommit-HIVE-Build-10122
+ mkdir ../yetus_PreCommit-HIVE-Build-10122
+ git gc
+ cp -R . ../yetus_PreCommit-HIVE-Build-10122
+ mkdir /data/hiveptest/logs/PreCommit-HIVE-Build-10122/yetus
+ patchCommandPath=/data/hiveptest/working/scratch/smart-apply-patch.sh
+ patchFilePath=/data/hiveptest/working/scratch/build.patch
+ [[ -f /data/hiveptest/working/scratch/build.patch ]]
+ chmod +x /data/hiveptest/working/scratch/smart-apply-patch.sh
+ /data/hiveptest/working/scratch/smart-apply-patch.sh 
/data/hiveptest/working/scratch/build.patch
error: 
a/metastore/src/java/org/apache/hadoop/hive/metastore/RetryingMetaStoreClient.java:
 does not exist in index
error: 
metastore/src/java/org/apache/hadoop/hive/metastore/RetryingMetaStoreClient.java:
 does not exist in index
error: src/java/org/apache/hadoop/hive/metastore/RetryingMetaStoreClient.java: 
does not exist in index
The patch does not appear to apply with p0, p1, or p2
+ exit 1
'
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12826858 - PreCommit-HIVE-Build

>  Hive Query Fail with 
> MetaException(message:org.datanucleus.exceptions.NucleusDataStoreException: 
> Size request failed
> -
>
> Key: HIVE-14696
> URL: https://issues.apache.org/jira/browse/HIVE-14696
> Project: Hive
>  Issue Type: Bug
>  Components: Metastore
>Reporter: Oleksiy Sayankin
>Assignee: Oleksiy Sayankin
>Priority: Major
> Fix For: 3.1.0
>
> Attachments: HIVE-14696.1.patch
>
>
> We have a customer who is on Hive 0.13 and the queries seem to be failing 
> with exception:
> {code}
> 2016-08-30 00:22:58,965 ERROR [main]: metadata.Hive
> (Hive.java:getPartition(1619)) -
> MetaException(message:org.datanucleus.exceptions.NucleusDataStoreException:
> Size request failed : SELECT COUNT(*) FROM `SORT_COLS` THIS WHERE
> THIS.`SD_ID`=? AND THIS.`INTEGER_IDX`>=0)
> at
> org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$get_partition_with_auth_result$get_partition_with_auth_resultStandardScheme.read(ThriftHiveMetastore.java:54171)
> at
> org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$get_partition_with_auth_result$get_partition_with_auth_resultStandardScheme.read(ThriftHiveMetastore.java:54148)
> at
> org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$get_partition_with_auth_result.read(ThriftHiveMetastore.java:54

[jira] [Commented] (HIVE-14981) Eliminate unnecessary MapJoin restriction in HIVE-11394

2018-04-10 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-14981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16432822#comment-16432822
 ] 

Hive QA commented on HIVE-14981:




Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12833777/HIVE-14981.02.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: 
https://builds.apache.org/job/PreCommit-HIVE-Build/10123/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/10123/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-10123/

Messages:
{noformat}
 This message was trimmed, see log for full details 
Falling back to three-way merge...
Applied patch to 
'ql/src/java/org/apache/hadoop/hive/ql/plan/VectorMapJoinDesc.java' cleanly.
error: patch failed: 
ql/src/test/results/clientpositive/llap/vector_between_columns.q.out:115
Falling back to three-way merge...
Applied patch to 
'ql/src/test/results/clientpositive/llap/vector_between_columns.q.out' with 
conflicts.
error: patch failed: 
ql/src/test/results/clientpositive/llap/vector_binary_join_groupby.q.out:461
Falling back to three-way merge...
Applied patch to 
'ql/src/test/results/clientpositive/llap/vector_binary_join_groupby.q.out' with 
conflicts.
error: patch failed: 
ql/src/test/results/clientpositive/llap/vector_char_mapjoin1.q.out:179
Falling back to three-way merge...
Applied patch to 
'ql/src/test/results/clientpositive/llap/vector_char_mapjoin1.q.out' with 
conflicts.
error: patch failed: 
ql/src/test/results/clientpositive/llap/vector_decimal_mapjoin.q.out:126
Falling back to three-way merge...
Applied patch to 
'ql/src/test/results/clientpositive/llap/vector_decimal_mapjoin.q.out' with 
conflicts.
error: patch failed: 
ql/src/test/results/clientpositive/llap/vector_include_no_sel.q.out:250
Falling back to three-way merge...
Applied patch to 
'ql/src/test/results/clientpositive/llap/vector_include_no_sel.q.out' with 
conflicts.
error: patch failed: 
ql/src/test/results/clientpositive/llap/vector_inner_join.q.out:88
Falling back to three-way merge...
Applied patch to 
'ql/src/test/results/clientpositive/llap/vector_inner_join.q.out' with 
conflicts.
error: patch failed: 
ql/src/test/results/clientpositive/llap/vector_interval_mapjoin.q.out:229
Falling back to three-way merge...
Applied patch to 
'ql/src/test/results/clientpositive/llap/vector_interval_mapjoin.q.out' with 
conflicts.
error: patch failed: 
ql/src/test/results/clientpositive/llap/vector_left_outer_join2.q.out:318
Falling back to three-way merge...
Applied patch to 
'ql/src/test/results/clientpositive/llap/vector_left_outer_join2.q.out' with 
conflicts.
error: patch failed: 
ql/src/test/results/clientpositive/llap/vector_leftsemi_mapjoin.q.out:3365
Falling back to three-way merge...
Applied patch to 
'ql/src/test/results/clientpositive/llap/vector_leftsemi_mapjoin.q.out' with 
conflicts.
error: patch failed: 
ql/src/test/results/clientpositive/llap/vector_mapjoin_reduce.q.out:155
Falling back to three-way merge...
Applied patch to 
'ql/src/test/results/clientpositive/llap/vector_mapjoin_reduce.q.out' with 
conflicts.
error: patch failed: 
ql/src/test/results/clientpositive/llap/vector_nullsafe_join.q.out:87
Falling back to three-way merge...
Applied patch to 
'ql/src/test/results/clientpositive/llap/vector_nullsafe_join.q.out' with 
conflicts.
error: patch failed: 
ql/src/test/results/clientpositive/llap/vector_outer_join0.q.out:113
Falling back to three-way merge...
Applied patch to 
'ql/src/test/results/clientpositive/llap/vector_outer_join0.q.out' with 
conflicts.
error: patch failed: 
ql/src/test/results/clientpositive/llap/vector_outer_join1.q.out:273
Falling back to three-way merge...
Applied patch to 
'ql/src/test/results/clientpositive/llap/vector_outer_join1.q.out' with 
conflicts.
error: patch failed: 
ql/src/test/results/clientpositive/llap/vector_outer_join2.q.out:289
Falling back to three-way merge...
Applied patch to 
'ql/src/test/results/clientpositive/llap/vector_outer_join2.q.out' with 
conflicts.
error: patch failed: 
ql/src/test/results/clientpositive/llap/vectorized_mapjoin.q.out:59
Falling back to three-way merge...
Applied patch to 
'ql/src/test/results/clientpositive/llap/vectorized_mapjoin.q.out' with 
conflicts.
error: patch failed: 
ql/src/test/results/clientpositive/spark/vector_decimal_mapjoin.q.out:170
Falling back to three-way merge...
Applied patch to 
'ql/src/test/results/clientpositive/spark/vector_decimal_mapjoin.q.out' with 
conflicts.
error: patch failed: 
ql/src/test/results/clientpositive/spark/vector_inner_join.q.out:138
Falling back to three-way merge...
Applied patch to 
'ql/src/test/results/clientpositive/spark/vector_inner_join.q.out' with 
conflicts.
error: patch failed: 
ql/src/test/results/clientpositive/spark/vector_outer_join0.q.out:155
Falling back to three-way merge...
Applied patch to 
'ql/

[jira] [Updated] (HIVE-19145) Stabilize statsoptimizer.q test

2018-04-10 Thread Vineet Garg (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19145?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-19145:
---
Fix Version/s: (was: 3.1.0)
   3.0.0

> Stabilize statsoptimizer.q test
> ---
>
> Key: HIVE-19145
> URL: https://issues.apache.org/jira/browse/HIVE-19145
> Project: Hive
>  Issue Type: Sub-task
>  Components: Test
>Reporter: Ashutosh Chauhan
>Assignee: Ashutosh Chauhan
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HIVE-19145.1.patch
>
>
> Uses current_date() which is prone to fail.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19153) Update golden files for few tests

2018-04-10 Thread Vineet Garg (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19153?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-19153:
---
Issue Type: Sub-task  (was: Test)
Parent: HIVE-19142

> Update golden files for few tests
> -
>
> Key: HIVE-19153
> URL: https://issues.apache.org/jira/browse/HIVE-19153
> Project: Hive
>  Issue Type: Sub-task
>  Components: Test
>Reporter: Ashutosh Chauhan
>Assignee: Ashutosh Chauhan
>Priority: Major
> Fix For: 3.1.0
>
> Attachments: HIVE-19153.patch
>
>
> Some golden file updates which were missed since many tests were failing.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19153) Update golden files for few tests

2018-04-10 Thread Vineet Garg (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19153?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-19153:
---
Fix Version/s: (was: 3.1.0)
   3.0.0

> Update golden files for few tests
> -
>
> Key: HIVE-19153
> URL: https://issues.apache.org/jira/browse/HIVE-19153
> Project: Hive
>  Issue Type: Sub-task
>  Components: Test
>Reporter: Ashutosh Chauhan
>Assignee: Ashutosh Chauhan
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HIVE-19153.patch
>
>
> Some golden file updates which were missed since many tests were failing.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19146) Delete dangling q.out

2018-04-10 Thread Vineet Garg (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19146?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-19146:
---
Fix Version/s: (was: 3.1.0)
   3.0.0

> Delete dangling q.out 
> --
>
> Key: HIVE-19146
> URL: https://issues.apache.org/jira/browse/HIVE-19146
> Project: Hive
>  Issue Type: Sub-task
>  Components: Test
>Reporter: Ashutosh Chauhan
>Assignee: Ashutosh Chauhan
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HIVE-19146.patch
>
>
> Fails TestDanglingQOuts



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19146) Delete dangling q.out

2018-04-10 Thread Vineet Garg (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19146?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-19146:
---
Issue Type: Sub-task  (was: Test)
Parent: HIVE-19142

> Delete dangling q.out 
> --
>
> Key: HIVE-19146
> URL: https://issues.apache.org/jira/browse/HIVE-19146
> Project: Hive
>  Issue Type: Sub-task
>  Components: Test
>Reporter: Ashutosh Chauhan
>Assignee: Ashutosh Chauhan
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HIVE-19146.patch
>
>
> Fails TestDanglingQOuts



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19146) Delete dangling q.out

2018-04-10 Thread Vineet Garg (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16432834#comment-16432834
 ] 

Vineet Garg commented on HIVE-19146:


Pushed to branch-3

> Delete dangling q.out 
> --
>
> Key: HIVE-19146
> URL: https://issues.apache.org/jira/browse/HIVE-19146
> Project: Hive
>  Issue Type: Sub-task
>  Components: Test
>Reporter: Ashutosh Chauhan
>Assignee: Ashutosh Chauhan
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HIVE-19146.patch
>
>
> Fails TestDanglingQOuts



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19145) Stabilize statsoptimizer.q test

2018-04-10 Thread Vineet Garg (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19145?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16432835#comment-16432835
 ] 

Vineet Garg commented on HIVE-19145:


Pushed to branch-3

> Stabilize statsoptimizer.q test
> ---
>
> Key: HIVE-19145
> URL: https://issues.apache.org/jira/browse/HIVE-19145
> Project: Hive
>  Issue Type: Sub-task
>  Components: Test
>Reporter: Ashutosh Chauhan
>Assignee: Ashutosh Chauhan
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HIVE-19145.1.patch
>
>
> Uses current_date() which is prone to fail.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19153) Update golden files for few tests

2018-04-10 Thread Vineet Garg (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19153?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-19153:
---
Description: 
Some golden file updates which were missed since many tests were failing.

Following test golden files were updated:

acid_table_stats
bucket_map_join_tez_empty
default_constraint
insert_values_orig_table_use_metadata
tez_smb_1

  was:Some golden file updates which were missed since many tests were failing.


> Update golden files for few tests
> -
>
> Key: HIVE-19153
> URL: https://issues.apache.org/jira/browse/HIVE-19153
> Project: Hive
>  Issue Type: Sub-task
>  Components: Test
>Reporter: Ashutosh Chauhan
>Assignee: Ashutosh Chauhan
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HIVE-19153.patch
>
>
> Some golden file updates which were missed since many tests were failing.
> Following test golden files were updated:
> acid_table_stats
> bucket_map_join_tez_empty
> default_constraint
> insert_values_orig_table_use_metadata
> tez_smb_1



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19153) Update golden files for few tests

2018-04-10 Thread Vineet Garg (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19153?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16432839#comment-16432839
 ] 

Vineet Garg commented on HIVE-19153:


Pushed to branch-3

> Update golden files for few tests
> -
>
> Key: HIVE-19153
> URL: https://issues.apache.org/jira/browse/HIVE-19153
> Project: Hive
>  Issue Type: Sub-task
>  Components: Test
>Reporter: Ashutosh Chauhan
>Assignee: Ashutosh Chauhan
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HIVE-19153.patch
>
>
> Some golden file updates which were missed since many tests were failing.
> Following test golden files were updated:
> acid_table_stats
> bucket_map_join_tez_empty
> default_constraint
> insert_values_orig_table_use_metadata
> tez_smb_1



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19142) Umbrella: branch-3 failing tests

2018-04-10 Thread Vineet Garg (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19142?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vineet Garg updated HIVE-19142:
---
Description: 
This is the list [~alangates] specified on HIVE-19135 of non-OOM test 
failures:

*Errors*:
TestAcidOnTez.testGetSplitsLocks
TestJdbcWithLocalClusterSpark.testSparkQuery
TestJdbcWithLocalClusterSpark.testTempTable
TestJdbcWithMiniLlap.testLlapInputFormatEndToEnd
TestMTQueries.testMTQueries1
TestMultiSessionsHS2WithLocalClusterSpark.testSparkQuery
TestNegativeCliDriver.alter_notnull_constraint_violation
TestNegativeCliDriver.insert_into_acid_notnull
TestNegativeCliDriver.insert_into_notnull_constraint
TestNegativeCliDriver.insert_multi_into_notnull
TestNegativeCliDriver.insert_overwrite_notnull_constraint
TestNegativeCliDriver.update_notnull_constraint

*Failures*:
TestBlobstoreCliDriver.insert_into_dynamic_partitions
TestBlobstoreCliDriver.insert_overwrite_directory
TestBlobstoreCliDriver.insert_overwrite_dynamic_partitions
-TestCliDriver.acid_table_stats-
TestCliDriver.auto_sortmerge_join_2
TestCliDriver.avro_alter_table_update_columns
TestCliDriver.avrotblsjoin
TestCliDriver.dbtxnmgr_showlocks
TestCliDriver.orc_merge10
TestCliDriver.orc_schema_evolution_float
TestCliDriver.parquet_ppd_multifiles
TestCliDriver.schema_evol_par_vec_table_dictionary_encoding
TestCliDriver.schema_evol_par_vec_table_non_dictionary_encoding
TestCliDriver.selectindate
-TestCliDriver.statsoptimizer-
TestCliDriver.vector_bround
TestCliDriver.vector_case_when_1
TestCliDriver.vector_coalesce_2
TestCliDriver.vector_coalesce_3
TestCliDriver.vector_interval_1
TestCliDriver.vectorized_parquet_types
TestMetastoreVersion.testMetastoreVersion
TestMetastoreVersion.testVersionMatching
TestMiniDruidCliDriver.druidkafkamini_basic
TestMiniLlapCliDriver.llap_smb
TestMiniLlapCliDriver.unionDistinct_1
TestMiniTezCliDriver.explainanalyze_5
-TestNegativeCliDriver.authorization_caseinsensitivity-
-TestNegativeCliDriver.authorization_fail_1-
-TestNegativeCliDriver.authorization_grant_table_dup-
-TestNegativeCliDriver.authorization_role_case-
-TestNegativeCliDriver.authorization_role_grant_nosuchrole-
-TestNegativeCliDriver.authorization_table_grant_nosuchrole-
TestNegativeCliDriver.subquery_subquery_chain
TestSessionState.testCreatePath
TestSessionState.testCreatePath
TestSparkStatistics.testSparkStatistics


  was:
This is the list [~alangates] specified on HIVE-19135 which are non-oom test 
failures:

*Errors*:
TestAcidOnTez.testGetSplitsLocks
TestJdbcWithLocalClusterSpark.testSparkQuery
TestJdbcWithLocalClusterSpark.testTempTable
TestJdbcWithMiniLlap.testLlapInputFormatEndToEnd
TestMTQueries.testMTQueries1
TestMultiSessionsHS2WithLocalClusterSpark.testSparkQuery
TestNegativeCliDriver.alter_notnull_constraint_violation
TestNegativeCliDriver.insert_into_acid_notnull
TestNegativeCliDriver.insert_into_notnull_constraint
TestNegativeCliDriver.insert_multi_into_notnull
TestNegativeCliDriver.insert_overwrite_notnull_constraint
TestNegativeCliDriver.update_notnull_constraint

*Failures*:
TestBlobstoreCliDriver.insert_into_dynamic_partitions
TestBlobstoreCliDriver.insert_overwrite_directory
TestBlobstoreCliDriver.insert_overwrite_dynamic_partitions
TestCliDriver.acid_table_stats
TestCliDriver.auto_sortmerge_join_2
TestCliDriver.avro_alter_table_update_columns
TestCliDriver.avrotblsjoin
TestCliDriver.dbtxnmgr_showlocks
TestCliDriver.orc_merge10
TestCliDriver.orc_schema_evolution_float
TestCliDriver.parquet_ppd_multifiles
TestCliDriver.schema_evol_par_vec_table_dictionary_encoding
TestCliDriver.schema_evol_par_vec_table_non_dictionary_encoding
TestCliDriver.selectindate
TestCliDriver.statsoptimizer
TestCliDriver.vector_bround
TestCliDriver.vector_case_when_1
TestCliDriver.vector_coalesce_2
TestCliDriver.vector_coalesce_3
TestCliDriver.vector_interval_1
TestCliDriver.vectorized_parquet_types
TestMetastoreVersion.testMetastoreVersion
TestMetastoreVersion.testVersionMatching
TestMiniDruidCliDriver.druidkafkamini_basic
TestMiniLlapCliDriver.llap_smb
TestMiniLlapCliDriver.unionDistinct_1
TestMiniTezCliDriver.explainanalyze_5
-TestNegativeCliDriver.authorization_caseinsensitivity-
-TestNegativeCliDriver.authorization_fail_1-
-TestNegativeCliDriver.authorization_grant_table_dup-
-TestNegativeCliDriver.authorization_role_case-
-TestNegativeCliDriver.authorization_role_grant_nosuchrole-
-TestNegativeCliDriver.authorization_table_grant_nosuchrole-
TestNegativeCliDriver.subquery_subquery_chain
TestSessionState.testCreatePath
TestSessionState.testCreatePath
TestSparkStatistics.testSparkStatistics



> Umbrella: branch-3 failing tests
> 
>
> Key: HIVE-19142
> URL: https://issues.apache.org/jira/browse/HIVE-19142
> Project: Hive
>  Issue Type: Test
>Reporter: Vineet Garg
>Priority: Major
>
> This is the list [~alangates] specified on HIVE-19135 which are

[jira] [Commented] (HIVE-19141) TestNegativeCliDriver insert_into_notnull_constraint, insert_into_acid_notnull failing

2018-04-10 Thread Vineet Garg (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19141?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16432843#comment-16432843
 ] 

Vineet Garg commented on HIVE-19141:


[~vbeshka] Would you mind confirming if this was caused by HIVE-18727?

> TestNegativeCliDriver insert_into_notnull_constraint, 
> insert_into_acid_notnull failing
> --
>
> Key: HIVE-19141
> URL: https://issues.apache.org/jira/browse/HIVE-19141
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Major
> Fix For: 3.0.0, 3.1.0
>
>
> These tests have been consistently failing for a while. I suspect HIVE-18727 
> has caused these failures. HIVE-18727 changed the code to throw ERROR instead 
> of EXCEPTION if constraints are violated. I guess the Negative CLI driver 
> doesn't handle errors.
> Following is the full list of related failures:
> TestNegativeCliDriver.alter_notnull_constraint_violation
> TestNegativeCliDriver.insert_into_acid_notnull 
> TestNegativeCliDriver.insert_into_notnull_constraint 
> TestNegativeCliDriver.insert_multi_into_notnull 
> TestNegativeCliDriver.insert_overwrite_notnull_constraint 
> TestNegativeCliDriver.update_notnull_constraint
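
A minimal sketch of the kind of change the negative driver needs, assuming the 
constraint violation now surfaces as a {{java.lang.Error}} subclass, which a 
plain {{catch (Exception e)}} does not see; the class below is illustrative, 
not the actual driver code:

{code:java}
/** Illustration only: a negative-test runner must catch Errors as well as
 *  Exceptions, otherwise an expected constraint violation aborts the run. */
public final class NegativeRunner {

  public static String runAndCapture(Runnable query) {
    try {
      query.run();
      return "UNEXPECTED SUCCESS";
    } catch (Exception e) {
      // old behaviour: only Exceptions were recorded in the golden output
      return "FAILED: " + e.getClass().getSimpleName();
    } catch (Error e) {
      // new: a constraint violation thrown as an Error is recorded the same way,
      // so the expected message still lands in the output file
      return "FAILED: " + e.getClass().getSimpleName();
    }
  }
}
{code}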



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19138) Results cache: allow queries waiting on pending cache entries to check cache again if pending query fails

2018-04-10 Thread Deepak Jaiswal (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19138?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16432853#comment-16432853
 ] 

Deepak Jaiswal commented on HIVE-19138:
---

+1 pending results.

> Results cache: allow queries waiting on pending cache entries to check cache 
> again if pending query fails
> -
>
> Key: HIVE-19138
> URL: https://issues.apache.org/jira/browse/HIVE-19138
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Jason Dere
>Assignee: Jason Dere
>Priority: Major
> Attachments: HIVE-19138.1.patch
>
>
> HIVE-18846 allows the results cache to refer to currently executing queries 
> so that another query can wait for these results to become ready in the 
> results cache. If the pending query fails then Hive will automatically skip 
> the cache and do the full query compilation. Make a fix here so that if the 
> pending query fails, Hive will still try to check the cache again in case the 
> cache has another cached/pending result that can be used to answer the query.
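
A schematic sketch of that behaviour, with {{lookup}} and {{waitForCompletion}} 
as hypothetical stand-ins for the results cache API:

{code:java}
/** Illustration only: keep consulting the results cache while pending entries
 *  fail, instead of falling straight back to full query compilation. */
public final class CacheLookupLoop {

  enum Status { READY, PENDING_FAILED }

  static class Entry { }

  /** Returns a usable entry, or null if the caller must compile the query itself. */
  public static Entry findUsableEntry(String queryKey) {
    Entry entry;
    while ((entry = lookup(queryKey)) != null) {
      if (waitForCompletion(entry) == Status.READY) {
        return entry;   // a finished cached result answers the query
      }
      // The pending query behind this entry failed and the entry is gone from the
      // cache, so loop and check again for another cached/pending result.
    }
    return null;
  }

  /** Stub: would consult the real results cache. */
  private static Entry lookup(String queryKey) { return null; }

  /** Stub: would block until the pending entry is ready or its query fails. */
  private static Status waitForCompletion(Entry e) { return Status.READY; }
}
{code}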



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HIVE-19155) Day time saving cause Druid inserts to fail with org.apache.hive.druid.io.druid.java.util.common.UOE: Cannot add overlapping segments

2018-04-10 Thread slim bouguerra (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19155?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

slim bouguerra updated HIVE-19155:
--
Status: Patch Available  (was: Open)

> Day time saving cause Druid inserts to fail with 
> org.apache.hive.druid.io.druid.java.util.common.UOE: Cannot add overlapping 
> segments
> -
>
> Key: HIVE-19155
> URL: https://issues.apache.org/jira/browse/HIVE-19155
> Project: Hive
>  Issue Type: Bug
>  Components: Druid integration
>Reporter: slim bouguerra
>Assignee: slim bouguerra
>Priority: Major
>
> If you try to insert data around the daylight saving time change, the query 
> fails with the following exception:
> {code}
> 2018-04-10T11:24:58,836 ERROR [065fdaa2-85f9-4e49-adaf-3dc14d51be90 main] 
> exec.DDLTask: Failed
> org.apache.hadoop.hive.ql.metadata.HiveException: 
> org.apache.hive.druid.io.druid.java.util.common.UOE: Cannot add overlapping 
> segments [2015-03-08T05:00:00.000Z/2015-03-09T05:00:00.000Z and 
> 2015-03-09T04:00:00.000Z/2015-03-10T04:00:00.000Z] with the same version 
> [2018-04-10T11:24:48.388-07:00]
> at org.apache.hadoop.hive.ql.metadata.Hive.createTable(Hive.java:914) 
> ~[hive-exec-3.1.0-SNAPSHOT.jar:3.1.0-SNAPSHOT]
> at org.apache.hadoop.hive.ql.metadata.Hive.createTable(Hive.java:919) 
> ~[hive-exec-3.1.0-SNAPSHOT.jar:3.1.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.ql.exec.DDLTask.createTable(DDLTask.java:4831) 
> [hive-exec-3.1.0-SNAPSHOT.jar:3.1.0-SNAPSHOT]
> at org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:394) 
> [hive-exec-3.1.0-SNAPSHOT.jar:3.1.0-SNAPSHOT]
> at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:205) 
> [hive-exec-3.1.0-SNAPSHOT.jar:3.1.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:97) 
> [hive-exec-3.1.0-SNAPSHOT.jar:3.1.0-SNAPSHOT]
> at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2443) 
> [hive-exec-3.1.0-SNAPSHOT.jar:3.1.0-SNAPSHOT]
> at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:2114) 
> [hive-exec-3.1.0-SNAPSHOT.jar:3.1.0-SNAPSHOT]
> at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1797) 
> [hive-exec-3.1.0-SNAPSHOT.jar:3.1.0-SNAPSHOT]
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1538) 
> [hive-exec-3.1.0-SNAPSHOT.jar:3.1.0-SNAPSHOT]
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1532) 
> [hive-exec-3.1.0-SNAPSHOT.jar:3.1.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:157) 
> [hive-exec-3.1.0-SNAPSHOT.jar:3.1.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:204) 
> [hive-exec-3.1.0-SNAPSHOT.jar:3.1.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:239) 
> [hive-cli-3.1.0-SNAPSHOT.jar:?]
> at 
> org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:188) 
> [hive-cli-3.1.0-SNAPSHOT.jar:?]
> at 
> org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:402) 
> [hive-cli-3.1.0-SNAPSHOT.jar:?]
> at 
> org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:335) 
> [hive-cli-3.1.0-SNAPSHOT.jar:?]
> at 
> org.apache.hadoop.hive.ql.QTestUtil.executeClientInternal(QTestUtil.java:1455)
>  [hive-it-util-3.1.0-SNAPSHOT.jar:3.1.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.ql.QTestUtil.executeClient(QTestUtil.java:1429) 
> [hive-it-util-3.1.0-SNAPSHOT.jar:3.1.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.cli.control.CoreCliDriver.runTest(CoreCliDriver.java:177)
>  [hive-it-util-3.1.0-SNAPSHOT.jar:3.1.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.cli.control.CliAdapter.runTest(CliAdapter.java:104) 
> [hive-it-util-3.1.0-SNAPSHOT.jar:3.1.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.cli.TestMiniDruidCliDriver.testCliDriver(TestMiniDruidCliDriver.java:59)
>  [test-classes/:?]
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
> ~[?:1.8.0_92]
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> ~[?:1.8.0_92]
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  ~[?:1.8.0_92]
> {code}
> You can reproduce this using the following DDL 
> {code}
> create database druid_test;
> use druid_test;
> create table test_table(`timecolumn` timestamp, `userid` string, `num_l` 
> float);
> insert into test_table values ('2015-03-08 00:00:00', 'i1-start', 4);
> insert into test_table values ('2015-03-08 23:59:59', 'i1-end', 1);
> insert into test_table values ('2015-03-09 00:00:00', 'i2-start', 4);
> insert into test_table values ('2015-03-09 
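
For context, a self-contained java.time illustration (not taken from the 
HIVE-19155 patch) of why day segments built from local midnights overlap across 
the 2015-03-08 DST change; an Eastern-time session zone is assumed, matching the 
05:00Z/04:00Z instants in the error above:

{code:java}
import java.time.Duration;
import java.time.Instant;
import java.time.LocalDate;
import java.time.ZoneId;

/** Illustration only: mixing local-midnight segment starts with fixed 24h lengths
 *  produces overlapping UTC intervals across a daylight-saving change. */
public class DstSegmentOverlap {
  public static void main(String[] args) {
    ZoneId zone = ZoneId.of("America/New_York");   // assumed session time zone
    Instant day1Start = LocalDate.of(2015, 3, 8).atStartOfDay(zone).toInstant(); // 2015-03-08T05:00:00Z
    Instant day2Start = LocalDate.of(2015, 3, 9).atStartOfDay(zone).toInstant(); // 2015-03-09T04:00:00Z
    Instant day1End = day1Start.plus(Duration.ofHours(24));                      // 2015-03-09T05:00:00Z

    // day1 = [05:00Z .. 05:00Z next day), but day2 already starts at 04:00Z,
    // because 2015-03-08 has only 23 local hours: the segments overlap by one hour.
    System.out.println("overlap: " + day1End.isAfter(day2Start));                // true
  }
}
{code}

Bucketing the segment boundaries in UTC, or using the zone's actual day length 
instead of a fixed 24 hours, avoids the overlap.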

[jira] [Assigned] (HIVE-19155) Day time saving cause Druid inserts to fail with org.apache.hive.druid.io.druid.java.util.common.UOE: Cannot add overlapping segments

2018-04-10 Thread slim bouguerra (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19155?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

slim bouguerra reassigned HIVE-19155:
-


> Day time saving cause Druid inserts to fail with 
> org.apache.hive.druid.io.druid.java.util.common.UOE: Cannot add overlapping 
> segments
> -
>
> Key: HIVE-19155
> URL: https://issues.apache.org/jira/browse/HIVE-19155
> Project: Hive
>  Issue Type: Bug
>  Components: Druid integration
>Reporter: slim bouguerra
>Assignee: slim bouguerra
>Priority: Major
>
> If you try to insert data around the daylight saving time change, the query 
> fails with the following exception:
> {code}
> 2018-04-10T11:24:58,836 ERROR [065fdaa2-85f9-4e49-adaf-3dc14d51be90 main] 
> exec.DDLTask: Failed
> org.apache.hadoop.hive.ql.metadata.HiveException: 
> org.apache.hive.druid.io.druid.java.util.common.UOE: Cannot add overlapping 
> segments [2015-03-08T05:00:00.000Z/2015-03-09T05:00:00.000Z and 
> 2015-03-09T04:00:00.000Z/2015-03-10T04:00:00.000Z] with the same version 
> [2018-04-10T11:24:48.388-07:00]
> at org.apache.hadoop.hive.ql.metadata.Hive.createTable(Hive.java:914) 
> ~[hive-exec-3.1.0-SNAPSHOT.jar:3.1.0-SNAPSHOT]
> at org.apache.hadoop.hive.ql.metadata.Hive.createTable(Hive.java:919) 
> ~[hive-exec-3.1.0-SNAPSHOT.jar:3.1.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.ql.exec.DDLTask.createTable(DDLTask.java:4831) 
> [hive-exec-3.1.0-SNAPSHOT.jar:3.1.0-SNAPSHOT]
> at org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:394) 
> [hive-exec-3.1.0-SNAPSHOT.jar:3.1.0-SNAPSHOT]
> at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:205) 
> [hive-exec-3.1.0-SNAPSHOT.jar:3.1.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:97) 
> [hive-exec-3.1.0-SNAPSHOT.jar:3.1.0-SNAPSHOT]
> at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2443) 
> [hive-exec-3.1.0-SNAPSHOT.jar:3.1.0-SNAPSHOT]
> at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:2114) 
> [hive-exec-3.1.0-SNAPSHOT.jar:3.1.0-SNAPSHOT]
> at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1797) 
> [hive-exec-3.1.0-SNAPSHOT.jar:3.1.0-SNAPSHOT]
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1538) 
> [hive-exec-3.1.0-SNAPSHOT.jar:3.1.0-SNAPSHOT]
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1532) 
> [hive-exec-3.1.0-SNAPSHOT.jar:3.1.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:157) 
> [hive-exec-3.1.0-SNAPSHOT.jar:3.1.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:204) 
> [hive-exec-3.1.0-SNAPSHOT.jar:3.1.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:239) 
> [hive-cli-3.1.0-SNAPSHOT.jar:?]
> at 
> org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:188) 
> [hive-cli-3.1.0-SNAPSHOT.jar:?]
> at 
> org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:402) 
> [hive-cli-3.1.0-SNAPSHOT.jar:?]
> at 
> org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:335) 
> [hive-cli-3.1.0-SNAPSHOT.jar:?]
> at 
> org.apache.hadoop.hive.ql.QTestUtil.executeClientInternal(QTestUtil.java:1455)
>  [hive-it-util-3.1.0-SNAPSHOT.jar:3.1.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.ql.QTestUtil.executeClient(QTestUtil.java:1429) 
> [hive-it-util-3.1.0-SNAPSHOT.jar:3.1.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.cli.control.CoreCliDriver.runTest(CoreCliDriver.java:177)
>  [hive-it-util-3.1.0-SNAPSHOT.jar:3.1.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.cli.control.CliAdapter.runTest(CliAdapter.java:104) 
> [hive-it-util-3.1.0-SNAPSHOT.jar:3.1.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.cli.TestMiniDruidCliDriver.testCliDriver(TestMiniDruidCliDriver.java:59)
>  [test-classes/:?]
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
> ~[?:1.8.0_92]
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> ~[?:1.8.0_92]
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  ~[?:1.8.0_92]
> {code}
> You can reproduce this using the following DDL 
> {code}
> create database druid_test;
> use druid_test;
> create table test_table(`timecolumn` timestamp, `userid` string, `num_l` 
> float);
> insert into test_table values ('2015-03-08 00:00:00', 'i1-start', 4);
> insert into test_table values ('2015-03-08 23:59:59', 'i1-end', 1);
> insert into test_table values ('2015-03-09 00:00:00', 'i2-start', 4);
> insert into test_table values ('2015-03-09 23:59:59', 'i2-end', 1);
> insert 

[jira] [Updated] (HIVE-19155) Day time saving cause Druid inserts to fail with org.apache.hive.druid.io.druid.java.util.common.UOE: Cannot add overlapping segments

2018-04-10 Thread slim bouguerra (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19155?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

slim bouguerra updated HIVE-19155:
--
Attachment: HIVE-19155.patch

> Day time saving cause Druid inserts to fail with 
> org.apache.hive.druid.io.druid.java.util.common.UOE: Cannot add overlapping 
> segments
> -
>
> Key: HIVE-19155
> URL: https://issues.apache.org/jira/browse/HIVE-19155
> Project: Hive
>  Issue Type: Bug
>  Components: Druid integration
>Reporter: slim bouguerra
>Assignee: slim bouguerra
>Priority: Major
> Attachments: HIVE-19155.patch
>
>
> If you try to insert data around the daylight saving time change, the query 
> fails with the following exception:
> {code}
> 2018-04-10T11:24:58,836 ERROR [065fdaa2-85f9-4e49-adaf-3dc14d51be90 main] 
> exec.DDLTask: Failed
> org.apache.hadoop.hive.ql.metadata.HiveException: 
> org.apache.hive.druid.io.druid.java.util.common.UOE: Cannot add overlapping 
> segments [2015-03-08T05:00:00.000Z/2015-03-09T05:00:00.000Z and 
> 2015-03-09T04:00:00.000Z/2015-03-10T04:00:00.000Z] with the same version 
> [2018-04-10T11:24:48.388-07:00]
> at org.apache.hadoop.hive.ql.metadata.Hive.createTable(Hive.java:914) 
> ~[hive-exec-3.1.0-SNAPSHOT.jar:3.1.0-SNAPSHOT]
> at org.apache.hadoop.hive.ql.metadata.Hive.createTable(Hive.java:919) 
> ~[hive-exec-3.1.0-SNAPSHOT.jar:3.1.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.ql.exec.DDLTask.createTable(DDLTask.java:4831) 
> [hive-exec-3.1.0-SNAPSHOT.jar:3.1.0-SNAPSHOT]
> at org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:394) 
> [hive-exec-3.1.0-SNAPSHOT.jar:3.1.0-SNAPSHOT]
> at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:205) 
> [hive-exec-3.1.0-SNAPSHOT.jar:3.1.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:97) 
> [hive-exec-3.1.0-SNAPSHOT.jar:3.1.0-SNAPSHOT]
> at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2443) 
> [hive-exec-3.1.0-SNAPSHOT.jar:3.1.0-SNAPSHOT]
> at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:2114) 
> [hive-exec-3.1.0-SNAPSHOT.jar:3.1.0-SNAPSHOT]
> at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1797) 
> [hive-exec-3.1.0-SNAPSHOT.jar:3.1.0-SNAPSHOT]
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1538) 
> [hive-exec-3.1.0-SNAPSHOT.jar:3.1.0-SNAPSHOT]
> at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1532) 
> [hive-exec-3.1.0-SNAPSHOT.jar:3.1.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:157) 
> [hive-exec-3.1.0-SNAPSHOT.jar:3.1.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:204) 
> [hive-exec-3.1.0-SNAPSHOT.jar:3.1.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:239) 
> [hive-cli-3.1.0-SNAPSHOT.jar:?]
> at 
> org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:188) 
> [hive-cli-3.1.0-SNAPSHOT.jar:?]
> at 
> org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:402) 
> [hive-cli-3.1.0-SNAPSHOT.jar:?]
> at 
> org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:335) 
> [hive-cli-3.1.0-SNAPSHOT.jar:?]
> at 
> org.apache.hadoop.hive.ql.QTestUtil.executeClientInternal(QTestUtil.java:1455)
>  [hive-it-util-3.1.0-SNAPSHOT.jar:3.1.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.ql.QTestUtil.executeClient(QTestUtil.java:1429) 
> [hive-it-util-3.1.0-SNAPSHOT.jar:3.1.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.cli.control.CoreCliDriver.runTest(CoreCliDriver.java:177)
>  [hive-it-util-3.1.0-SNAPSHOT.jar:3.1.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.cli.control.CliAdapter.runTest(CliAdapter.java:104) 
> [hive-it-util-3.1.0-SNAPSHOT.jar:3.1.0-SNAPSHOT]
> at 
> org.apache.hadoop.hive.cli.TestMiniDruidCliDriver.testCliDriver(TestMiniDruidCliDriver.java:59)
>  [test-classes/:?]
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
> ~[?:1.8.0_92]
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 
> ~[?:1.8.0_92]
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>  ~[?:1.8.0_92]
> {code}
> You can reproduce this using the following DDL 
> {code}
> create database druid_test;
> use druid_test;
> create table test_table(`timecolumn` timestamp, `userid` string, `num_l` 
> float);
> insert into test_table values ('2015-03-08 00:00:00', 'i1-start', 4);
> insert into test_table values ('2015-03-08 23:59:59', 'i1-end', 1);
> insert into test_table values ('2015-03-09 00:00:00', 'i2-start', 4);
> insert in

[jira] [Commented] (HIVE-19141) TestNegativeCliDriver insert_into_notnull_constraint, insert_into_acid_notnull failing

2018-04-10 Thread Igor Kryvenko (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19141?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16432871#comment-16432871
 ] 

Igor Kryvenko commented on HIVE-19141:
--

[~vgarg] Definitely yes. I have already started working on it. I agree with you 
that the Negative CLI driver doesn't handle errors, and I have already fixed 
that. Now execution doesn't fail with an error. But I ran into a problem: the 
test doesn't print {{FAILED: DataConstraintViolationError}} into the output 
file. A workaround is to not print this message, but then the output file 
contains no error message at all, which I don't think is obvious, because we 
run negative tests expecting some error message in the output file.
What do you think about it?

 

> TestNegativeCliDriver insert_into_notnull_constraint, 
> insert_into_acid_notnull failing
> --
>
> Key: HIVE-19141
> URL: https://issues.apache.org/jira/browse/HIVE-19141
> Project: Hive
>  Issue Type: Sub-task
>Reporter: Vineet Garg
>Assignee: Vineet Garg
>Priority: Major
> Fix For: 3.0.0, 3.1.0
>
>
> These tests have been consistently failing for a while. I suspect HIVE-18727 
> has caused these failures. HIVE-18727 changed the code to throw ERROR instead 
> of EXCEPTION if constraints are violated. I guess the Negative CLI driver 
> doesn't handle errors.
> Following is the full list of related failures:
> TestNegativeCliDriver.alter_notnull_constraint_violation
> TestNegativeCliDriver.insert_into_acid_notnull 
> TestNegativeCliDriver.insert_into_notnull_constraint 
> TestNegativeCliDriver.insert_multi_into_notnull 
> TestNegativeCliDriver.insert_overwrite_notnull_constraint 
> TestNegativeCliDriver.update_notnull_constraint



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-18991) Drop database cascade doesn't work with materialized views

2018-04-10 Thread Hive QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-18991?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16432883#comment-16432883
 ] 

Hive QA commented on HIVE-18991:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
1s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
43s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
46s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m  
8s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
14s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
22s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
12s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  2m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  2m  
3s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
29s{color} | {color:red} standalone-metastore: The patch generated 6 new + 1210 
unchanged - 9 fixed = 1216 total (was 1219) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
27s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 23m 34s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  findbugs  checkstyle  compile  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 
3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/data/hiveptest/working/yetus_PreCommit-HIVE-Build-10124/dev-support/hive-personality.sh
 |
| git revision | master / 4b8c754 |
| Default Java | 1.8.0_111 |
| checkstyle | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-10124/yetus/diff-checkstyle-standalone-metastore.txt
 |
| modules | C: ql standalone-metastore U: . |
| Console output | 
http://104.198.109.242/logs//PreCommit-HIVE-Build-10124/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |


This message was automatically generated.



> Drop database cascade doesn't work with materialized views
> --
>
> Key: HIVE-18991
> URL: https://issues.apache.org/jira/browse/HIVE-18991
> Project: Hive
>  Issue Type: Bug
>  Components: Materialized views, Metastore
>Affects Versions: 3.0.0
>Reporter: Alan Gates
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
> Attachments: HIVE-18991.01.patch, HIVE-18991.03.patch, 
> HIVE-18991.06.patch, HIVE-18991.07.patch, HIVE-18991.08.patch, 
> HIVE-18991.patch
>
>
> Create a database, add a table and then a materialized view that depends on 
> the table.  Then drop the database with cascade set.  Sometimes this will 
> fail because when HiveMetaStore.drop_database_core goes to drop all of the 
> tables it may drop the base table before the materialized view, which will 
> cause an integrity constraint violation in the RDBMS.  To resolve this, that 
> method should be changed to fetch and drop materialized views before tables.
> cc [~jcamachorodriguez]
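
A sketch of that ordering fix, with {{listTableNames}}, {{isMaterializedView}} 
and {{dropTable}} as hypothetical stand-ins for the metastore calls:

{code:java}
import java.util.ArrayList;
import java.util.List;

/** Illustration only: drop materialized views before base tables so the
 *  cascade never violates the MV-to-table constraint in the backing RDBMS. */
public final class DropDatabaseOrdering {

  public static void dropAllTables(String dbName) {
    List<String> views = new ArrayList<>();
    List<String> tables = new ArrayList<>();
    for (String name : listTableNames(dbName)) {
      (isMaterializedView(dbName, name) ? views : tables).add(name);
    }
    for (String view : views) {
      dropTable(dbName, view);      // materialized views first
    }
    for (String table : tables) {
      dropTable(dbName, table);     // then the base tables they depend on
    }
  }

  /** Stubs: would call the metastore in a real implementation. */
  private static List<String> listTableNames(String db) { return new ArrayList<>(); }
  private static boolean isMaterializedView(String db, String tbl) { return false; }
  private static void dropTable(String db, String tbl) { }
}
{code}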



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-11625) Map instances with null keys are not properly handled for Parquet tables

2018-04-10 Thread Sergio Bilello (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-11625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16432911#comment-16432911
 ] 

Sergio Bilello commented on HIVE-11625:
---

Any update on this? 

> Map instances with null keys are not properly handled for Parquet tables
> 
>
> Key: HIVE-11625
> URL: https://issues.apache.org/jira/browse/HIVE-11625
> Project: Hive
>  Issue Type: Sub-task
>Affects Versions: 0.14.0, 0.13.1, 1.0.1, 1.1.1, 1.2.1
>Reporter: Cheng Lian
>Priority: Major
>
> Hive allows maps with null keys:
> {code:sql}
> hive> select map(null, 'foo', 1, 'bar', null, 'baz');
> {null:"baz",1:"bar"}
> {code}
> However, when written into Parquet tables, map entries with null as keys are 
> either dropped or cause exceptions. Below is the result of Hive 0.14.0 and 
> 0.13.1:
> {code:sql}
> hive> CREATE TABLE map_test STORED AS PARQUET
> > AS SELECT MAP(null, 'foo', 1, 'bar', null, 'baz');
> ...
> hive> SELECT * from map_test;
> {1:"bar"}
> {code}
> And Hive 1.2.1 throws exception:
> {noformat}
> java.lang.RuntimeException: org.apache.hadoop.hive.ql.metadata.HiveException: 
> Hive Runtime Error while processing writable (null)
>   at org.apache.hadoop.hive.ql.exec.mr.ExecMapper.map(ExecMapper.java:172)
>   at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:50)
>   at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:430)
>   at org.apache.hadoop.mapred.MapTask.run(MapTask.java:366)
>   at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
>   at org.apache.hadoop.mapred.Child.main(Child.java:249)
> Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime 
> Error while processing writable (null)
>   at 
> org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:516)
>   at org.apache.hadoop.hive.ql.exec.mr.ExecMapper.map(ExecMapper.java:163)
>   ... 8 more
> Caused by: java.lang.RuntimeException: Parquet record is malformed: empty 
> fields are illegal, the field should be ommited completely instead
>   at 
> org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriter.write(DataWritableWriter.java:64)
>   at 
> org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriteSupport.write(DataWritableWriteSupport.java:59)
>   at 
> org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriteSupport.write(DataWritableWriteSupport.java:31)
>   at 
> parquet.hadoop.InternalParquetRecordWriter.write(InternalParquetRecordWriter.java:121)
>   at 
> parquet.hadoop.ParquetRecordWriter.write(ParquetRecordWriter.java:123)
>   at parquet.hadoop.ParquetRecordWriter.write(ParquetRecordWriter.java:42)
>   at 
> org.apache.hadoop.hive.ql.io.parquet.write.ParquetRecordWriterWrapper.write(ParquetRecordWriterWrapper.java:111)
>   at 
> org.apache.hadoop.hive.ql.io.parquet.write.ParquetRecordWriterWrapper.write(ParquetRecordWriterWrapper.java:124)
>   at 
> org.apache.hadoop.hive.ql.exec.FileSinkOperator.process(FileSinkOperator.java:753)
>   at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:837)
>   at 
> org.apache.hadoop.hive.ql.exec.SelectOperator.process(SelectOperator.java:88)
>   at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:837)
>   at 
> org.apache.hadoop.hive.ql.exec.TableScanOperator.process(TableScanOperator.java:97)
>   at 
> org.apache.hadoop.hive.ql.exec.MapOperator$MapOpCtx.forward(MapOperator.java:162)
>   at 
> org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:508)
>   ... 9 more
> Caused by: parquet.io.ParquetEncodingException: empty fields are illegal, the 
> field should be ommited completely instead
>   at 
> parquet.io.MessageColumnIO$MessageColumnIORecordConsumer.endField(MessageColumnIO.java:244)
>   at 
> org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriter.writeMap(DataWritableWriter.java:228)
>   at 
> org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriter.writeValue(DataWritableWriter.java:116)
>   at 
> org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriter.writeGroupFields(DataWritableWriter.java:89)
>   at 
> org.apache.hadoop.hive.ql.io.parquet.write.DataWritableWriter.write(DataWritableWriter.java:60)
>   ... 23 more
> java.lang.RuntimeException: org.apache.hadoop.hive.ql.metadata.HiveException: 
> Hive Runtime Error while processing writable (null)
>   at org.apache.hadoop.hive.ql.exec.mr.ExecMapper.map(ExecMapper.java:172)
>   at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java
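
For reference, the underlying constraint is that the Parquet format declares a 
map's key field as required, so a null key has no on-disk representation. A 
small sketch parsing such a schema, assuming the newer org.apache.parquet 
coordinates (the trace above is from the older parquet.* packages); the message 
and column names are made up:

{code:java}
import org.apache.parquet.schema.MessageType;
import org.apache.parquet.schema.MessageTypeParser;

public class MapKeySchemaSketch {
    public static void main(String[] args) {
        // The key leaf is "required"; only the value may be optional, which
        // is why an entry like MAP(null -> 'baz') cannot be written as-is.
        MessageType schema = MessageTypeParser.parseMessageType(
            "message map_test {\n"
          + "  optional group c0 (MAP) {\n"
          + "    repeated group key_value {\n"
          + "      required binary key (UTF8);\n"
          + "      optional binary value (UTF8);\n"
          + "    }\n"
          + "  }\n"
          + "}");
        System.out.println(schema);
    }
}
{code}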

[jira] [Assigned] (HIVE-19156) TestMiniLlapLocalCliDriver.vectorized_dynamic_semijoin_reduction.q is broken

2018-04-10 Thread Jason Dere (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-19156?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Dere reassigned HIVE-19156:
-


> TestMiniLlapLocalCliDriver.vectorized_dynamic_semijoin_reduction.q is broken
> 
>
> Key: HIVE-19156
> URL: https://issues.apache.org/jira/browse/HIVE-19156
> Project: Hive
>  Issue Type: Bug
>  Components: Tests
>Reporter: Jason Dere
>Assignee: Jason Dere
>Priority: Major
>
> Looks like this test has been broken for some time



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HIVE-19156) TestMiniLlapLocalCliDriver.vectorized_dynamic_semijoin_reduction.q is broken

2018-04-10 Thread Jason Dere (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-19156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16432985#comment-16432985
 ] 

Jason Dere commented on HIVE-19156:
---

Looks like this test was broken by HIVE-1, since ConcurrentHashMap does not 
allow null keys or values, and those can occur in this logic (see 
DynamicValueRegistryTez.init()).
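
A standalone sketch of the difference being described (plain Java, not Hive 
code; the map contents are made up): HashMap tolerates null keys and values, 
while ConcurrentHashMap throws NullPointerException on put.

{code:java}
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class NullValueDemo {
    public static void main(String[] args) {
        Map<String, Object> plain = new HashMap<>();
        plain.put("min_value", null);  // OK: HashMap accepts null values
        plain.put(null, "max");        // OK: HashMap accepts a null key

        Map<String, Object> concurrent = new ConcurrentHashMap<>();
        try {
            concurrent.put("min_value", null);  // throws NullPointerException
        } catch (NullPointerException e) {
            System.out.println("ConcurrentHashMap rejects null values");
        }
    }
}
{code}

Any value that can legitimately be null therefore needs either a sentinel 
object or a null-tolerant map implementation.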

> TestMiniLlapLocalCliDriver.vectorized_dynamic_semijoin_reduction.q is broken
> 
>
> Key: HIVE-19156
> URL: https://issues.apache.org/jira/browse/HIVE-19156
> Project: Hive
>  Issue Type: Bug
>  Components: Tests
>Reporter: Jason Dere
>Assignee: Jason Dere
>Priority: Major
>
> Looks like this test has been broken for some time



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

