[jira] [Commented] (HIVE-19133) HS2 WebUI phase-wise performance metrics not showing correctly
[ https://issues.apache.org/jira/browse/HIVE-19133?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16435027#comment-16435027 ] Zoltan Haindrich commented on HIVE-19133: - yes it could be removed; and also PerfLogger.TASK seems to be unused as well > HS2 WebUI phase-wise performance metrics not showing correctly > -- > > Key: HIVE-19133 > URL: https://issues.apache.org/jira/browse/HIVE-19133 > Project: Hive > Issue Type: Bug > Components: HiveServer2, Web UI >Reporter: Bharathkrishna Guruvayoor Murali >Assignee: Bharathkrishna Guruvayoor Murali >Priority: Major > Attachments: HIVE-19133.1.patch, HIVE-19133.2.patch, WebUI-compile > time query metrics.png > > > The query specific WebUI metrics (go to drilldown -> performance logging) are > not showing up in the correct phase and are often mixed up. > Attaching screenshot. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19156) TestMiniLlapLocalCliDriver.vectorized_dynamic_semijoin_reduction.q is broken
[ https://issues.apache.org/jira/browse/HIVE-19156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16435036#comment-16435036 ] Hive QA commented on HIVE-19156: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12918474/HIVE-19156.1.patch {color:red}ERROR:{color} -1 due to no test(s) being added or modified. {color:red}ERROR:{color} -1 due to 117 failed/errored test(s), 13674 tests executed *Failed tests:* {noformat} TestBeeLineDriver - did not produce a TEST-*.xml file (likely timed out) (batchId=253) TestDbNotificationListener - did not produce a TEST-*.xml file (likely timed out) (batchId=247) TestDummy - did not produce a TEST-*.xml file (likely timed out) (batchId=253) TestHCatHiveCompatibility - did not produce a TEST-*.xml file (likely timed out) (batchId=247) TestMiniDruidCliDriver - did not produce a TEST-*.xml file (likely timed out) (batchId=253) TestMiniDruidKafkaCliDriver - did not produce a TEST-*.xml file (likely timed out) (batchId=253) TestNegativeCliDriver - did not produce a TEST-*.xml file (likely timed out) (batchId=95) 
[nopart_insert.q,insert_into_with_schema.q,input41.q,having1.q,create_table_failure3.q,default_constraint_invalid_default_value.q,database_drop_not_empty_restrict.q,windowing_after_orderby.q,orderbysortby.q,subquery_select_distinct2.q,authorization_uri_alterpart_loc.q,udf_last_day_error_1.q,constraint_duplicate_name.q,create_table_failure4.q,alter_tableprops_external_with_notnull_constraint.q,semijoin5.q,udf_format_number_wrong4.q,deletejar.q,exim_11_nonpart_noncompat_sorting.q,show_tables_bad_db2.q,drop_func_nonexistent.q,nopart_load.q,alter_table_non_partitioned_table_cascade.q,check_constraint_subquery.q,load_wrong_fileformat.q,check_constraint_udtf.q,lockneg_try_db_lock_conflict.q,udf_field_wrong_args_len.q,create_table_failure2.q,create_with_fk_constraints_enforced.q,groupby2_map_skew_multi_distinct.q,authorization_update_noupdatepriv.q,show_columns2.q,authorization_insert_noselectpriv.q,orc_replace_columns3_acid.q,compare_double_bigint.q,authorization_set_nonexistent_conf.q,alter_rename_partition_failure3.q,split_sample_wrong_format2.q,create_with_fk_pk_same_tab.q,compare_double_bigint_2.q,authorization_show_roles_no_admin.q,materialized_view_authorization_rebuild_no_grant.q,unionLimit.q,authorization_revoke_table_fail2.q,duplicate_insert3.q,authorization_desc_table_nosel.q,stats_noscan_non_native.q,orc_change_serde_acid.q,create_or_replace_view7.q,exim_07_nonpart_noncompat_ifof.q,create_with_unique_constraints_enforced.q,udf_concat_ws_wrong2.q,fileformat_bad_class.q,merge_negative_2.q,exim_15_part_nonpart.q,authorization_not_owner_drop_view.q,external1.q,authorization_uri_insert.q,create_with_fk_wrong_ref.q,columnstats_tbllvl_incorrect_column.q,authorization_show_parts_nosel.q,authorization_not_owner_drop_tab.q,external2.q,authorization_deletejar.q,temp_table_create_like_partitions.q,udf_greatest_error_1.q,ptf_negative_AggrFuncsWithNoGBYNoPartDef.q,alter_view_as_select_not_exist.q,touch1.q,groupby3_map_skew_multi_distinct.q,insert_into_notnull_constraint.q,ex
change_partition_neg_partition_missing.q,groupby_cube_multi_gby.q,columnstats_tbllvl.q,drop_invalid_constraint2.q,alter_table_add_partition.q,update_not_acid.q,archive5.q,alter_table_constraint_invalid_pk_col.q,ivyDownload.q,udf_instr_wrong_type.q,bad_sample_clause.q,authorization_not_owner_drop_tab2.q,authorization_alter_db_owner.q,show_columns1.q,orc_type_promotion3.q,create_view_failure8.q,strict_join.q,udf_add_months_error_1.q,groupby_cube2.q,groupby_cube1.q,groupby_rollup1.q,genericFileFormat.q,invalid_cast_from_binary_4.q,drop_invalid_constraint1.q,serde_regex.q,show_partitions1.q,check_constraint_nonboolean_expr.q,invalid_cast_from_binary_6.q,create_with_multi_pk_constraint.q,udf_field_wrong_type.q,groupby_grouping_sets4.q,groupby_grouping_sets3.q,insertsel_fail.q,udf_locate_wrong_type.q,orc_type_promotion1_acid.q,set_table_property.q,create_or_replace_view2.q,groupby_grouping_sets2.q,alter_view_failure.q,distinct_windowing_failure1.q,invalid_t_alter2.q,alter_table_constraint_invalid_fk_col1.q,invalid_varchar_length_2.q,authorization_show_grant_otheruser_alltabs.q,subquery_windowing_corr.q,compact_non_acid_table.q,authorization_view_4.q,authorization_disallow_transform.q,materialized_view_authorization_rebuild_other.q,authorization_fail_4.q,dbtxnmgr_nodblock.q,set_hiveconf_internal_variable1.q,input_part0_neg.q,udf_printf_wrong3.q,load_orc_negative2.q,druid_buckets.q,archive2.q,authorization_addjar.q,invalid_sum_syntax.q,insert_into_with_schema1.q,udf_add_months_error_2.q,dyn_part_max_per_node.q,authorization_revoke_table_fail1.q,udf_printf_wrong2.q,archive_multi3.q,udf_printf_wrong1.q,subquery_subquery_chain.q,authorization_view_disable_cbo_4.q,no_matching_udf.q,create_view_failure7.q,drop_native_udf.q,truncate_column_list_bucketing.q,authorization_
[jira] [Commented] (HIVE-19048) Initscript errors are ignored
[ https://issues.apache.org/jira/browse/HIVE-19048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16435061#comment-16435061 ] Hive QA commented on HIVE-19048: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Findbugs executables are not available. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 16s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 23s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 14s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 13s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 23s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 22s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 22s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 13s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 14s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 15s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 10m 52s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-10150/dev-support/hive-personality.sh | | git revision | master / 2e027cf | | Default Java | 1.8.0_111 | | modules | C: beeline U: beeline | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-10150/yetus.txt | | Powered by | Apache Yetus http://yetus.apache.org | This message was automatically generated. > Initscript errors are ignored > - > > Key: HIVE-19048 > URL: https://issues.apache.org/jira/browse/HIVE-19048 > Project: Hive > Issue Type: Bug > Components: Beeline >Reporter: Zoltan Haindrich >Assignee: Bharathkrishna Guruvayoor Murali >Priority: Major > Attachments: HIVE-19048.1.patch > > > I had been running some queries for a while when I noticed that my > initscript had an error; beeline stops interpreting the initscript after > encountering the first error, yet still exits with status 0. > {code} > echo 'invalid;' > init.sql > echo 'select 1;' > s1.sql > beeline -u jdbc:hive2://localhost:1/ -n hive -i init.sql -f s1.sql > [...] > Running init script init.sql > 0: jdbc:hive2://localhost:1/> invalid; > Error: Error while compiling statement: FAILED: ParseException line 1:0 > cannot recognize input near 'invalid' '' '' (state=42000,code=4) > 0: jdbc:hive2://localhost:1/> select 1; > [...] > $ echo $? 
> 0 > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
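The report above shows statement failures being swallowed rather than reflected in the exit status. As a purely hypothetical sketch (not Beeline's actual code; all names are invented), the fail-fast behavior the fix is after looks like this: stop at the first failed statement and surface a non-zero exit status.

```java
import java.util.List;

public class InitScriptRunner {
    /** Returns the exit status: 0 if all statements succeeded, 1 on the first failure. */
    static int runStatements(List<String> statements) {
        for (String stmt : statements) {
            if (!execute(stmt)) {
                return 1; // surface the error instead of silently continuing
            }
        }
        return 0;
    }

    // Stand-in for sending the statement to the server; here anything
    // containing "invalid" fails, mimicking the ParseException in the report.
    static boolean execute(String stmt) {
        return !stmt.contains("invalid");
    }

    public static void main(String[] args) {
        // Mirrors the init.sql/s1.sql scenario above.
        System.out.println(runStatements(List.of("invalid;", "select 1;"))); // prints 1
    }
}
```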
[jira] [Assigned] (HIVE-18902) Lower Logging Level for Cleaning Up "local RawStore"
[ https://issues.apache.org/jira/browse/HIVE-18902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bohdan Chupika reassigned HIVE-18902: - Assignee: Bohdan Chupika > Lower Logging Level for Cleaning Up "local RawStore" > > > Key: HIVE-18902 > URL: https://issues.apache.org/jira/browse/HIVE-18902 > Project: Hive > Issue Type: Improvement > Components: Standalone Metastore >Affects Versions: 3.0.0, 2.4.0 >Reporter: BELUGA BEHR >Assignee: Bohdan Chupika >Priority: Trivial > Labels: noob > > [https://github.com/apache/hive/blob/7c22d74c8d0eb0650adf6e84e0536127c103e46c/standalone-metastore/src/main/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java#L7756-L7768] > > {code:java} > private static void cleanupRawStore() { > try { > RawStore rs = HMSHandler.getRawStore(); > if (rs != null) { > HMSHandler.logInfo("Cleaning up thread local RawStore..."); > rs.shutdown(); > } > } finally { > HMSHandler handler = HMSHandler.threadLocalHMSHandler.get(); > if (handler != null) { > handler.notifyMetaListenersOnShutDown(); > } > HMSHandler.threadLocalHMSHandler.remove(); > HMSHandler.threadLocalConf.remove(); > HMSHandler.threadLocalModifiedConfig.remove(); > HMSHandler.removeRawStore(); > HMSHandler.logInfo("Done cleaning up thread local RawStore"); > } > } > {code} > {code} > 2018-03-03 17:21:49,832 INFO > org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-4-thread-21]: 19: > Cleaning up thread local RawStore... > 2018-03-03 17:21:49,834 INFO > org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-4-thread-21]: 19: Done > cleaning up thread local RawStore > {code} > Not very helpful logging. Please change logging levels to _debug_ or even > _trace_ -- This message was sent by Atlassian JIRA (v7.6.3#76005)
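For illustration only, a sketch of the requested level drop using java.util.logging (Hive itself logs via SLF4J/Log4j, and HMSHandler.logInfo wraps that; the class and logger names here are hypothetical). At a typical production level of INFO, the demoted messages no longer appear.

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class RawStoreCleanupLogging {
    private static final Logger LOG = Logger.getLogger("HiveMetaStore");

    static void cleanupRawStore() {
        // Before: logged at INFO on every thread-local cleanup.
        // After: demoted to a debug-equivalent level (FINE in j.u.l terms).
        LOG.log(Level.FINE, "Cleaning up thread local RawStore...");
        // ... shutdown and thread-local cleanup elided ...
        LOG.log(Level.FINE, "Done cleaning up thread local RawStore");
    }

    public static void main(String[] args) {
        LOG.setLevel(Level.INFO);       // typical production level
        cleanupRawStore();              // both messages are now suppressed
        System.out.println(LOG.isLoggable(Level.FINE)); // prints false
    }
}
```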
[jira] [Updated] (HIVE-18902) Lower Logging Level for Cleaning Up "local RawStore"
[ https://issues.apache.org/jira/browse/HIVE-18902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bohdan Chupika updated HIVE-18902: -- Attachment: HIVE-18902.01.patch > Lower Logging Level for Cleaning Up "local RawStore" > > > Key: HIVE-18902 > URL: https://issues.apache.org/jira/browse/HIVE-18902 > Project: Hive > Issue Type: Improvement > Components: Standalone Metastore >Affects Versions: 3.0.0, 2.4.0 >Reporter: BELUGA BEHR >Assignee: Bohdan Chupika >Priority: Trivial > Labels: noob > Attachments: HIVE-18902.01.patch > > > [https://github.com/apache/hive/blob/7c22d74c8d0eb0650adf6e84e0536127c103e46c/standalone-metastore/src/main/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java#L7756-L7768] > > {code:java} > private static void cleanupRawStore() { > try { > RawStore rs = HMSHandler.getRawStore(); > if (rs != null) { > HMSHandler.logInfo("Cleaning up thread local RawStore..."); > rs.shutdown(); > } > } finally { > HMSHandler handler = HMSHandler.threadLocalHMSHandler.get(); > if (handler != null) { > handler.notifyMetaListenersOnShutDown(); > } > HMSHandler.threadLocalHMSHandler.remove(); > HMSHandler.threadLocalConf.remove(); > HMSHandler.threadLocalModifiedConfig.remove(); > HMSHandler.removeRawStore(); > HMSHandler.logInfo("Done cleaning up thread local RawStore"); > } > } > {code} > {code} > 2018-03-03 17:21:49,832 INFO > org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-4-thread-21]: 19: > Cleaning up thread local RawStore... > 2018-03-03 17:21:49,834 INFO > org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-4-thread-21]: 19: Done > cleaning up thread local RawStore > {code} > Not very helpful logging. Please change logging levels to _debug_ or even > _trace_ -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18902) Lower Logging Level for Cleaning Up "local RawStore"
[ https://issues.apache.org/jira/browse/HIVE-18902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bohdan Chupika updated HIVE-18902: -- Status: Patch Available (was: Open) > Lower Logging Level for Cleaning Up "local RawStore" > > > Key: HIVE-18902 > URL: https://issues.apache.org/jira/browse/HIVE-18902 > Project: Hive > Issue Type: Improvement > Components: Standalone Metastore >Affects Versions: 3.0.0, 2.4.0 >Reporter: BELUGA BEHR >Assignee: Bohdan Chupika >Priority: Trivial > Labels: noob > Attachments: HIVE-18902.01.patch > > > [https://github.com/apache/hive/blob/7c22d74c8d0eb0650adf6e84e0536127c103e46c/standalone-metastore/src/main/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java#L7756-L7768] > > {code:java} > private static void cleanupRawStore() { > try { > RawStore rs = HMSHandler.getRawStore(); > if (rs != null) { > HMSHandler.logInfo("Cleaning up thread local RawStore..."); > rs.shutdown(); > } > } finally { > HMSHandler handler = HMSHandler.threadLocalHMSHandler.get(); > if (handler != null) { > handler.notifyMetaListenersOnShutDown(); > } > HMSHandler.threadLocalHMSHandler.remove(); > HMSHandler.threadLocalConf.remove(); > HMSHandler.threadLocalModifiedConfig.remove(); > HMSHandler.removeRawStore(); > HMSHandler.logInfo("Done cleaning up thread local RawStore"); > } > } > {code} > {code} > 2018-03-03 17:21:49,832 INFO > org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-4-thread-21]: 19: > Cleaning up thread local RawStore... > 2018-03-03 17:21:49,834 INFO > org.apache.hadoop.hive.metastore.HiveMetaStore: [pool-4-thread-21]: 19: Done > cleaning up thread local RawStore > {code} > Not very helpful logging. Please change logging levels to _debug_ or even > _trace_ -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19104) When test MetaStore is started with retry the instances should be independent
[ https://issues.apache.org/jira/browse/HIVE-19104?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Vary updated HIVE-19104: -- Attachment: HIVE-19104.4.patch > When test MetaStore is started with retry the instances should be independent > - > > Key: HIVE-19104 > URL: https://issues.apache.org/jira/browse/HIVE-19104 > Project: Hive > Issue Type: Improvement >Reporter: Peter Vary >Assignee: Peter Vary >Priority: Major > Attachments: HIVE-19104.2.patch, HIVE-19104.3.patch, > HIVE-19104.4.patch, HIVE-19104.patch > > > When multiple MetaStore instances are started with > {{MetaStoreTestUtils.startMetaStoreWithRetry}}, they currently use the same > JDBC URL and warehouse directory. This can cause problems in the tests -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19104) When test MetaStore is started with retry the instances should be independent
[ https://issues.apache.org/jira/browse/HIVE-19104?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16435172#comment-16435172 ] Peter Vary commented on HIVE-19104: --- This patch is getting bigger, so a review board entry is needed to review it properly: [https://reviews.apache.org/r/66585/] [~stakiar]: Changes in this patch: * Fixed several failed tests revealed by the precommit tests. * Moved the configuration initialization to static, as discussed. * Checked what I can do with the JDBC and warehouse directory configuration. They use system variables (pom.xml based, which can be overwritten by -D flags) to initialize the values in the configuration: {code:java} javax.jdo.option.ConnectionURL jdbc:derby:memory:${test.tmp.dir}/junit_metastore_db;create=true hive.metastore.warehouse.dir ${test.warehouse.dir} {code} These values are also used by other tests, so we cannot remove them from the tests entirely, and I do not think we could remove them from the config file. For example, MiniHS2 intentionally changes the warehouse directory based on the settings. So in this patch I would not change the JDBC URL and warehouse directory handling; in the long run I would be more comfortable moving all of the tests to using the MiniHMS for MetaStore handling. Is this ok with you [~stakiar]? Thanks for taking the time and reviewing this stuff! Peter > When test MetaStore is started with retry the instances should be independent > - > > Key: HIVE-19104 > URL: https://issues.apache.org/jira/browse/HIVE-19104 > Project: Hive > Issue Type: Improvement >Reporter: Peter Vary >Assignee: Peter Vary >Priority: Major > Attachments: HIVE-19104.2.patch, HIVE-19104.3.patch, > HIVE-19104.4.patch, HIVE-19104.patch > > > When multiple MetaStore instances are started with > {{MetaStoreTestUtils.startMetaStoreWithRetry}}, they currently use the same > JDBC URL and warehouse directory. 
This can cause problems in the tests -- This message was sent by Atlassian JIRA (v7.6.3#76005)
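One way to make the test instances independent, sketched under assumptions (the class, method, and URL scheme below are illustrative, not the actual MetaStoreTestUtils change), is to derive a per-instance Derby in-memory URL and warehouse directory from a monotonically increasing id:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class UniqueMetaStoreConf {
    private static final AtomicInteger INSTANCE_ID = new AtomicInteger();

    /**
     * Returns {jdbcUrl, warehouseDir} unique to this call, so two test
     * MetaStore instances never share a Derby database or warehouse path.
     */
    static String[] nextInstanceConf(String tmpDir) {
        int id = INSTANCE_ID.incrementAndGet();
        String jdbcUrl = "jdbc:derby:memory:" + tmpDir
                + "/junit_metastore_db_" + id + ";create=true";
        String warehouseDir = tmpDir + "/warehouse_" + id;
        return new String[] { jdbcUrl, warehouseDir };
    }
}
```

Because Derby `memory:` databases are keyed by name, distinct URLs give fully isolated stores without touching the shared pom.xml defaults.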
[jira] [Updated] (HIVE-16041) HCatalog doesn't delete temp _SCRATCH dir when job failed
[ https://issues.apache.org/jira/browse/HIVE-16041?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] yunfei liu updated HIVE-16041: -- Attachment: HIVE-16041.3.patch > HCatalog doesn't delete temp _SCRATCH dir when job failed > -- > > Key: HIVE-16041 > URL: https://issues.apache.org/jira/browse/HIVE-16041 > Project: Hive > Issue Type: Bug > Components: HCatalog >Affects Versions: 2.2.0 >Reporter: yunfei liu >Assignee: yunfei liu >Priority: Major > Fix For: 3.1.0 > > Attachments: HIVE-16041.1.patch, HIVE-16041.2.patch, > HIVE-16041.3.patch > > > When we use HCatOutputFormat to write to an external partitioned table, a > tmp dir (which starts with "_SCRATCH") will appear under the table path if the > job fails. > {quote} > drwxr-xr-x - yun hdfs 0 2017-02-27 01:45 > /tmp/hive/_SCRATCH0.31946356159329714 > drwxr-xr-x - yun hdfs 0 2017-02-27 01:51 > /tmp/hive/_SCRATCH0.31946356159329714/c1=1 > drwxr-xr-x - yun hdfs 0 2017-02-27 00:57 /tmp/hive/c1=1 > drwxr-xr-x - yun hdfs 0 2017-02-27 01:28 /tmp/hive/c1=1/c2=2 > -rw-r--r-- 3 yun hdfs 12 2017-02-27 00:57 > /tmp/hive/c1=1/c2=2/part-r-0 > -rw-r--r-- 3 yun hdfs 12 2017-02-27 01:28 > /tmp/hive/c1=1/c2=2/part-r-0_a_1 > {quote} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
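The cleanup an aborted job needs can be sketched as follows. This is an illustrative toy, not the HIVE-16041 patch: real HCatalog code would run inside an OutputCommitter's abortJob and go through Hadoop's FileSystem API rather than java.nio, and the class and method names are invented.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Comparator;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class ScratchDirCleaner {
    /** Deletes "_SCRATCH*" directories directly under tablePath; returns how many were removed. */
    static int deleteScratchDirs(Path tablePath) throws IOException {
        List<Path> scratchDirs;
        try (Stream<Path> children = Files.list(tablePath)) {
            scratchDirs = children
                .filter(p -> Files.isDirectory(p)
                          && p.getFileName().toString().startsWith("_SCRATCH"))
                .collect(Collectors.toList());
        }
        for (Path dir : scratchDirs) {
            List<Path> toDelete;
            try (Stream<Path> walk = Files.walk(dir)) {
                // Reverse order so files are deleted before their parent dirs.
                toDelete = walk.sorted(Comparator.reverseOrder()).collect(Collectors.toList());
            }
            for (Path p : toDelete) {
                Files.delete(p);
            }
        }
        return scratchDirs.size();
    }
}
```

Applied to the listing in the description, this would remove /tmp/hive/_SCRATCH0.31946356159329714 while leaving the committed c1=1 partition data untouched.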
[jira] [Commented] (HIVE-19156) TestMiniLlapLocalCliDriver.vectorized_dynamic_semijoin_reduction.q is broken
[ https://issues.apache.org/jira/browse/HIVE-19156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16435179#comment-16435179 ] Jason Dere commented on HIVE-19156: --- Looks like the golden file output for mergejoin.q may not be needed for the ptests; perhaps the stats look different when running on Linux vs Mac. I'll update the patch to exclude the mergejoin.q.out changes. > TestMiniLlapLocalCliDriver.vectorized_dynamic_semijoin_reduction.q is broken > > > Key: HIVE-19156 > URL: https://issues.apache.org/jira/browse/HIVE-19156 > Project: Hive > Issue Type: Sub-task > Components: Tests >Reporter: Jason Dere >Assignee: Jason Dere >Priority: Major > Attachments: HIVE-19156.1.patch > > > Looks like this test has been broken for some time -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-12369) Native Vector GroupBy
[ https://issues.apache.org/jira/browse/HIVE-12369?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matt McCline updated HIVE-12369: Status: In Progress (was: Patch Available) > Native Vector GroupBy > - > > Key: HIVE-12369 > URL: https://issues.apache.org/jira/browse/HIVE-12369 > Project: Hive > Issue Type: Bug > Components: Hive >Reporter: Matt McCline >Assignee: Matt McCline >Priority: Critical > Attachments: HIVE-12369.01.patch, HIVE-12369.02.patch, > HIVE-12369.05.patch, HIVE-12369.06.patch, HIVE-12369.091.patch, > HIVE-12369.094.patch > > > Implement Native Vector GroupBy using the fast hash table technology developed > for Native Vector MapJoin, etc. > The patch is currently limited to a single Long key with a single COUNT > aggregation, or a single Long key and no aggregation, also known as > duplicate reduction. > 3 new classes are introduced that store the count in the slot table and don't > allocate hash elements: > {noformat} > COUNT(column) VectorGroupByHashLongKeyCountColumnOperator > COUNT(key) VectorGroupByHashLongKeyCountKeyOperator > COUNT(*) VectorGroupByHashLongKeyCountStarOperator > {noformat} > And the duplicate reduction operator for a single Long key: > {noformat} > VectorGroupByHashLongKeyDuplicateReductionOperator > {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
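The "count in the slot table" idea above can be sketched as a tiny open-addressing hash keyed by long, where the COUNT lives in a parallel array instead of allocated per-key hash elements. This is an illustrative toy, not Hive's actual VectorGroupByHashLongKeyCount* implementation (which is vectorized and resizable); it only shows the allocation-free slot layout.

```java
public class LongKeyCountTable {
    private final long[] keys;
    private final long[] counts;     // the count lives in the slot table itself
    private final boolean[] used;

    LongKeyCountTable(int capacity) {
        keys = new long[capacity];
        counts = new long[capacity];
        used = new boolean[capacity];
    }

    private int firstSlot(long key) {
        return (int) ((key ^ (key >>> 32)) & 0x7fffffff) % keys.length;
    }

    /** Adds one occurrence of key; no object allocation per key. */
    void add(long key) {
        int slot = firstSlot(key);
        while (used[slot] && keys[slot] != key) {
            slot = (slot + 1) % keys.length;   // linear probing
        }
        used[slot] = true;
        keys[slot] = key;
        counts[slot]++;
    }

    long count(long key) {
        int slot = firstSlot(key);
        while (used[slot]) {
            if (keys[slot] == key) return counts[slot];
            slot = (slot + 1) % keys.length;
        }
        return 0;
    }

    public static void main(String[] args) {
        LongKeyCountTable t = new LongKeyCountTable(1024);
        for (long k : new long[] {3, 5, 3, 3}) t.add(k);
        System.out.println(t.count(3) + " " + t.count(5)); // prints "3 1"
    }
}
```

Duplicate reduction is the degenerate case of the same layout: keep only the used/keys arrays and skip the counts entirely.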
[jira] [Updated] (HIVE-12369) Native Vector GroupBy
[ https://issues.apache.org/jira/browse/HIVE-12369?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matt McCline updated HIVE-12369: Attachment: HIVE-12369.094.patch > Native Vector GroupBy > - > > Key: HIVE-12369 > URL: https://issues.apache.org/jira/browse/HIVE-12369 > Project: Hive > Issue Type: Bug > Components: Hive >Reporter: Matt McCline >Assignee: Matt McCline >Priority: Critical > Attachments: HIVE-12369.01.patch, HIVE-12369.02.patch, > HIVE-12369.05.patch, HIVE-12369.06.patch, HIVE-12369.091.patch, > HIVE-12369.094.patch > > > Implement Native Vector GroupBy using the fast hash table technology developed > for Native Vector MapJoin, etc. > The patch is currently limited to a single Long key with a single COUNT > aggregation, or a single Long key and no aggregation, also known as > duplicate reduction. > 3 new classes are introduced that store the count in the slot table and don't > allocate hash elements: > {noformat} > COUNT(column) VectorGroupByHashLongKeyCountColumnOperator > COUNT(key) VectorGroupByHashLongKeyCountKeyOperator > COUNT(*) VectorGroupByHashLongKeyCountStarOperator > {noformat} > And the duplicate reduction operator for a single Long key: > {noformat} > VectorGroupByHashLongKeyDuplicateReductionOperator > {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-12369) Native Vector GroupBy
[ https://issues.apache.org/jira/browse/HIVE-12369?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matt McCline updated HIVE-12369: Status: Patch Available (was: In Progress) > Native Vector GroupBy > - > > Key: HIVE-12369 > URL: https://issues.apache.org/jira/browse/HIVE-12369 > Project: Hive > Issue Type: Bug > Components: Hive >Reporter: Matt McCline >Assignee: Matt McCline >Priority: Critical > Attachments: HIVE-12369.01.patch, HIVE-12369.02.patch, > HIVE-12369.05.patch, HIVE-12369.06.patch, HIVE-12369.091.patch, > HIVE-12369.094.patch > > > Implement Native Vector GroupBy using the fast hash table technology developed > for Native Vector MapJoin, etc. > The patch is currently limited to a single Long key with a single COUNT > aggregation, or a single Long key and no aggregation, also known as > duplicate reduction. > 3 new classes are introduced that store the count in the slot table and don't > allocate hash elements: > {noformat} > COUNT(column) VectorGroupByHashLongKeyCountColumnOperator > COUNT(key) VectorGroupByHashLongKeyCountKeyOperator > COUNT(*) VectorGroupByHashLongKeyCountStarOperator > {noformat} > And the duplicate reduction operator for a single Long key: > {noformat} > VectorGroupByHashLongKeyDuplicateReductionOperator > {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-16041) HCatalog doesn't delete temp _SCRATCH dir when job failed
[ https://issues.apache.org/jira/browse/HIVE-16041?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16435191#comment-16435191 ] yunfei liu commented on HIVE-16041: --- Updated the patch to fix the checkstyle and whitespace problems > HCatalog doesn't delete temp _SCRATCH dir when job failed > -- > > Key: HIVE-16041 > URL: https://issues.apache.org/jira/browse/HIVE-16041 > Project: Hive > Issue Type: Bug > Components: HCatalog >Affects Versions: 2.2.0 >Reporter: yunfei liu >Assignee: yunfei liu >Priority: Major > Fix For: 3.1.0 > > Attachments: HIVE-16041.1.patch, HIVE-16041.2.patch, > HIVE-16041.3.patch > > > When we use HCatOutputFormat to write to an external partitioned table, a > tmp dir (which starts with "_SCRATCH") will appear under the table path if the > job fails. > {quote} > drwxr-xr-x - yun hdfs 0 2017-02-27 01:45 > /tmp/hive/_SCRATCH0.31946356159329714 > drwxr-xr-x - yun hdfs 0 2017-02-27 01:51 > /tmp/hive/_SCRATCH0.31946356159329714/c1=1 > drwxr-xr-x - yun hdfs 0 2017-02-27 00:57 /tmp/hive/c1=1 > drwxr-xr-x - yun hdfs 0 2017-02-27 01:28 /tmp/hive/c1=1/c2=2 > -rw-r--r-- 3 yun hdfs 12 2017-02-27 00:57 > /tmp/hive/c1=1/c2=2/part-r-0 > -rw-r--r-- 3 yun hdfs 12 2017-02-27 01:28 > /tmp/hive/c1=1/c2=2/part-r-0_a_1 > {quote} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19048) Initscript errors are ignored
[ https://issues.apache.org/jira/browse/HIVE-19048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16435195#comment-16435195 ] Hive QA commented on HIVE-19048: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12918473/HIVE-19048.1.patch {color:red}ERROR:{color} -1 due to no test(s) being added or modified. {color:red}ERROR:{color} -1 due to 76 failed/errored test(s), 13672 tests executed *Failed tests:* {noformat} TestBeeLineDriver - did not produce a TEST-*.xml file (likely timed out) (batchId=253) TestDbNotificationListener - did not produce a TEST-*.xml file (likely timed out) (batchId=247) TestDummy - did not produce a TEST-*.xml file (likely timed out) (batchId=253) TestHCatHiveCompatibility - did not produce a TEST-*.xml file (likely timed out) (batchId=247) TestMiniDruidCliDriver - did not produce a TEST-*.xml file (likely timed out) (batchId=253) TestMiniDruidKafkaCliDriver - did not produce a TEST-*.xml file (likely timed out) (batchId=253) TestNegativeCliDriver - did not produce a TEST-*.xml file (likely timed out) (batchId=95) 
[nopart_insert.q,insert_into_with_schema.q,input41.q,having1.q,create_table_failure3.q,default_constraint_invalid_default_value.q,database_drop_not_empty_restrict.q,windowing_after_orderby.q,orderbysortby.q,subquery_select_distinct2.q,authorization_uri_alterpart_loc.q,udf_last_day_error_1.q,constraint_duplicate_name.q,create_table_failure4.q,alter_tableprops_external_with_notnull_constraint.q,semijoin5.q,udf_format_number_wrong4.q,deletejar.q,exim_11_nonpart_noncompat_sorting.q,show_tables_bad_db2.q,drop_func_nonexistent.q,nopart_load.q,alter_table_non_partitioned_table_cascade.q,check_constraint_subquery.q,load_wrong_fileformat.q,check_constraint_udtf.q,lockneg_try_db_lock_conflict.q,udf_field_wrong_args_len.q,create_table_failure2.q,create_with_fk_constraints_enforced.q,groupby2_map_skew_multi_distinct.q,authorization_update_noupdatepriv.q,show_columns2.q,authorization_insert_noselectpriv.q,orc_replace_columns3_acid.q,compare_double_bigint.q,authorization_set_nonexistent_conf.q,alter_rename_partition_failure3.q,split_sample_wrong_format2.q,create_with_fk_pk_same_tab.q,compare_double_bigint_2.q,authorization_show_roles_no_admin.q,materialized_view_authorization_rebuild_no_grant.q,unionLimit.q,authorization_revoke_table_fail2.q,duplicate_insert3.q,authorization_desc_table_nosel.q,stats_noscan_non_native.q,orc_change_serde_acid.q,create_or_replace_view7.q,exim_07_nonpart_noncompat_ifof.q,create_with_unique_constraints_enforced.q,udf_concat_ws_wrong2.q,fileformat_bad_class.q,merge_negative_2.q,exim_15_part_nonpart.q,authorization_not_owner_drop_view.q,external1.q,authorization_uri_insert.q,create_with_fk_wrong_ref.q,columnstats_tbllvl_incorrect_column.q,authorization_show_parts_nosel.q,authorization_not_owner_drop_tab.q,external2.q,authorization_deletejar.q,temp_table_create_like_partitions.q,udf_greatest_error_1.q,ptf_negative_AggrFuncsWithNoGBYNoPartDef.q,alter_view_as_select_not_exist.q,touch1.q,groupby3_map_skew_multi_distinct.q,insert_into_notnull_constraint.q,ex
change_partition_neg_partition_missing.q,groupby_cube_multi_gby.q,columnstats_tbllvl.q,drop_invalid_constraint2.q,alter_table_add_partition.q,update_not_acid.q,archive5.q,alter_table_constraint_invalid_pk_col.q,ivyDownload.q,udf_instr_wrong_type.q,bad_sample_clause.q,authorization_not_owner_drop_tab2.q,authorization_alter_db_owner.q,show_columns1.q,orc_type_promotion3.q,create_view_failure8.q,strict_join.q,udf_add_months_error_1.q,groupby_cube2.q,groupby_cube1.q,groupby_rollup1.q,genericFileFormat.q,invalid_cast_from_binary_4.q,drop_invalid_constraint1.q,serde_regex.q,show_partitions1.q,check_constraint_nonboolean_expr.q,invalid_cast_from_binary_6.q,create_with_multi_pk_constraint.q,udf_field_wrong_type.q,groupby_grouping_sets4.q,groupby_grouping_sets3.q,insertsel_fail.q,udf_locate_wrong_type.q,orc_type_promotion1_acid.q,set_table_property.q,create_or_replace_view2.q,groupby_grouping_sets2.q,alter_view_failure.q,distinct_windowing_failure1.q,invalid_t_alter2.q,alter_table_constraint_invalid_fk_col1.q,invalid_varchar_length_2.q,authorization_show_grant_otheruser_alltabs.q,subquery_windowing_corr.q,compact_non_acid_table.q,authorization_view_4.q,authorization_disallow_transform.q,materialized_view_authorization_rebuild_other.q,authorization_fail_4.q,dbtxnmgr_nodblock.q,set_hiveconf_internal_variable1.q,input_part0_neg.q,udf_printf_wrong3.q,load_orc_negative2.q,druid_buckets.q,archive2.q,authorization_addjar.q,invalid_sum_syntax.q,insert_into_with_schema1.q,udf_add_months_error_2.q,dyn_part_max_per_node.q,authorization_revoke_table_fail1.q,udf_printf_wrong2.q,archive_multi3.q,udf_printf_wrong1.q,subquery_subquery_chain.q,authorization_view_disable_cbo_4.q,no_matching_udf.q,create_view_failure7.q,drop_native_udf.q,truncate_column_list_bucketing.q,authorization_u
[jira] [Commented] (HIVE-18739) Add support for Export from Acid table
[ https://issues.apache.org/jira/browse/HIVE-18739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16435202#comment-16435202 ] Hive QA commented on HIVE-18739: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12918479/HIVE-18739.20.patch {color:red}ERROR:{color} -1 due to build exiting with an error Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/10151/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/10151/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-10151/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Tests exited with: NonZeroExitCodeException Command 'bash /data/hiveptest/working/scratch/source-prep.sh' failed with exit status 1 and output '+ date '+%Y-%m-%d %T.%3N' 2018-04-12 08:59:09.407 + [[ -n /usr/lib/jvm/java-8-openjdk-amd64 ]] + export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64 + JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64 + export PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games + PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games + export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m ' + ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m ' + export 'MAVEN_OPTS=-Xmx1g ' + MAVEN_OPTS='-Xmx1g ' + cd /data/hiveptest/working/ + tee /data/hiveptest/logs/PreCommit-HIVE-Build-10151/source-prep.txt + [[ false == \t\r\u\e ]] + mkdir -p maven ivy + [[ git = \s\v\n ]] + [[ git = \g\i\t ]] + [[ -z master ]] + [[ -d apache-github-source-source ]] + [[ ! -d apache-github-source-source/.git ]] + [[ ! -d apache-github-source-source ]] + date '+%Y-%m-%d %T.%3N' 2018-04-12 08:59:09.409 + cd apache-github-source-source + git fetch origin + git reset --hard HEAD HEAD is now at 2e027cf Revert HIVE-18493/HIVE-18806 "Add display escape for CRLF... 
" TestBeeLineWithArgs changes + git clean -f -d + git checkout master Already on 'master' Your branch is up-to-date with 'origin/master'. + git reset --hard origin/master HEAD is now at 2e027cf Revert HIVE-18493/HIVE-18806 "Add display escape for CRLF... " TestBeeLineWithArgs changes + git merge --ff-only origin/master Already up-to-date. + date '+%Y-%m-%d %T.%3N' 2018-04-12 08:59:13.252 + rm -rf ../yetus_PreCommit-HIVE-Build-10151 + mkdir ../yetus_PreCommit-HIVE-Build-10151 + git gc + cp -R . ../yetus_PreCommit-HIVE-Build-10151 + mkdir /data/hiveptest/logs/PreCommit-HIVE-Build-10151/yetus + patchCommandPath=/data/hiveptest/working/scratch/smart-apply-patch.sh + patchFilePath=/data/hiveptest/working/scratch/build.patch + [[ -f /data/hiveptest/working/scratch/build.patch ]] + chmod +x /data/hiveptest/working/scratch/smart-apply-patch.sh + /data/hiveptest/working/scratch/smart-apply-patch.sh /data/hiveptest/working/scratch/build.patch error: patch failed: ql/src/java/org/apache/hadoop/hive/ql/parse/ImportSemanticAnalyzer.java:908 Falling back to three-way merge... Applied patch to 'ql/src/java/org/apache/hadoop/hive/ql/parse/ImportSemanticAnalyzer.java' with conflicts. Going to apply patch with: git apply -p0 error: patch failed: ql/src/java/org/apache/hadoop/hive/ql/parse/ImportSemanticAnalyzer.java:908 Falling back to three-way merge... Applied patch to 'ql/src/java/org/apache/hadoop/hive/ql/parse/ImportSemanticAnalyzer.java' with conflicts. U ql/src/java/org/apache/hadoop/hive/ql/parse/ImportSemanticAnalyzer.java + exit 1 ' {noformat} This message is automatically generated. 
ATTACHMENT ID: 12918479 - PreCommit-HIVE-Build > Add support for Export from Acid table > -- > > Key: HIVE-18739 > URL: https://issues.apache.org/jira/browse/HIVE-18739 > Project: Hive > Issue Type: Sub-task > Components: Transactions >Reporter: Eugene Koifman >Assignee: Eugene Koifman >Priority: Major > Attachments: HIVE-18739.01.patch, HIVE-18739.04.patch, > HIVE-18739.04.patch, HIVE-18739.06.patch, HIVE-18739.08.patch, > HIVE-18739.09.patch, HIVE-18739.10.patch, HIVE-18739.11.patch, > HIVE-18739.12.patch, HIVE-18739.13.patch, HIVE-18739.14.patch, > HIVE-18739.15.patch, HIVE-18739.16.patch, HIVE-18739.17.patch, > HIVE-18739.19.patch, HIVE-18739.20.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19160) Insert data into decimal column fails with Null Pointer Exception
[ https://issues.apache.org/jira/browse/HIVE-19160?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16435241#comment-16435241 ] Hive QA commented on HIVE-19160: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Findbugs executables are not available. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 44s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 32s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 7s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 2s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 27s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 47s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 5s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 5s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle 
{color} | {color:green} 1m 4s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 22s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 16s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 23m 3s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-10153/dev-support/hive-personality.sh | | git revision | master / 2e027cf | | Default Java | 1.8.0_111 | | modules | C: ql standalone-metastore U: . | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-10153/yetus.txt | | Powered by | Apache Yetus http://yetus.apache.org | This message was automatically generated. > Insert data into decimal column fails with Null Pointer Exception > - > > Key: HIVE-19160 > URL: https://issues.apache.org/jira/browse/HIVE-19160 > Project: Hive > Issue Type: Bug >Reporter: Janaki Lahorani >Assignee: Janaki Lahorani >Priority: Major > Attachments: HIVE-19160.1.patch > > > drop table if exists testDecimal; > create table testDecimal > (cId TINYINT, > cBigInt DECIMAL, > cInt DECIMAL, > cSmallInt DECIMAL, > cTinyint DECIMAL); > insert into testDecimal values > (1, > 1234567890123456789, > 1234567890, > 12345, > 123); > insert into testDecimal values > (2, > 1, > 2, > 3, > 4); > The second insert fails with a NullPointerException. 
> 2018-04-10T15:23:23,080 ERROR [5dba40ef-be49-4187-8a72-afbb46c41ecc main] > metastore.RetryingHMSHandler: java.lang.NullPointerException > at > org.apache.hadoop.hive.metastore.api.Decimal.compareTo(Decimal.java:318) > at > org.apache.hadoop.hive.metastore.columnstats.merge.DecimalColumnStatsMerger.merge(DecimalColumnStatsMerger.java:35) > at > org.apache.hadoop.hive.metastore.utils.MetaStoreUtils.mergeColStats(MetaStoreUtils.java:1040) > at > org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.set_aggr_stats_for(HiveMetaStore.java:7166) > at sun.reflect.GeneratedMethodAccessor43.invoke(Unknown Source) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(
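The trace above (truncated in this log) shows {{Decimal.compareTo}} being invoked while merging column statistics: the first insert leaves stats whose low/high values are populated, but one side of the merge can still be null. A minimal null-safe min/max sketch of the idea — the class and method names here are assumptions for illustration, not Hive's actual {{DecimalColumnStatsMerger}} code, which operates on metastore {{Decimal}} objects rather than {{BigDecimal}}:

```java
import java.math.BigDecimal;

// Hypothetical null-safe merge helpers; names and types are assumed,
// not taken from the Hive metastore source.
public class NullSafeMergeSketch {

    // Treat a null operand as "no statistic collected yet" instead of
    // dereferencing it, which is what triggers the NPE in the trace.
    static BigDecimal minOrNull(BigDecimal a, BigDecimal b) {
        if (a == null) return b;
        if (b == null) return a;
        return a.compareTo(b) <= 0 ? a : b;
    }

    static BigDecimal maxOrNull(BigDecimal a, BigDecimal b) {
        if (a == null) return b;
        if (b == null) return a;
        return a.compareTo(b) >= 0 ? a : b;
    }

    public static void main(String[] args) {
        // One merge operand missing, as after the second insert above.
        System.out.println(minOrNull(null, new BigDecimal("12345"))); // 12345
        System.out.println(maxOrNull(new BigDecimal("1"), new BigDecimal("4"))); // 4
    }
}
```

Guarding both operands before delegating to {{compareTo}} avoids the NPE regardless of which side of the merge is missing.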
[jira] [Commented] (HIVE-19160) Insert data into decimal column fails with Null Pointer Exception
[ https://issues.apache.org/jira/browse/HIVE-19160?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16435325#comment-16435325 ] Hive QA commented on HIVE-19160: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12918483/HIVE-19160.1.patch {color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 124 failed/errored test(s), 13673 tests executed *Failed tests:* {noformat} TestBeeLineDriver - did not produce a TEST-*.xml file (likely timed out) (batchId=253) TestDbNotificationListener - did not produce a TEST-*.xml file (likely timed out) (batchId=247) TestDummy - did not produce a TEST-*.xml file (likely timed out) (batchId=253) TestHCatHiveCompatibility - did not produce a TEST-*.xml file (likely timed out) (batchId=247) TestMiniDruidCliDriver - did not produce a TEST-*.xml file (likely timed out) (batchId=253) TestMiniDruidKafkaCliDriver - did not produce a TEST-*.xml file (likely timed out) (batchId=253) TestNegativeCliDriver - did not produce a TEST-*.xml file (likely timed out) (batchId=95) 
[nopart_insert.q,insert_into_with_schema.q,input41.q,having1.q,create_table_failure3.q,default_constraint_invalid_default_value.q,database_drop_not_empty_restrict.q,windowing_after_orderby.q,orderbysortby.q,subquery_select_distinct2.q,authorization_uri_alterpart_loc.q,udf_last_day_error_1.q,constraint_duplicate_name.q,create_table_failure4.q,alter_tableprops_external_with_notnull_constraint.q,semijoin5.q,udf_format_number_wrong4.q,deletejar.q,exim_11_nonpart_noncompat_sorting.q,show_tables_bad_db2.q,drop_func_nonexistent.q,nopart_load.q,alter_table_non_partitioned_table_cascade.q,check_constraint_subquery.q,load_wrong_fileformat.q,check_constraint_udtf.q,lockneg_try_db_lock_conflict.q,udf_field_wrong_args_len.q,create_table_failure2.q,create_with_fk_constraints_enforced.q,groupby2_map_skew_multi_distinct.q,authorization_update_noupdatepriv.q,show_columns2.q,authorization_insert_noselectpriv.q,orc_replace_columns3_acid.q,compare_double_bigint.q,authorization_set_nonexistent_conf.q,alter_rename_partition_failure3.q,split_sample_wrong_format2.q,create_with_fk_pk_same_tab.q,compare_double_bigint_2.q,authorization_show_roles_no_admin.q,materialized_view_authorization_rebuild_no_grant.q,unionLimit.q,authorization_revoke_table_fail2.q,duplicate_insert3.q,authorization_desc_table_nosel.q,stats_noscan_non_native.q,orc_change_serde_acid.q,create_or_replace_view7.q,exim_07_nonpart_noncompat_ifof.q,create_with_unique_constraints_enforced.q,udf_concat_ws_wrong2.q,fileformat_bad_class.q,merge_negative_2.q,exim_15_part_nonpart.q,authorization_not_owner_drop_view.q,external1.q,authorization_uri_insert.q,create_with_fk_wrong_ref.q,columnstats_tbllvl_incorrect_column.q,authorization_show_parts_nosel.q,authorization_not_owner_drop_tab.q,external2.q,authorization_deletejar.q,temp_table_create_like_partitions.q,udf_greatest_error_1.q,ptf_negative_AggrFuncsWithNoGBYNoPartDef.q,alter_view_as_select_not_exist.q,touch1.q,groupby3_map_skew_multi_distinct.q,insert_into_notnull_constraint.q,ex
change_partition_neg_partition_missing.q,groupby_cube_multi_gby.q,columnstats_tbllvl.q,drop_invalid_constraint2.q,alter_table_add_partition.q,update_not_acid.q,archive5.q,alter_table_constraint_invalid_pk_col.q,ivyDownload.q,udf_instr_wrong_type.q,bad_sample_clause.q,authorization_not_owner_drop_tab2.q,authorization_alter_db_owner.q,show_columns1.q,orc_type_promotion3.q,create_view_failure8.q,strict_join.q,udf_add_months_error_1.q,groupby_cube2.q,groupby_cube1.q,groupby_rollup1.q,genericFileFormat.q,invalid_cast_from_binary_4.q,drop_invalid_constraint1.q,serde_regex.q,show_partitions1.q,check_constraint_nonboolean_expr.q,invalid_cast_from_binary_6.q,create_with_multi_pk_constraint.q,udf_field_wrong_type.q,groupby_grouping_sets4.q,groupby_grouping_sets3.q,insertsel_fail.q,udf_locate_wrong_type.q,orc_type_promotion1_acid.q,set_table_property.q,create_or_replace_view2.q,groupby_grouping_sets2.q,alter_view_failure.q,distinct_windowing_failure1.q,invalid_t_alter2.q,alter_table_constraint_invalid_fk_col1.q,invalid_varchar_length_2.q,authorization_show_grant_otheruser_alltabs.q,subquery_windowing_corr.q,compact_non_acid_table.q,authorization_view_4.q,authorization_disallow_transform.q,materialized_view_authorization_rebuild_other.q,authorization_fail_4.q,dbtxnmgr_nodblock.q,set_hiveconf_internal_variable1.q,input_part0_neg.q,udf_printf_wrong3.q,load_orc_negative2.q,druid_buckets.q,archive2.q,authorization_addjar.q,invalid_sum_syntax.q,insert_into_with_schema1.q,udf_add_months_error_2.q,dyn_part_max_per_node.q,authorization_revoke_table_fail1.q,udf_printf_wrong2.q,archive_multi3.q,udf_printf_wrong1.q,subquery_subquery_chain.q,authorization_view_disable_cbo_4.q,no_matching_udf.q,create_view_failure7.q,drop_native_udf.q,truncate_column_list_bucketing.q,authorizati
[jira] [Commented] (HIVE-18609) Results cache invalidation based on ACID table updates
[ https://issues.apache.org/jira/browse/HIVE-18609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16435360#comment-16435360 ] Hive QA commented on HIVE-18609: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Findbugs executables are not available. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 46s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 35s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 32s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 3s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 19s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 2s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 33s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 33s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle 
{color} | {color:red} 0m 17s{color} | {color:red} common: The patch generated 1 new + 425 unchanged - 0 fixed = 426 total (was 425) {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 47s{color} | {color:red} ql: The patch generated 5 new + 700 unchanged - 0 fixed = 705 total (was 700) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 17s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 15s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 19m 0s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-10157/dev-support/hive-personality.sh | | git revision | master / 2e027cf | | Default Java | 1.8.0_111 | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-10157/yetus/diff-checkstyle-common.txt | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-10157/yetus/diff-checkstyle-ql.txt | | modules | C: common itests ql U: . | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-10157/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. 
> Results cache invalidation based on ACID table updates > -- > > Key: HIVE-18609 > URL: https://issues.apache.org/jira/browse/HIVE-18609 > Project: Hive > Issue Type: Sub-task >Affects Versions: 3.1.0 >Reporter: Jason Dere >Assignee: Jason Dere >Priority: Major > Attachments: HIVE-18609.1.patch, HIVE-18609.2.patch, > HIVE-18609.3.patch > > > Look into using the materialized view invalidation mechanisms to > automatically invalidate queries in the results cache if the underlying > tables used in the cached queries have been modified. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
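The invalidation idea described in the issue — discard a cached result once any underlying ACID table has committed writes newer than the snapshot the result was computed from — can be sketched as follows. All class, field, and method names here are hypothetical, not the patch's actual classes:

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of ACID-based results-cache invalidation (names assumed):
// a cached entry remembers the per-table write ids it was built against
// and reports itself invalid once a newer write is observed.
public class CacheInvalidationSketch {
    // table name -> last committed write id (stand-in for metastore state)
    static Map<String, Long> latestWriteId = new HashMap<>();

    static class CacheEntry {
        final Map<String, Long> snapshot; // write ids at caching time
        CacheEntry(Map<String, Long> snapshot) {
            this.snapshot = new HashMap<>(snapshot);
        }
        boolean isValid() {
            for (Map.Entry<String, Long> e : snapshot.entrySet()) {
                // Any table written since caching invalidates the entry.
                if (!e.getValue().equals(latestWriteId.get(e.getKey()))) {
                    return false;
                }
            }
            return true;
        }
    }

    public static void main(String[] args) {
        latestWriteId.put("t1", 7L);
        CacheEntry cached = new CacheEntry(latestWriteId);
        System.out.println(cached.isValid()); // true: no writes since caching
        latestWriteId.put("t1", 8L);          // an ACID write commits
        System.out.println(cached.isValid()); // false: entry must be dropped
    }
}
```

The real patch ties this check into the materialized-view invalidation machinery rather than a plain map, but the validity predicate is the same shape: compare the snapshot an entry was built from against the current committed state.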
[jira] [Updated] (HIVE-12192) Hive should carry out timestamp computations in UTC
[ https://issues.apache.org/jira/browse/HIVE-12192?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez updated HIVE-12192: --- Attachment: HIVE-12192.03.patch > Hive should carry out timestamp computations in UTC > --- > > Key: HIVE-12192 > URL: https://issues.apache.org/jira/browse/HIVE-12192 > Project: Hive > Issue Type: Sub-task > Components: Hive >Reporter: Ryan Blue >Assignee: Jesus Camacho Rodriguez >Priority: Major > Labels: timestamp > Attachments: HIVE-12192.01.patch, HIVE-12192.02.patch, > HIVE-12192.03.patch, HIVE-12192.patch > > > Hive currently uses the "local" time of a java.sql.Timestamp to represent the > SQL data type TIMESTAMP WITHOUT TIME ZONE. The purpose is to be able to use > {{Timestamp#getYear()}} and similar methods to implement SQL functions like > {{year}}. > When the SQL session's time zone is a DST zone, such as America/Los_Angeles > that alternates between PST and PDT, there are times that cannot be > represented because the effective zone skips them. > {code} > hive> select TIMESTAMP '2015-03-08 02:10:00.101'; > 2015-03-08 03:10:00.101 > {code} > Using UTC instead of the SQL session time zone as the underlying zone for a > java.sql.Timestamp avoids this bug, while still returning correct values for > {{getYear}} etc. Using UTC as the convenience representation (timestamp > without time zone has no real zone) would make timestamp calculations more > consistent and avoid similar problems in the future. > Notably, this would break the {{unix_timestamp}} UDF that specifies the > result is with respect to ["the default timezone and default > locale"|https://cwiki.apache.org/confluence/display/Hive/LanguageManual+UDF#LanguageManualUDF-DateFunctions]. > That function would need to be updated to use the > {{System.getProperty("user.timezone")}} zone. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
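The DST gap described in the issue is reproducible directly with {{java.sql.Timestamp}}: parsing a wall-clock time that falls inside the spring-forward gap silently shifts it forward an hour under a DST zone, while a UTC default preserves it. A small demo, assuming the JVM allows changing the default time zone at runtime:

```java
import java.sql.Timestamp;
import java.util.TimeZone;

// Demonstrates the skipped-hour behavior from the issue description.
public class DstGapDemo {
    public static void main(String[] args) {
        TimeZone.setDefault(TimeZone.getTimeZone("America/Los_Angeles"));
        // 2015-03-08 02:10 falls inside the PST->PDT "spring forward" gap;
        // lenient Calendar normalization pushes it to 03:10.
        Timestamp ts = Timestamp.valueOf("2015-03-08 02:10:00.101");
        System.out.println(ts); // 2015-03-08 03:10:00.101

        // With UTC as the underlying zone, every wall-clock value exists,
        // so the literal round-trips unchanged.
        TimeZone.setDefault(TimeZone.getTimeZone("UTC"));
        System.out.println(Timestamp.valueOf("2015-03-08 02:10:00.101"));
        // 2015-03-08 02:10:00.101
    }
}
```

This is exactly the {{select TIMESTAMP '2015-03-08 02:10:00.101'}} example from the description: the session-zone representation cannot express times the zone skips, whereas a UTC-backed representation can.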
[jira] [Commented] (HIVE-18609) Results cache invalidation based on ACID table updates
[ https://issues.apache.org/jira/browse/HIVE-18609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16435464#comment-16435464 ] Hive QA commented on HIVE-18609: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12918505/HIVE-18609.3.patch {color:green}SUCCESS:{color} +1 due to 10 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 130 failed/errored test(s), 14014 tests executed *Failed tests:* {noformat} TestBeeLineDriver - did not produce a TEST-*.xml file (likely timed out) (batchId=253) TestCliDriver - did not produce a TEST-*.xml file (likely timed out) (batchId=37) [bool_literal.q,vectorization_limit.q,autoColumnStats_9.q,udaf_percentile.q,binary_table_colserde.q,nested_complex_additional.q,vector_elt.q,windowing_gby2.q,schema_evol_orc_acid_table_llap_io.q,drop_table_purge.q,vector_when_case_null.q,update_tmp_table.q,orc_ppd_boolean.q,stats_noscan_2.q,schema_evol_text_vecrow_table_llap_io.q,partition_timestamp.q,masking_disablecbo_3.q,nullscript.q,vector_decimal_trailing.q,columnstats_partlvl.q,literal_double.q,udf_index.q,join_cond_pushdown_1.q,schema_evol_text_vec_part_all_complex.q,drop_table2.q,results_cache_invalidation.q,udf8.q,load_dyn_part6.q,inputddl4.q,alter_skewed_table.q] TestDbNotificationListener - did not produce a TEST-*.xml file (likely timed out) (batchId=247) TestDummy - did not produce a TEST-*.xml file (likely timed out) (batchId=253) TestHCatHiveCompatibility - did not produce a TEST-*.xml file (likely timed out) (batchId=247) TestMiniDruidCliDriver - did not produce a TEST-*.xml file (likely timed out) (batchId=253) TestMiniDruidKafkaCliDriver - did not produce a TEST-*.xml file (likely timed out) (batchId=253) TestMiniLlapLocalCliDriver - did not produce a TEST-*.xml file (likely timed out) (batchId=163) 
[sysdb.q,tez_join.q,vectorization_limit.q,vectorized_rcfile_columnar.q,vector_reuse_scratchcols.q,schema_evol_orc_acid_table_llap_io.q,delete_where_non_partitioned.q,partialdhj.q,orc_merge11.q,schema_evol_orc_acid_table.q,vector_when_case_null.q,orc_merge_incompat_schema.q,vectorization_11.q,update_tmp_table.q,schema_evol_text_vecrow_table_llap_io.q,vector_reduce2.q,vector_interval_mapjoin.q,schema_evol_orc_acidvec_table_update_llap_io.q,tez_joins_explain.q,vector_windowing_order_null.q,vector_decimal_trailing.q,tez_union.q,vector_aggregate_9.q,vector_groupby_grouping_sets_limit.q,materialized_view_rewrite_ssb.q,results_cache_invalidation.q,default_constraint.q,offset_limit.q,llap_acid_fast.q,delete_where_partitioned.q] TestNonCatCallsWithCatalog - did not produce a TEST-*.xml file (likely timed out) (batchId=217) TestSequenceFileReadWrite - did not produce a TEST-*.xml file (likely timed out) (batchId=247) TestTezPerfCliDriver - did not produce a TEST-*.xml file (likely timed out) (batchId=253) org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[llap_smb] (batchId=153) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[groupby_groupingset_bug] (batchId=174) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[mergejoin] (batchId=168) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[tez_smb_1] (batchId=171) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[tez_smb_main] (batchId=160) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[update_access_time_non_current_db] (batchId=172) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vectorized_dynamic_semijoin_reduction] (batchId=155) org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver[explainanalyze_5] (batchId=105) org.apache.hadoop.hive.cli.TestNegativeCliDriver.org.apache.hadoop.hive.cli.TestNegativeCliDriver (batchId=95) 
org.apache.hadoop.hive.cli.TestNegativeCliDriver.org.apache.hadoop.hive.cli.TestNegativeCliDriver (batchId=96) org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[alter_notnull_constraint_violation] (batchId=96) org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[default_constraint_invalid_default_value_type] (batchId=96) org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[insert_into_acid_notnull] (batchId=95) org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[insert_into_notnull_constraint] (batchId=95) org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[insert_multi_into_notnull] (batchId=96) org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[insert_overwrite_notnull_constraint] (batchId=96) org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[smb_bucketmapjoin] (batchId=96) org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[smb_mapjoin_14] (batchId=96) org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[sortmerge_mapjoin_mismatch_1] (batchId=96) org.apache.hadoop.hive.cli.TestNegati
[jira] [Commented] (HIVE-19098) Hive: impossible to insert data in a parquet's table with "union all" in the select query
[ https://issues.apache.org/jira/browse/HIVE-19098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16435469#comment-16435469 ] ACOSS commented on HIVE-19098: -- Hello, we use Hadoop 2.8.3 and Hive 2.3.2. Best regards > Hive: impossible to insert data in a parquet's table with "union all" in the > select query > - > > Key: HIVE-19098 > URL: https://issues.apache.org/jira/browse/HIVE-19098 > Project: Hive > Issue Type: Bug > Components: File Formats, Hive >Affects Versions: 2.3.2 >Reporter: ACOSS >Assignee: Janaki Lahorani >Priority: Minor > > Hello > We have a Parquet table. > We want to insert data into the table with a query like this: > "insert into my_table select * from my_select_table_1 union all select * from > my_select_table_2" > It fails with the error: > 2018-04-03 15:49:28,898 FATAL [IPC Server handler 2 on 38465] > org.apache.hadoop.mapred.TaskAttemptListenerImpl: Task: > attempt_1522749003448_0028_m_00_0 - exited : java.io.IOException: > java.lang.reflect.InvocationTargetException > at > org.apache.hadoop.hive.io.HiveIOExceptionHandlerChain.handleRecordReaderCreationException(HiveIOExceptionHandlerChain.java:97) > at > org.apache.hadoop.hive.io.HiveIOExceptionHandlerUtil.handleRecordReaderCreationException(HiveIOExceptionHandlerUtil.java:57) > at > org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.initNextRecordReader(HadoopShimsSecure.java:271) > at > org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.(HadoopShimsSecure.java:217) > at > org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileInputFormatShim.getRecordReader(HadoopShimsSecure.java:345) > at > org.apache.hadoop.hive.ql.io.CombineHiveInputFormat.getRecordReader(CombineHiveInputFormat.java:695) > at > org.apache.hadoop.mapred.MapTask$TrackedRecordReader.(MapTask.java:169) > at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:432) > at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343) > at 
org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:175) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:422) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1836) > at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:169) > Caused by: java.lang.reflect.InvocationTargetException > at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) > at > sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) > at > sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) > at java.lang.reflect.Constructor.newInstance(Constructor.java:423) > at > org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.initNextRecordReader(HadoopShimsSecure.java:257) > ... 11 more > Caused by: java.lang.NullPointerException > at java.util.AbstractCollection.addAll(AbstractCollection.java:343) > at > org.apache.hadoop.hive.ql.io.parquet.ProjectionPusher.pushProjectionsAndFilters(ProjectionPusher.java:118) > at > org.apache.hadoop.hive.ql.io.parquet.ProjectionPusher.pushProjectionsAndFilters(ProjectionPusher.java:189) > at > org.apache.hadoop.hive.ql.io.parquet.ParquetRecordReaderBase.getSplit(ParquetRecordReaderBase.java:75) > at > org.apache.hadoop.hive.ql.io.parquet.read.ParquetRecordReaderWrapper.(ParquetRecordReaderWrapper.java:75) > at > org.apache.hadoop.hive.ql.io.parquet.read.ParquetRecordReaderWrapper.(ParquetRecordReaderWrapper.java:60) > at > org.apache.hadoop.hive.ql.io.parquet.MapredParquetInputFormat.getRecordReader(MapredParquetInputFormat.java:75) > at > org.apache.hadoop.hive.ql.io.CombineHiveRecordReader.(CombineHiveRecordReader.java:99) > ... 
16 more > > Scenario: > create table t1 (col1 string); > create table t2 (col1 string); > insert into t2 values ('2017'); > insert into t1 values ('2017'); > create table t3 (col1 string) STORED AS PARQUETFILE; > INSERT into t3 select col1 from t1 union all select col1 from t2; -- This message was sent by Atlassian JIRA (v7.6.3#76005)
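The root {{NullPointerException}} at {{AbstractCollection.addAll}} in the trace above suggests {{ProjectionPusher.pushProjectionsAndFilters}} looked up a per-path alias collection that was missing for the union-all plan and passed the null straight to {{addAll}}. A hedged sketch of the failure mode and the obvious guard — variable and method names here are assumed, not Hive's actual {{ProjectionPusher}} code:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collection;
import java.util.List;

// Illustrates why addAll(null) throws, and the guard the trace implies
// is missing (names are hypothetical, not from the Hive source).
public class ProjectionPusherSketch {

    // aliasesForPath stands in for mapWork.getPathToAliases().get(path),
    // which can be null for paths introduced by a union-all plan.
    static List<String> collectAliases(Collection<String> aliasesForPath) {
        List<String> aliases = new ArrayList<>();
        if (aliasesForPath != null) { // without this check: NPE in addAll
            aliases.addAll(aliasesForPath);
        }
        return aliases;
    }

    public static void main(String[] args) {
        System.out.println(collectAliases(null));                    // []
        System.out.println(collectAliases(Arrays.asList("t1", "t2"))); // [t1, t2]
    }
}
```

Whether the proper fix is this guard or ensuring the union-all plan registers an alias for every input path is a question for the actual patch; the sketch only localizes the crash.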
[jira] [Commented] (HIVE-19133) HS2 WebUI phase-wise performance metrics not showing correctly
[ https://issues.apache.org/jira/browse/HIVE-19133?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16435498#comment-16435498 ] Hive QA commented on HIVE-19133: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Findbugs executables are not available. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 44s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 25s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 40s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 58s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 23s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 5s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 43s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 43s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} 
checkstyle {color} | {color:green} 0m 57s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 21s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 15s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 19m 5s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-10158/dev-support/hive-personality.sh | | git revision | master / 2e027cf | | Default Java | 1.8.0_111 | | modules | C: ql service U: . | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-10158/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. > HS2 WebUI phase-wise performance metrics not showing correctly > -- > > Key: HIVE-19133 > URL: https://issues.apache.org/jira/browse/HIVE-19133 > Project: Hive > Issue Type: Bug > Components: HiveServer2, Web UI >Reporter: Bharathkrishna Guruvayoor Murali >Assignee: Bharathkrishna Guruvayoor Murali >Priority: Major > Attachments: HIVE-19133.1.patch, HIVE-19133.2.patch, WebUI-compile > time query metrics.png > > > The query specific WebUI metrics (go to drilldown -> performance logging) are > not showing up in the correct phase and are often mixed up. > Attaching screenshot. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19190) Improve Logging for SemanticException Handling
[ https://issues.apache.org/jira/browse/HIVE-19190?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] BELUGA BEHR updated HIVE-19190: --- Description: Please improve the logging for queries that fail with SemanticException. For example, when performing an action on a table that does not exist. The most common reason why this happens is that someone fat-fingers the table/column name. It is not a system error per se, but a user validation error. This is not something the cluster administrator should have to worry about. Yet, Hive performs some pretty extreme logging on the matter. I have attached to this JIRA the logging produced by a single submission of the following query: {code:sql} select * from madeup'; {code} For SemanticException exceptions, please print the Exception {{getMessage()}} to the server INFO logging so that the query's life-cycle can be traced, but do not blast ERRORs and stack traces to the log file. was: Please improve the logging for queries that fail with SemanticException. For example, when performing an action on a table that does not exist. The most common reason why this happens is that someone fat-fingers the table/column name. It is not a system error per se, but a user validation error. This is not something the cluster administrator should have to worry about. Yet, Hive performs some pretty extreme logging on the matter. I have attached to this JIRA the logging produced by a single submission of the following query: {code:sql} select * from madeup'; {code} For SemanticException exceptions, please print the Exception {{getMessage()}} to the server INFO logging so that the query's life-cycle can be traced, but do not blast ERRORs and stack traces to the log file. 
> Improve Logging for SemanticException Handling > -- > > Key: HIVE-19190 > URL: https://issues.apache.org/jira/browse/HIVE-19190 > Project: Hive > Issue Type: Improvement > Components: HiveServer2 >Affects Versions: 3.0.0, 2.3.2 >Reporter: BELUGA BEHR >Priority: Major > Attachments: table_not_found_example.txt > > > Please improve the logging for queries that fail with SemanticException. For > example, when performing an action on a table that does not exist. The most > common reason why this happens is that someone fat-fingers the table/column > name. It is not a system error per se, but a user validation error. This is > not something the cluster administrator should have to worry about. Yet, > Hive performs some pretty extreme logging on the matter. I have attached to > this JIRA the logging produced by a single submission of the following query: > {code:sql} > select * from madeup'; > {code} > > For SemanticException exceptions, please print the Exception {{getMessage()}} > to the server INFO logging so that the query's life-cycle can be traced, but > do not blast ERRORs and stack traces to the log file. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
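The logging policy requested above can be sketched in a few lines. This is a hedged illustration, not Hive's actual error-handling code: the class and method names are invented, and the "INFO" prefix merely stands in for a real logger call at INFO level.

```java
// Sketch of the requested behaviour: a user-level SemanticException
// (e.g. a mistyped table name) is reported by message only, at INFO,
// while system errors would keep their ERROR-level stack traces.
// Names here are invented for illustration.
class SemanticErrorLogging {
    // Render the log line for a user validation error: message, no stack trace.
    static String userErrorLine(Exception e) {
        return "INFO  FAILED: SemanticException " + e.getMessage();
    }
}
```

Tracing the query's life-cycle then needs only this one line per failed compilation, instead of a multi-page stack trace.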
[jira] [Updated] (HIVE-19190) Improve Logging for SemanticException Handling
[ https://issues.apache.org/jira/browse/HIVE-19190?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] BELUGA BEHR updated HIVE-19190: --- Attachment: (was: table_not_found_example.txt) > Improve Logging for SemanticException Handling > -- > > Key: HIVE-19190 > URL: https://issues.apache.org/jira/browse/HIVE-19190 > Project: Hive > Issue Type: Improvement > Components: HiveServer2 >Affects Versions: 3.0.0, 2.3.2 >Reporter: BELUGA BEHR >Priority: Major > > Please improve the logging for queries that fail with SemanticException. For > example, when performing an action on a table that does not exist. The most > common reason why this happens is that someone fat-fingers the table/column > name. It is not a system error per se, but a user validation error. This is > not something the cluster administrator should have to worry about. Yet, > Hive performs some pretty extreme logging on the matter. I have attached to > this JIRA the logging produced by a single submission of the following query: > {code:sql} > select * from madeup'; > {code} > > For SemanticException exceptions, please print the Exception {{getMessage()}} > to the server INFO logging so that the query's life-cycle can be traced, but > do not blast ERRORs and stack traces to the log file. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19190) Improve Logging for SemanticException Handling
[ https://issues.apache.org/jira/browse/HIVE-19190?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] BELUGA BEHR updated HIVE-19190: --- Attachment: table_not_found_example.txt > Improve Logging for SemanticException Handling > -- > > Key: HIVE-19190 > URL: https://issues.apache.org/jira/browse/HIVE-19190 > Project: Hive > Issue Type: Improvement > Components: HiveServer2 >Affects Versions: 3.0.0, 2.3.2 >Reporter: BELUGA BEHR >Priority: Major > Attachments: table_not_found_example.txt > > > Please improve the logging for queries that fail with SemanticException. For > example, when performing an action on a table that does not exist. The most > common reason why this happens is that someone fat-fingers the table/column > name. It is not a system error per se, but a user validation error. This is > not something the cluster administrator should have to worry about. Yet, > Hive performs some pretty extreme logging on the matter. I have attached to > this JIRA the logging produced by a single submission of the following query: > {code:sql} > select * from madeup'; > {code} > > For SemanticException exceptions, please print the Exception {{getMessage()}} > to the server INFO logging so that the query's life-cycle can be traced, but > do not blast ERRORs and stack traces to the log file. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19104) When test MetaStore is started with retry the instances should be independent
[ https://issues.apache.org/jira/browse/HIVE-19104?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16435566#comment-16435566 ] Sahil Takiar commented on HIVE-19104: - Ok, what about just returning a custom {{javax.jdo.option.ConnectionURL}} from {{MetaStoreTestUtils}} rather than relying on the value in {{data/conf/hive-site.xml}}? This would avoid the need to do the string replacement. Rest of the patch LGTM. > When test MetaStore is started with retry the instances should be independent > - > > Key: HIVE-19104 > URL: https://issues.apache.org/jira/browse/HIVE-19104 > Project: Hive > Issue Type: Improvement >Reporter: Peter Vary >Assignee: Peter Vary >Priority: Major > Attachments: HIVE-19104.2.patch, HIVE-19104.3.patch, > HIVE-19104.4.patch, HIVE-19104.patch > > > When multiple MetaStore instances are started with > {{MetaStoreTestUtils.startMetaStoreWithRetry}} currently they use the same > JDBC url, and warehouse directory. This can cause problems in the tests -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (HIVE-19131) DecimalColumnStatsMergerTest comparison review
[ https://issues.apache.org/jira/browse/HIVE-19131?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Laszlo Bodor reassigned HIVE-19131: --- Assignee: Laszlo Bodor > DecimalColumnStatsMergerTest comparison review > -- > > Key: HIVE-19131 > URL: https://issues.apache.org/jira/browse/HIVE-19131 > Project: Hive > Issue Type: Bug >Reporter: Laszlo Bodor >Assignee: Laszlo Bodor >Priority: Major > > DecimalColumnStatsMergerTest has a strange comparison logic, which needs to > be reviewed. > Regarding low and high values, it uses compareTo with the same direction, > which seems to be incorrect: old.compareTo(new) > 0 -> pick old value in both > cases > {code:java} > Decimal lowValue = aggregateData.getLowValue() != null && > (aggregateData.getLowValue().compareTo(newData.getLowValue()) > 0) ? > aggregateData .getLowValue() : newData.getLowValue(); > aggregateData.setLowValue(lowValue); > Decimal highValue = aggregateData.getHighValue() != null && > (aggregateData.getHighValue().compareTo(newData.getHighValue()) > 0) ? > aggregateData .getHighValue() : newData.getHighValue(); > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
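The merge logic quoted in the issue above uses {{compareTo(...) > 0}} for both bounds, so the low-value branch keeps the *larger* value. A corrected sketch is below; note that {{java.math.BigDecimal}} stands in for the Thrift-generated {{Decimal}} type, and the helper names are invented for illustration. The point is only that the low-value merge needs the opposite comparison direction from the high-value merge:

```java
import java.math.BigDecimal;

// Sketch of the intended merge semantics for column statistics:
// the merged low value is the minimum of the two lows, the merged
// high value is the maximum of the two highs. BigDecimal is used
// here as a stand-in for the Thrift Decimal type.
class ColumnStatsMergeSketch {
    // Keep the smaller of the aggregated and newly seen low value.
    static BigDecimal mergeLow(BigDecimal oldLow, BigDecimal newLow) {
        return oldLow != null && oldLow.compareTo(newLow) < 0 ? oldLow : newLow;
    }

    // Keep the larger of the aggregated and newly seen high value.
    static BigDecimal mergeHigh(BigDecimal oldHigh, BigDecimal newHigh) {
        return oldHigh != null && oldHigh.compareTo(newHigh) > 0 ? oldHigh : newHigh;
    }
}
```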
[jira] [Commented] (HIVE-18883) Add findbugs to yetus pre-commit checks
[ https://issues.apache.org/jira/browse/HIVE-18883?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16435618#comment-16435618 ] Sahil Takiar commented on HIVE-18883: - Looks like this needs to be deployed manually: on {{hiveptest-server-upstream}}, the Hive repo under {{/home/hiveptest/hive}} doesn't have this change. Seems this repo has to be manually updated. [~pvary], [~szita] how do we usually deploy these types of changes? I noticed the most recent change is HIVE-18706. Is simply doing a {{git pull}} sufficient? > Add findbugs to yetus pre-commit checks > --- > > Key: HIVE-18883 > URL: https://issues.apache.org/jira/browse/HIVE-18883 > Project: Hive > Issue Type: Sub-task > Components: Testing Infrastructure >Reporter: Sahil Takiar >Assignee: Sahil Takiar >Priority: Major > Fix For: 3.1.0 > > Attachments: HIVE-18883.1.patch, HIVE-18883.2.patch > > > We should enable FindBugs for our YETUS pre-commit checks, this will help > overall code quality and should decrease the overall number of bugs in Hive. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18862) qfiles: prepare .q files for using datasets
[ https://issues.apache.org/jira/browse/HIVE-18862?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Laszlo Bodor updated HIVE-18862: Attachment: HIVE-18862.08.patch > qfiles: prepare .q files for using datasets > --- > > Key: HIVE-18862 > URL: https://issues.apache.org/jira/browse/HIVE-18862 > Project: Hive > Issue Type: Sub-task >Reporter: Laszlo Bodor >Assignee: Laszlo Bodor >Priority: Major > Attachments: HIVE-18862.01.patch, HIVE-18862.02.patch, > HIVE-18862.03.patch, HIVE-18862.04.patch, HIVE-18862.05.patch, > HIVE-18862.06.patch, HIVE-18862.07.patch, HIVE-18862.08.patch > > > # Parse .q files for source table usage > # Add needed dataset annotations > # Remove create table statements from "q_test_init.sql" like files > # Handle oncoming issues related to dataset introduction -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19133) HS2 WebUI phase-wise performance metrics not showing correctly
[ https://issues.apache.org/jira/browse/HIVE-19133?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16435629#comment-16435629 ] Hive QA commented on HIVE-19133: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12918511/HIVE-19133.2.patch {color:red}ERROR:{color} -1 due to no test(s) being added or modified. {color:red}ERROR:{color} -1 due to 78 failed/errored test(s), 13672 tests executed *Failed tests:* {noformat} TestBeeLineDriver - did not produce a TEST-*.xml file (likely timed out) (batchId=253) TestDbNotificationListener - did not produce a TEST-*.xml file (likely timed out) (batchId=247) TestDummy - did not produce a TEST-*.xml file (likely timed out) (batchId=253) TestHCatHiveCompatibility - did not produce a TEST-*.xml file (likely timed out) (batchId=247) TestMiniDruidCliDriver - did not produce a TEST-*.xml file (likely timed out) (batchId=253) TestMiniDruidKafkaCliDriver - did not produce a TEST-*.xml file (likely timed out) (batchId=253) TestNegativeCliDriver - did not produce a TEST-*.xml file (likely timed out) (batchId=95) 
[nopart_insert.q,insert_into_with_schema.q,input41.q,having1.q,create_table_failure3.q,default_constraint_invalid_default_value.q,database_drop_not_empty_restrict.q,windowing_after_orderby.q,orderbysortby.q,subquery_select_distinct2.q,authorization_uri_alterpart_loc.q,udf_last_day_error_1.q,constraint_duplicate_name.q,create_table_failure4.q,alter_tableprops_external_with_notnull_constraint.q,semijoin5.q,udf_format_number_wrong4.q,deletejar.q,exim_11_nonpart_noncompat_sorting.q,show_tables_bad_db2.q,drop_func_nonexistent.q,nopart_load.q,alter_table_non_partitioned_table_cascade.q,check_constraint_subquery.q,load_wrong_fileformat.q,check_constraint_udtf.q,lockneg_try_db_lock_conflict.q,udf_field_wrong_args_len.q,create_table_failure2.q,create_with_fk_constraints_enforced.q,groupby2_map_skew_multi_distinct.q,authorization_update_noupdatepriv.q,show_columns2.q,authorization_insert_noselectpriv.q,orc_replace_columns3_acid.q,compare_double_bigint.q,authorization_set_nonexistent_conf.q,alter_rename_partition_failure3.q,split_sample_wrong_format2.q,create_with_fk_pk_same_tab.q,compare_double_bigint_2.q,authorization_show_roles_no_admin.q,materialized_view_authorization_rebuild_no_grant.q,unionLimit.q,authorization_revoke_table_fail2.q,duplicate_insert3.q,authorization_desc_table_nosel.q,stats_noscan_non_native.q,orc_change_serde_acid.q,create_or_replace_view7.q,exim_07_nonpart_noncompat_ifof.q,create_with_unique_constraints_enforced.q,udf_concat_ws_wrong2.q,fileformat_bad_class.q,merge_negative_2.q,exim_15_part_nonpart.q,authorization_not_owner_drop_view.q,external1.q,authorization_uri_insert.q,create_with_fk_wrong_ref.q,columnstats_tbllvl_incorrect_column.q,authorization_show_parts_nosel.q,authorization_not_owner_drop_tab.q,external2.q,authorization_deletejar.q,temp_table_create_like_partitions.q,udf_greatest_error_1.q,ptf_negative_AggrFuncsWithNoGBYNoPartDef.q,alter_view_as_select_not_exist.q,touch1.q,groupby3_map_skew_multi_distinct.q,insert_into_notnull_constraint.q,ex
change_partition_neg_partition_missing.q,groupby_cube_multi_gby.q,columnstats_tbllvl.q,drop_invalid_constraint2.q,alter_table_add_partition.q,update_not_acid.q,archive5.q,alter_table_constraint_invalid_pk_col.q,ivyDownload.q,udf_instr_wrong_type.q,bad_sample_clause.q,authorization_not_owner_drop_tab2.q,authorization_alter_db_owner.q,show_columns1.q,orc_type_promotion3.q,create_view_failure8.q,strict_join.q,udf_add_months_error_1.q,groupby_cube2.q,groupby_cube1.q,groupby_rollup1.q,genericFileFormat.q,invalid_cast_from_binary_4.q,drop_invalid_constraint1.q,serde_regex.q,show_partitions1.q,check_constraint_nonboolean_expr.q,invalid_cast_from_binary_6.q,create_with_multi_pk_constraint.q,udf_field_wrong_type.q,groupby_grouping_sets4.q,groupby_grouping_sets3.q,insertsel_fail.q,udf_locate_wrong_type.q,orc_type_promotion1_acid.q,set_table_property.q,create_or_replace_view2.q,groupby_grouping_sets2.q,alter_view_failure.q,distinct_windowing_failure1.q,invalid_t_alter2.q,alter_table_constraint_invalid_fk_col1.q,invalid_varchar_length_2.q,authorization_show_grant_otheruser_alltabs.q,subquery_windowing_corr.q,compact_non_acid_table.q,authorization_view_4.q,authorization_disallow_transform.q,materialized_view_authorization_rebuild_other.q,authorization_fail_4.q,dbtxnmgr_nodblock.q,set_hiveconf_internal_variable1.q,input_part0_neg.q,udf_printf_wrong3.q,load_orc_negative2.q,druid_buckets.q,archive2.q,authorization_addjar.q,invalid_sum_syntax.q,insert_into_with_schema1.q,udf_add_months_error_2.q,dyn_part_max_per_node.q,authorization_revoke_table_fail1.q,udf_printf_wrong2.q,archive_multi3.q,udf_printf_wrong1.q,subquery_subquery_chain.q,authorization_view_disable_cbo_4.q,no_matching_udf.q,create_view_failure7.q,drop_native_udf.q,truncate_column_list_bucketing.q,authorization_u
[jira] [Commented] (HIVE-19131) DecimalColumnStatsMergerTest comparison review
[ https://issues.apache.org/jira/browse/HIVE-19131?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16435636#comment-16435636 ] Zoltan Haindrich commented on HIVE-19131: - We've just taken a closer look at the comparison itself with [~abstractdog]; and it seems like even the compareTo is problematic... the compareTo method belongs to {{Decimal}}; but it seems it [compares the unscaled value before the scale|https://github.com/apache/hive/blob/2e027cff7a064b64019b2a2df54a614b018be15f/standalone-metastore/src/gen/thrift/gen-javabean/org/apache/hadoop/hive/metastore/api/Decimal.java#L329]; which could probably lead to interesting results... > DecimalColumnStatsMergerTest comparison review > -- > > Key: HIVE-19131 > URL: https://issues.apache.org/jira/browse/HIVE-19131 > Project: Hive > Issue Type: Bug >Reporter: Laszlo Bodor >Assignee: Laszlo Bodor >Priority: Major > > DecimalColumnStatsMergerTest has a strange comparison logic, which needs to > be reviewed. > Regarding low and high values, it uses compareTo with the same direction, > which seems to be incorrect: old.compareTo(new) > 0 -> pick old value in both > cases > {code:java} > Decimal lowValue = aggregateData.getLowValue() != null && > (aggregateData.getLowValue().compareTo(newData.getLowValue()) > 0) ? > aggregateData .getLowValue() : newData.getLowValue(); > aggregateData.setLowValue(lowValue); > Decimal highValue = aggregateData.getHighValue() != null && > (aggregateData.getHighValue().compareTo(newData.getHighValue()) > 0) ? > aggregateData .getHighValue() : newData.getHighValue(); > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
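Why comparing the unscaled value before the scale is unsafe can be shown with a small example. In the Thrift representation, 1.5 is stored as (unscaled=15, scale=1) and 2 as (unscaled=2, scale=0): a comparison that looks at the unscaled bytes first claims 1.5 > 2. The sketch below uses {{java.math.BigDecimal}} to model the two fields; the helper names are invented for illustration and do not match the generated Thrift code:

```java
import java.math.BigDecimal;

// Demonstrates the ordering bug described above. BigDecimal exposes the
// same (unscaledValue, scale) pair as the Thrift Decimal, so we can
// reproduce a field-by-field comparison next to the correct numeric one.
class UnscaledCompareSketch {
    // Buggy order: decided by the unscaled value whenever they differ,
    // only falling back to the scale on a tie.
    static int compareUnscaledFirst(BigDecimal a, BigDecimal b) {
        int c = a.unscaledValue().compareTo(b.unscaledValue());
        return c != 0 ? c : Integer.compare(a.scale(), b.scale());
    }

    // Correct, scale-aware numeric comparison.
    static int compareNumeric(BigDecimal a, BigDecimal b) {
        return a.compareTo(b);
    }
}
```

For 1.5 vs 2 the two methods disagree on the sign of the result, which is exactly the kind of "interesting results" the comment warns about.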
[jira] [Commented] (HIVE-18862) qfiles: prepare .q files for using datasets
[ https://issues.apache.org/jira/browse/HIVE-18862?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16435674#comment-16435674 ] Zoltan Haindrich commented on HIVE-18862: - +1 pending tests > qfiles: prepare .q files for using datasets > --- > > Key: HIVE-18862 > URL: https://issues.apache.org/jira/browse/HIVE-18862 > Project: Hive > Issue Type: Sub-task >Reporter: Laszlo Bodor >Assignee: Laszlo Bodor >Priority: Major > Attachments: HIVE-18862.01.patch, HIVE-18862.02.patch, > HIVE-18862.03.patch, HIVE-18862.04.patch, HIVE-18862.05.patch, > HIVE-18862.06.patch, HIVE-18862.07.patch, HIVE-18862.08.patch > > > # Parse .q files for source table usage > # Add needed dataset annotations > # Remove create table statements from "q_test_init.sql" like files > # Handle oncoming issues related to dataset introduction -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19009) Retain and use runtime statistics thru out a session
[ https://issues.apache.org/jira/browse/HIVE-19009?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zoltan Haindrich updated HIVE-19009: Attachment: HIVE-19009.04.patch > Retain and use runtime statistics thru out a session > > > Key: HIVE-19009 > URL: https://issues.apache.org/jira/browse/HIVE-19009 > Project: Hive > Issue Type: Sub-task >Reporter: Zoltan Haindrich >Assignee: Zoltan Haindrich >Priority: Major > Attachments: HIVE-19009.01.patch, HIVE-19009.02.patch, > HIVE-19009.03.patch, HIVE-19009.04.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19009) Retain and use runtime statistics thru out a session
[ https://issues.apache.org/jira/browse/HIVE-19009?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zoltan Haindrich updated HIVE-19009: Attachment: (was: HIVE-19009.04.patch) > Retain and use runtime statistics thru out a session > > > Key: HIVE-19009 > URL: https://issues.apache.org/jira/browse/HIVE-19009 > Project: Hive > Issue Type: Sub-task >Reporter: Zoltan Haindrich >Assignee: Zoltan Haindrich >Priority: Major > Attachments: HIVE-19009.01.patch, HIVE-19009.02.patch, > HIVE-19009.03.patch, HIVE-19009.04.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19009) Retain and use runtime statistics thru out a session
[ https://issues.apache.org/jira/browse/HIVE-19009?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zoltan Haindrich updated HIVE-19009: Attachment: HIVE-19009.04.patch > Retain and use runtime statistics thru out a session > > > Key: HIVE-19009 > URL: https://issues.apache.org/jira/browse/HIVE-19009 > Project: Hive > Issue Type: Sub-task >Reporter: Zoltan Haindrich >Assignee: Zoltan Haindrich >Priority: Major > Attachments: HIVE-19009.01.patch, HIVE-19009.02.patch, > HIVE-19009.03.patch, HIVE-19009.04.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19009) Retain and use runtime statistics during hs2 lifetime
[ https://issues.apache.org/jira/browse/HIVE-19009?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zoltan Haindrich updated HIVE-19009: Summary: Retain and use runtime statistics during hs2 lifetime (was: Retain and use runtime statistics thru out a session) > Retain and use runtime statistics during hs2 lifetime > - > > Key: HIVE-19009 > URL: https://issues.apache.org/jira/browse/HIVE-19009 > Project: Hive > Issue Type: Sub-task >Reporter: Zoltan Haindrich >Assignee: Zoltan Haindrich >Priority: Major > Attachments: HIVE-19009.01.patch, HIVE-19009.02.patch, > HIVE-19009.03.patch, HIVE-19009.04.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19009) Retain and use runtime statistics during hs2 lifetime
[ https://issues.apache.org/jira/browse/HIVE-19009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16435726#comment-16435726 ] Zoltan Haindrich commented on HIVE-19009: - [~ashutoshc] I've changed the patch to use hs2 level storage; however I'm not able to upload the patch to the reviewboard https://github.com/apache/hive/compare/master...kgyrtkirk:HIVE-19009-runtime-session > Retain and use runtime statistics during hs2 lifetime > - > > Key: HIVE-19009 > URL: https://issues.apache.org/jira/browse/HIVE-19009 > Project: Hive > Issue Type: Sub-task >Reporter: Zoltan Haindrich >Assignee: Zoltan Haindrich >Priority: Major > Attachments: HIVE-19009.01.patch, HIVE-19009.02.patch, > HIVE-19009.03.patch, HIVE-19009.04.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19156) TestMiniLlapLocalCliDriver.vectorized_dynamic_semijoin_reduction.q is broken
[ https://issues.apache.org/jira/browse/HIVE-19156?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashutosh Chauhan updated HIVE-19156: Resolution: Fixed Fix Version/s: 3.1.0 Status: Resolved (was: Patch Available) Reverted mergejoin.q.out changes. Pushed rest to master. Thanks, Jason! > TestMiniLlapLocalCliDriver.vectorized_dynamic_semijoin_reduction.q is broken > > > Key: HIVE-19156 > URL: https://issues.apache.org/jira/browse/HIVE-19156 > Project: Hive > Issue Type: Sub-task > Components: Tests >Reporter: Jason Dere >Assignee: Jason Dere >Priority: Major > Fix For: 3.1.0 > > Attachments: HIVE-19156.1.patch > > > Looks like this test has been broken for some time -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19170) Fix TestMiniDruidKafkaCliDriver
[ https://issues.apache.org/jira/browse/HIVE-19170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16435825#comment-16435825 ] Hive QA commented on HIVE-19170: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 58s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 1m 15s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-10160/dev-support/hive-personality.sh | | git revision | master / a2dd09f | | modules | C: itests U: itests | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-10160/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. 
> Fix TestMiniDruidKafkaCliDriver > --- > > Key: HIVE-19170 > URL: https://issues.apache.org/jira/browse/HIVE-19170 > Project: Hive > Issue Type: Bug >Reporter: Zoltan Haindrich >Assignee: Nishant Bangarwa >Priority: Major > Attachments: HIVE-19170.patch > > > added in HIVE-18976 > the property key {{druid.kafka.query.files}} doesn't exists in > testconfiguration.properties. > because of this TestMiniDruidKafkaCliDriver tries to run *all* qtests...which > time out...and produce > {code} > TestMiniDruidKafkaCliDriver - did not produce a TEST-*.xml file (likely timed > out) (batchId=252) > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18739) Add support for Export from Acid table
[ https://issues.apache.org/jira/browse/HIVE-18739?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eugene Koifman updated HIVE-18739: -- Attachment: HIVE-18739.21.patch > Add support for Export from Acid table > -- > > Key: HIVE-18739 > URL: https://issues.apache.org/jira/browse/HIVE-18739 > Project: Hive > Issue Type: Sub-task > Components: Transactions >Reporter: Eugene Koifman >Assignee: Eugene Koifman >Priority: Major > Attachments: HIVE-18739.01.patch, HIVE-18739.04.patch, > HIVE-18739.04.patch, HIVE-18739.06.patch, HIVE-18739.08.patch, > HIVE-18739.09.patch, HIVE-18739.10.patch, HIVE-18739.11.patch, > HIVE-18739.12.patch, HIVE-18739.13.patch, HIVE-18739.14.patch, > HIVE-18739.15.patch, HIVE-18739.16.patch, HIVE-18739.17.patch, > HIVE-18739.19.patch, HIVE-18739.20.patch, HIVE-18739.21.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19167) Map data type doesn't keep the order of the key/values pairs as read (Part 2, The Sequel or SQL)
[ https://issues.apache.org/jira/browse/HIVE-19167?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matt McCline updated HIVE-19167: Status: In Progress (was: Patch Available) > Map data type doesn't keep the order of the key/values pairs as read (Part 2, > The Sequel or SQL) > --- > > Key: HIVE-19167 > URL: https://issues.apache.org/jira/browse/HIVE-19167 > Project: Hive > Issue Type: Bug > Components: Hive >Reporter: Matt McCline >Assignee: Matt McCline >Priority: Critical > Fix For: 3.1.0 > > Attachments: HIVE-19167.01.patch, HIVE-19167.02.patch > > > HIVE-19116: "Vectorization: Vector Map data type doesn't keep the order of > the key/values pairs as read" didn't fix all the places where HashMap is used > instead of LinkedHashMap. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
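The fix referenced above hinges on a standard-library guarantee: {{LinkedHashMap}} iterates in insertion order, while {{HashMap}} makes no ordering promise at all. A minimal sketch, with arbitrary example keys, of how a map column read from a row keeps its pairs in read order:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal sketch of the change described above: a map value must keep
// its key/value pairs in the order they were read, which LinkedHashMap
// guarantees and HashMap does not. Keys below are arbitrary examples.
class MapOrderSketch {
    static Map<String, Integer> readPairsInOrder() {
        Map<String, Integer> m = new LinkedHashMap<>();   // not HashMap
        m.put("first", 1);
        m.put("second", 2);
        m.put("third", 3);
        return m;
    }
}
```

Swapping in {{HashMap}} compiles identically but loses the ordering guarantee, which is why the remaining call sites in the patch matter even though tests with small keys may pass by accident.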
[jira] [Updated] (HIVE-19167) Map data type doesn't keep the order of the key/values pairs as read (Part 2, The Sequel or SQL)
[ https://issues.apache.org/jira/browse/HIVE-19167?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matt McCline updated HIVE-19167: Attachment: HIVE-19167.02.patch > Map data type doesn't keep the order of the key/values pairs as read (Part 2, > The Sequel or SQL) > --- > > Key: HIVE-19167 > URL: https://issues.apache.org/jira/browse/HIVE-19167 > Project: Hive > Issue Type: Bug > Components: Hive >Reporter: Matt McCline >Assignee: Matt McCline >Priority: Critical > Fix For: 3.1.0 > > Attachments: HIVE-19167.01.patch, HIVE-19167.02.patch > > > HIVE-19116: "Vectorization: Vector Map data type doesn't keep the order of > the key/values pairs as read" didn't fix all the places where HashMap is used > instead of LinkedHashMap. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19167) Map data type doesn't keep the order of the key/values pairs as read (Part 2, The Sequel or SQL)
[ https://issues.apache.org/jira/browse/HIVE-19167?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matt McCline updated HIVE-19167: Status: Patch Available (was: In Progress) > Map data type doesn't keep the order of the key/values pairs as read (Part 2, > The Sequel or SQL) > --- > > Key: HIVE-19167 > URL: https://issues.apache.org/jira/browse/HIVE-19167 > Project: Hive > Issue Type: Bug > Components: Hive >Reporter: Matt McCline >Assignee: Matt McCline >Priority: Critical > Fix For: 3.1.0 > > Attachments: HIVE-19167.01.patch, HIVE-19167.02.patch > > > HIVE-19116: "Vectorization: Vector Map data type doesn't keep the order of > the key/values pairs as read" didn't fix all the places where HashMap is used > instead of LinkedHashMap. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19104) When test MetaStore is started with retry the instances should be independent
[ https://issues.apache.org/jira/browse/HIVE-19104?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Vary updated HIVE-19104: -- Attachment: HIVE-19104.5.patch > When test MetaStore is started with retry the instances should be independent > - > > Key: HIVE-19104 > URL: https://issues.apache.org/jira/browse/HIVE-19104 > Project: Hive > Issue Type: Improvement >Reporter: Peter Vary >Assignee: Peter Vary >Priority: Major > Attachments: HIVE-19104.2.patch, HIVE-19104.3.patch, > HIVE-19104.4.patch, HIVE-19104.5.patch, HIVE-19104.patch > > > When multiple MetaStore instances are started with > {{MetaStoreTestUtils.startMetaStoreWithRetry}} currently they use the same > JDBC url, and warehouse directory. This can cause problems in the tests -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19104) When test MetaStore is started with retry the instances should be independent
[ https://issues.apache.org/jira/browse/HIVE-19104?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16435875#comment-16435875 ] Peter Vary commented on HIVE-19104: --- Patch 5 uses a custom JDBC URL and removes the JDBC URL generation from the MiniHS2 code. > When test MetaStore is started with retry the instances should be independent > - > > Key: HIVE-19104 > URL: https://issues.apache.org/jira/browse/HIVE-19104 > Project: Hive > Issue Type: Improvement >Reporter: Peter Vary >Assignee: Peter Vary >Priority: Major > Attachments: HIVE-19104.2.patch, HIVE-19104.3.patch, > HIVE-19104.4.patch, HIVE-19104.5.patch, HIVE-19104.patch > > > When multiple MetaStore instances are started with > {{MetaStoreTestUtils.startMetaStoreWithRetry}} currently they use the same > JDBC url, and warehouse directory. This can cause problems in the tests -- This message was sent by Atlassian JIRA (v7.6.3#76005)
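The idea discussed in this thread, giving each retried test MetaStore its own connection URL instead of the shared value from {{data/conf/hive-site.xml}}, can be sketched as below. This is a hedged illustration, not the actual {{MetaStoreTestUtils}} API: the method name and the use of an in-memory Derby URL are assumptions.

```java
import java.util.UUID;

// Sketch: generate a distinct Derby database per test MetaStore instance
// so concurrent/retried instances never share state. The helper name is
// invented; the real patch wires this into MetaStoreTestUtils.
class TestMetaStoreConf {
    static String uniqueConnectionUrl() {
        // A fresh database name per call keeps instances independent.
        return "jdbc:derby:memory:metastore_" + UUID.randomUUID() + ";create=true";
    }
}
```

The same trick (a per-instance suffix) applies to the warehouse directory, which the issue description calls out as the other shared resource.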
[jira] [Updated] (HIVE-19175) TestMiniLlapLocalCliDriver.testCliDriver update_access_time_non_current_db failing
[ https://issues.apache.org/jira/browse/HIVE-19175?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashutosh Chauhan updated HIVE-19175: Resolution: Fixed Fix Version/s: (was: 3.0.0) 3.1.0 Status: Resolved (was: Patch Available) Pushed to master. > TestMiniLlapLocalCliDriver.testCliDriver update_access_time_non_current_db > failing > -- > > Key: HIVE-19175 > URL: https://issues.apache.org/jira/browse/HIVE-19175 > Project: Hive > Issue Type: Sub-task > Components: Test >Reporter: Vineet Garg >Assignee: Vineet Garg >Priority: Major > Fix For: 3.1.0 > > Attachments: HIVE-19175.1.patch, HIVE-19175.2.patch > > > Caused by HIVE-18060. Instead of generating golden file under > clientpositive/llap it is under clientpositive. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19170) Fix TestMiniDruidKafkaCliDriver
[ https://issues.apache.org/jira/browse/HIVE-19170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16435990#comment-16435990 ] Hive QA commented on HIVE-19170: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12918562/HIVE-19170.patch {color:red}ERROR:{color} -1 due to no test(s) being added or modified. {color:red}ERROR:{color} -1 due to 153 failed/errored test(s), 13799 tests executed *Failed tests:* {noformat} TestDbNotificationListener - did not produce a TEST-*.xml file (likely timed out) (batchId=247) TestHCatHiveCompatibility - did not produce a TEST-*.xml file (likely timed out) (batchId=247) TestNegativeCliDriver - did not produce a TEST-*.xml file (likely timed out) (batchId=96) [udf_invalid.q,authorization_uri_export.q,default_constraint_complex_default_value.q,druid_datasource2.q,check_constraint_max_length.q,view_update.q,default_partition_name.q,authorization_public_create.q,load_wrong_fileformat_rc_seq.q,default_constraint_invalid_type.q,altern1.q,describe_xpath1.q,drop_view_failure2.q,temp_table_rename.q,invalid_select_column_with_subquery.q,udf_trunc_error1.q,insert_view_failure.q,dbtxnmgr_nodbunlock.q,authorization_show_columns.q,cte_recursion.q,merge_constraint_notnull.q,load_part_nospec.q,clusterbyorderby.q,orc_type_promotion2.q,ctas_noperm_loc.q,udf_min.q,udf_instr_wrong_args_len.q,invalid_create_tbl2.q,part_col_complex_type.q,authorization_drop_db_empty.q,smb_mapjoin_14.q,subquery_scalar_multi_rows.q,alter_partition_coltype_2columns.q,subquery_corr_in_agg.q,insert_overwrite_notnull_constraint.q,authorization_show_grant_otheruser_wtab.q,regex_col_groupby.q,ptf_negative_DuplicateWindowAlias.q,exim_22_export_authfail.q,authorization_insert_noinspriv.q,udf_likeany_wrong1.q,groupby_key.q,ambiguous_col.q,groupby3_multi_distinct.q,authorization_alter_drop_ptn.q,invalid_cast_from_binary_5.q,show_create_table_does_not_exist.q,invalid_select_column.q,exim_20_man
aged_location_over_existing.q,interval_3.q,authorization_compile.q,join35.q,udf_concat_ws_wrong3.q,create_or_replace_view8.q,create_external_with_notnull_constraint.q,split_sample_out_of_range.q,materialized_view_no_transactional_rewrite.q,authorization_show_grant_otherrole.q,create_with_constraints_duplicate_name.q,invalid_stddev_samp_syntax.q,authorization_view_disable_cbo_7.q,autolocal1.q,avro_non_nullable_union.q,load_orc_negative_part.q,drop_view_failure1.q,columnstats_partlvl_invalid_values_autogather.q,exim_13_nonnative_import.q,alter_table_wrong_regex.q,udf_next_day_error_2.q,authorization_select.q,udf_trunc_error2.q,authorization_view_7.q,udf_format_number_wrong5.q,touch2.q,orc_type_promotion1.q,lateral_view_alias.q,show_tables_bad_db1.q,unset_table_property.q,alter_non_native.q,nvl_mismatch_type.q,load_orc_negative3.q,authorization_create_role_no_admin.q,invalid_distinct1.q,authorization_grant_server.q,orc_type_promotion3_acid.q,hms_using_serde_alter_table_update_columns.q,show_tables_bad1.q,macro_unused_parameter.q,drop_invalid_constraint3.q,drop_partition_filter_failure.q,char_pad_convert_fail3.q,exim_23_import_exist_authfail.q,drop_invalid_constraint4.q,authorization_create_macro1.q,archive1.q,subquery_multiple_cols_in_select.q,change_hive_hdfs_session_path.q,udf_trunc_error3.q,invalid_variance_syntax.q,authorization_truncate_2.q,invalid_avg_syntax.q,invalid_select_column_with_tablename.q,mm_truncate_cols.q,groupby_grouping_sets1.q,druid_location.q,groupby2_multi_distinct.q,authorization_sba_drop_table.q,dynamic_partitions_with_whitelist.q,compare_string_bigint_2.q,udf_greatest_error_2.q,authorization_view_6.q,show_tablestatus.q,duplicate_alias_in_transform_schema.q,create_with_fk_uk_same_tab.q,udtf_not_supported3.q,alter_table_constraint_invalid_fk_col2.q,udtf_not_supported1.q,dbtxnmgr_notableunlock.q,ptf_negative_InvalidValueBoundary.q,alter_table_constraint_duplicate_pk.q,udf_printf_wrong4.q,create_view_failure9.q,udf_elt_wrong_type.q,selectDistinctS
tarNeg_1.q,invalid_mapjoin1.q,load_stored_as_dirs.q,input1.q,udf_sort_array_wrong1.q,invalid_distinct2.q,invalid_select_fn.q,authorization_role_grant_otherrole.q,archive4.q,load_nonpart_authfail.q,recursive_view.q,authorization_view_disable_cbo_1.q,desc_failure4.q,create_not_acid.q,udf_sort_array_wrong3.q,char_pad_convert_fail0.q,udf_map_values_arg_type.q,alter_view_failure6_2.q,alter_partition_change_col_nonexist.q,update_non_acid_table.q,authorization_view_disable_cbo_5.q,ct_noperm_loc.q,interval_1.q,authorization_show_grant_otheruser_all.q,authorization_view_2.q,show_tables_bad2.q,groupby_rollup2.q,truncate_column_seqfile.q,create_view_failure5.q,authorization_create_view.q,ptf_window_boundaries.q,ctasnullcol.q,input_part0_neg_2.q,create_or_replace_view1.q,udf_max.q,exim_01_nonpart_over_loaded.q,msck_repair_1.q,orc_change_fileformat_acid.q,udf_nonexistent_resource.q,msck_repair_3.q,exim_19_external_over_existing.q,serde_re
[jira] [Commented] (HIVE-19172) NPE due to null EnvironmentContext in DDLTask
[ https://issues.apache.org/jira/browse/HIVE-19172?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16436036#comment-16436036 ] Hive QA commented on HIVE-19172: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Findbugs executables are not available. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 51s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 19s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 45s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 4s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 39s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 20s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 20s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 46s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 3s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 16s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 17m 23s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-10161/dev-support/hive-personality.sh | | git revision | master / b7c64b1 | | Default Java | 1.8.0_111 | | modules | C: ql U: ql | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-10161/yetus.txt | | Powered by | Apache Yetus http://yetus.apache.org | This message was automatically generated. 
> NPE due to null EnvironmentContext in DDLTask > - > > Key: HIVE-19172 > URL: https://issues.apache.org/jira/browse/HIVE-19172 > Project: Hive > Issue Type: Task >Reporter: Nishant Bangarwa >Assignee: Nishant Bangarwa >Priority: Major > Attachments: HIVE-19172.patch > > > Stack Trace - > {code} > 2018-04-11T02:52:51,386 ERROR [5f2e24bf-ac93-4977-84fe-aa2c5f674ea4 main] > exec.DDLTask: java.lang.NullPointerException > at > org.apache.hadoop.hive.ql.exec.DDLTask.alterTable(DDLTask.java:3539) > at org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:392) > at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:199) > at > org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:100) > at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1987) > at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1667) > at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1414) > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
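The stack trace above points at `DDLTask.alterTable` dereferencing a null `EnvironmentContext`. A minimal sketch of the usual defensive fix for this kind of NPE, assuming the failure comes from calling a method on the null context (the class, field, and method names below are illustrative stand-ins, not Hive's actual code):

```java
import java.util.HashMap;
import java.util.Map;

// Simplified stand-in for Hive's EnvironmentContext (illustrative only).
class EnvironmentContext {
    private final Map<String, String> properties = new HashMap<>();

    Map<String, String> getProperties() {
        return properties;
    }
}

public class AlterTableSketch {
    // Guarding against a null context avoids the NPE in the trace above:
    // fall back to an empty default instance instead of dereferencing null.
    static String readProperty(EnvironmentContext environmentContext, String key) {
        if (environmentContext == null) {
            environmentContext = new EnvironmentContext();
        }
        return environmentContext.getProperties().getOrDefault(key, "default");
    }

    public static void main(String[] args) {
        // A null context no longer throws; the lookup falls back to the default.
        System.out.println(readProperty(null, "isCascade"));
    }
}
```

Whether Hive's actual patch guards at the call site or constructs the context earlier is not shown in this thread; the attached HIVE-19172.patch holds the real change.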
[jira] [Commented] (HIVE-19147) Fix PerfCliDrivers: Tpcds30T missed CAT_NAME change
[ https://issues.apache.org/jira/browse/HIVE-19147?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16436057#comment-16436057 ] Vineet Garg commented on HIVE-19147: Pushed to branch-3 > Fix PerfCliDrivers: Tpcds30T missed CAT_NAME change > --- > > Key: HIVE-19147 > URL: https://issues.apache.org/jira/browse/HIVE-19147 > Project: Hive > Issue Type: Sub-task >Reporter: Zoltan Haindrich >Assignee: Zoltan Haindrich >Priority: Major > Fix For: 3.0.0 > > Attachments: HIVE-19147.01.patch > > > it seems the baked metastore dump misses the CAT_NAME field added by some > recent metastore change -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19175) TestMiniLlapLocalCliDriver.testCliDriver update_access_time_non_current_db failing
[ https://issues.apache.org/jira/browse/HIVE-19175?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vineet Garg updated HIVE-19175: --- Fix Version/s: (was: 3.1.0) 3.0.0 > TestMiniLlapLocalCliDriver.testCliDriver update_access_time_non_current_db > failing > -- > > Key: HIVE-19175 > URL: https://issues.apache.org/jira/browse/HIVE-19175 > Project: Hive > Issue Type: Sub-task > Components: Test >Reporter: Vineet Garg >Assignee: Vineet Garg >Priority: Major > Fix For: 3.0.0 > > Attachments: HIVE-19175.1.patch, HIVE-19175.2.patch > > > Caused by HIVE-18060. Instead of generating golden file under > clientpositive/llap it is under clientpositive. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19175) TestMiniLlapLocalCliDriver.testCliDriver update_access_time_non_current_db failing
[ https://issues.apache.org/jira/browse/HIVE-19175?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16436059#comment-16436059 ] Vineet Garg commented on HIVE-19175: Pushed to branch-3 to stabilize tests > TestMiniLlapLocalCliDriver.testCliDriver update_access_time_non_current_db > failing > -- > > Key: HIVE-19175 > URL: https://issues.apache.org/jira/browse/HIVE-19175 > Project: Hive > Issue Type: Sub-task > Components: Test >Reporter: Vineet Garg >Assignee: Vineet Garg >Priority: Major > Fix For: 3.0.0 > > Attachments: HIVE-19175.1.patch, HIVE-19175.2.patch > > > Caused by HIVE-18060. Instead of generating golden file under > clientpositive/llap it is under clientpositive. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19156) TestMiniLlapLocalCliDriver.vectorized_dynamic_semijoin_reduction.q is broken
[ https://issues.apache.org/jira/browse/HIVE-19156?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vineet Garg updated HIVE-19156: --- Fix Version/s: (was: 3.1.0) 3.0.0 > TestMiniLlapLocalCliDriver.vectorized_dynamic_semijoin_reduction.q is broken > > > Key: HIVE-19156 > URL: https://issues.apache.org/jira/browse/HIVE-19156 > Project: Hive > Issue Type: Sub-task > Components: Tests >Reporter: Jason Dere >Assignee: Jason Dere >Priority: Major > Fix For: 3.0.0 > > Attachments: HIVE-19156.1.patch > > > Looks like this test has been broken for some time -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19156) TestMiniLlapLocalCliDriver.vectorized_dynamic_semijoin_reduction.q is broken
[ https://issues.apache.org/jira/browse/HIVE-19156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16436061#comment-16436061 ] Vineet Garg commented on HIVE-19156: Pushed to branch-3 > TestMiniLlapLocalCliDriver.vectorized_dynamic_semijoin_reduction.q is broken > > > Key: HIVE-19156 > URL: https://issues.apache.org/jira/browse/HIVE-19156 > Project: Hive > Issue Type: Sub-task > Components: Tests >Reporter: Jason Dere >Assignee: Jason Dere >Priority: Major > Fix For: 3.0.0 > > Attachments: HIVE-19156.1.patch > > > Looks like this test has been broken for some time -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19104) When test MetaStore is started with retry the instances should be independent
[ https://issues.apache.org/jira/browse/HIVE-19104?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16436062#comment-16436062 ] Sahil Takiar commented on HIVE-19104: - +1 pending Hive QA > When test MetaStore is started with retry the instances should be independent > - > > Key: HIVE-19104 > URL: https://issues.apache.org/jira/browse/HIVE-19104 > Project: Hive > Issue Type: Improvement >Reporter: Peter Vary >Assignee: Peter Vary >Priority: Major > Attachments: HIVE-19104.2.patch, HIVE-19104.3.patch, > HIVE-19104.4.patch, HIVE-19104.5.patch, HIVE-19104.patch > > > When multiple MetaStore instances are started with > {{MetaStoreTestUtils.startMetaStoreWithRetry}} currently they use the same > JDBC url, and warehouse directory. This can cause problem in the tests -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19133) HS2 WebUI phase-wise performance metrics not showing correctly
[ https://issues.apache.org/jira/browse/HIVE-19133?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharathkrishna Guruvayoor Murali updated HIVE-19133: Attachment: HIVE-19133.3.patch > HS2 WebUI phase-wise performance metrics not showing correctly > -- > > Key: HIVE-19133 > URL: https://issues.apache.org/jira/browse/HIVE-19133 > Project: Hive > Issue Type: Bug > Components: HiveServer2, Web UI >Reporter: Bharathkrishna Guruvayoor Murali >Assignee: Bharathkrishna Guruvayoor Murali >Priority: Major > Attachments: HIVE-19133.1.patch, HIVE-19133.2.patch, > HIVE-19133.3.patch, WebUI-compile time query metrics.png > > > The query specific WebUI metrics (go to drilldown -> performance logging) are > not showing up in the correct phase and are often mixed up. > Attaching screenshot. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19133) HS2 WebUI phase-wise performance metrics not showing correctly
[ https://issues.apache.org/jira/browse/HIVE-19133?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharathkrishna Guruvayoor Murali updated HIVE-19133: Status: In Progress (was: Patch Available) > HS2 WebUI phase-wise performance metrics not showing correctly > -- > > Key: HIVE-19133 > URL: https://issues.apache.org/jira/browse/HIVE-19133 > Project: Hive > Issue Type: Bug > Components: HiveServer2, Web UI >Reporter: Bharathkrishna Guruvayoor Murali >Assignee: Bharathkrishna Guruvayoor Murali >Priority: Major > Attachments: HIVE-19133.1.patch, HIVE-19133.2.patch, > HIVE-19133.3.patch, WebUI-compile time query metrics.png > > > The query specific WebUI metrics (go to drilldown -> performance logging) are > not showing up in the correct phase and are often mixed up. > Attaching screenshot. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19133) HS2 WebUI phase-wise performance metrics not showing correctly
[ https://issues.apache.org/jira/browse/HIVE-19133?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharathkrishna Guruvayoor Murali updated HIVE-19133: Status: Patch Available (was: In Progress) Removed DRIVER_RUN as it is redundant now. Removed unused "task" method in PerfLogger. Adding unit test to check if maps are updated in the correct phase. > HS2 WebUI phase-wise performance metrics not showing correctly > -- > > Key: HIVE-19133 > URL: https://issues.apache.org/jira/browse/HIVE-19133 > Project: Hive > Issue Type: Bug > Components: HiveServer2, Web UI >Reporter: Bharathkrishna Guruvayoor Murali >Assignee: Bharathkrishna Guruvayoor Murali >Priority: Major > Attachments: HIVE-19133.1.patch, HIVE-19133.2.patch, > HIVE-19133.3.patch, WebUI-compile time query metrics.png > > > The query specific WebUI metrics (go to drilldown -> performance logging) are > not showing up in the correct phase and are often mixed up. > Attaching screenshot. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19164) TestMetastoreVersion failures
[ https://issues.apache.org/jira/browse/HIVE-19164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16436075#comment-16436075 ] Alan Gates commented on HIVE-19164: --- I do not believe this is an issue with metastore changes. The config file being handed to the HiveMetaStore.getMSForConf has schema verification set to true. However, the config file being handed to Driver in TestMetastoreVersion.testMetastoreVersion has it set to false. This means the driver is somewhere not passing the config file it is handed down to HiveMetaStore. I notice as well that this test passes if run in isolation, so I'm guessing that somewhere the Driver or Hive (the class) isn't cleaning up an old config and is reusing it. > TestMetastoreVersion failures > - > > Key: HIVE-19164 > URL: https://issues.apache.org/jira/browse/HIVE-19164 > Project: Hive > Issue Type: Sub-task > Components: Test >Reporter: Vineet Garg >Assignee: Alan Gates >Priority: Major > Fix For: 3.0.0 > > > Following tests are failing consistently and are reproducible on master: > * testVersionMatching > * testMetastoreVersion > I tried debugging it and found that ObjectStore.getMSSchemaVersion() throws > an exception {{No matching version found}}. > To fetch schema version this method executes {code:sql} SELECT FROM > org.apache.hadoop.hive.metastore.model.MVersionTable {code} but for whatever > reason execution returns an empty result set, resulting in the exception. Both > test failures are due to this exception. I tried debugging the query > execution but didn't really get anywhere with it. I suspect this could be due > to recent metastore changes. I tried reproducing this outside the test but with > no success. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18525) Add explain plan to Hive on Spark Web UI
[ https://issues.apache.org/jira/browse/HIVE-18525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16436079#comment-16436079 ] Aihua Xu commented on HIVE-18525: - +1. > Add explain plan to Hive on Spark Web UI > > > Key: HIVE-18525 > URL: https://issues.apache.org/jira/browse/HIVE-18525 > Project: Hive > Issue Type: Sub-task > Components: Spark >Reporter: Sahil Takiar >Assignee: Sahil Takiar >Priority: Major > Attachments: HIVE-18525.1.patch, HIVE-18525.2.patch, > HIVE-18525.3.patch, HIVE-18525.4.patch, Job-Page-Collapsed.png, > Job-Page-Expanded.png, Map-Explain-Plan.png, Reduce-Explain-Plan.png > > > More of an investigation JIRA. The Spark UI has a "long description" of each > stage in the Spark DAG. Typically one stage in the Spark DAG corresponds to > either a {{MapWork}} or {{ReduceWork}} object. It would be useful if the long > description contained the explain plan of the corresponding work object. > I'm not sure how much additional overhead this would introduce. If not the > full explain plan, then maybe a modified one that just lists out all the > operator tree along with each operator name. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19106) Hive ZooKeeper Locking - Throw and Log
[ https://issues.apache.org/jira/browse/HIVE-19106?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Igor Kryvenko updated HIVE-19106: - Attachment: HIVE-19106.02.patch > Hive ZooKeeper Locking - Throw and Log > -- > > Key: HIVE-19106 > URL: https://issues.apache.org/jira/browse/HIVE-19106 > Project: Hive > Issue Type: Improvement > Components: Locking >Affects Versions: 3.0.0, 2.3.2 >Reporter: BELUGA BEHR >Assignee: Igor Kryvenko >Priority: Trivial > Labels: noob > Attachments: HIVE-19106.01.patch, HIVE-19106.02.patch > > > {code:java} > ... > } catch (Exception e) { > if (tryNum >= numRetriesForUnLock) { > String name = ((ZooKeeperHiveLock)hiveLock).getPath(); > LOG.error("Node " + name + " can not be deleted after " + > numRetriesForUnLock + " attempts."); > throw new LockException(e); > } > } > {code} > Do not log and throw. Only throw and move the message into the > {{LockException}}. There is already {{error}} level logging up the stack. > https://github.com/apache/hive/blob/6d890faf22fd1ede3658a5eed097476eab3c67e9/ql/src/java/org/apache/hadoop/hive/ql/lockmgr/zookeeper/ZooKeeperHiveLockManager.java#L492-L495 -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19106) Hive ZooKeeper Locking - Throw and Log
[ https://issues.apache.org/jira/browse/HIVE-19106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16436137#comment-16436137 ] Igor Kryvenko commented on HIVE-19106: -- Reattaching patch for launching tests. > Hive ZooKeeper Locking - Throw and Log > -- > > Key: HIVE-19106 > URL: https://issues.apache.org/jira/browse/HIVE-19106 > Project: Hive > Issue Type: Improvement > Components: Locking >Affects Versions: 3.0.0, 2.3.2 >Reporter: BELUGA BEHR >Assignee: Igor Kryvenko >Priority: Trivial > Labels: noob > Attachments: HIVE-19106.01.patch, HIVE-19106.02.patch > > > {code:java} > ... > } catch (Exception e) { > if (tryNum >= numRetriesForUnLock) { > String name = ((ZooKeeperHiveLock)hiveLock).getPath(); > LOG.error("Node " + name + " can not be deleted after " + > numRetriesForUnLock + " attempts."); > throw new LockException(e); > } > } > {code} > Do not log and throw. Only throw and move the message into the > {{LockException}}. There is already {{error}} level logging up the stack. > https://github.com/apache/hive/blob/6d890faf22fd1ede3658a5eed097476eab3c67e9/ql/src/java/org/apache/hadoop/hive/ql/lockmgr/zookeeper/ZooKeeperHiveLockManager.java#L492-L495 -- This message was sent by Atlassian JIRA (v7.6.3#76005)
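The change requested above — stop logging before throwing, and carry the message inside the `LockException` instead — can be sketched as follows. The class shapes here are simplified stand-ins, not the actual `ZooKeeperHiveLockManager` code:

```java
// Simplified stand-in for Hive's LockException (illustrative only).
class LockException extends Exception {
    LockException(String message, Throwable cause) {
        super(message, cause);
    }
}

public class ThrowDontLogSketch {
    static final int NUM_RETRIES_FOR_UNLOCK = 3;

    // Before: LOG.error(...) immediately followed by `throw new LockException(e)`
    // produced the same failure in the logs twice, since callers up the stack
    // already log at error level. After: the message travels in the exception.
    static void unlock(String path, int tryNum, Exception cause) throws LockException {
        if (tryNum >= NUM_RETRIES_FOR_UNLOCK) {
            throw new LockException(
                "Node " + path + " can not be deleted after "
                    + NUM_RETRIES_FOR_UNLOCK + " attempts.", cause);
        }
    }

    public static void main(String[] args) {
        try {
            unlock("/hive/locks/db.tbl", 3, new RuntimeException("zk error"));
        } catch (LockException e) {
            // The single log statement up the stack now sees the full context.
            System.out.println(e.getMessage());
        }
    }
}
```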
[jira] [Commented] (HIVE-16144) CompactionInfo doesn't have equals/hashCode but used in Set
[ https://issues.apache.org/jira/browse/HIVE-16144?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16436143#comment-16436143 ] Igor Kryvenko commented on HIVE-16144: -- [~ekoifman] Hi. May I take this JIRA? > CompactionInfo doesn't have equals/hashCode but used in Set > --- > > Key: HIVE-16144 > URL: https://issues.apache.org/jira/browse/HIVE-16144 > Project: Hive > Issue Type: Bug > Components: Transactions >Reporter: Eugene Koifman >Assignee: Eugene Koifman >Priority: Major > > CompactionTxnHandler.findPotentialCompactions() uses a Set > but CompactionInfo doesn't have equals/hashCode. > should do the same as CompactionInfo.compareTo() -- This message was sent by Atlassian JIRA (v7.6.3#76005)
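The issue above is a classic contract bug: a class stored in a `Set` must override `equals` and `hashCode`, and they should compare the same fields as `compareTo` so ordering and membership agree. A hedged sketch of what the fix could look like — the field set used here (dbname, tableName, partName) is an assumption for illustration, not copied from the real `CompactionInfo`:

```java
import java.util.HashSet;
import java.util.Objects;
import java.util.Set;

// Illustrative stand-in for CompactionInfo; the real class has more fields.
class CompactionInfo implements Comparable<CompactionInfo> {
    final String dbname;
    final String tableName;
    final String partName;

    CompactionInfo(String dbname, String tableName, String partName) {
        this.dbname = dbname;
        this.tableName = tableName;
        this.partName = partName;
    }

    // equals/hashCode use the same fields as compareTo, so HashSet
    // deduplication is consistent with sorted-collection ordering.
    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof CompactionInfo)) return false;
        CompactionInfo that = (CompactionInfo) o;
        return Objects.equals(dbname, that.dbname)
            && Objects.equals(tableName, that.tableName)
            && Objects.equals(partName, that.partName);
    }

    @Override
    public int hashCode() {
        return Objects.hash(dbname, tableName, partName);
    }

    @Override
    public int compareTo(CompactionInfo o) {
        int c = dbname.compareTo(o.dbname);
        if (c != 0) return c;
        c = tableName.compareTo(o.tableName);
        if (c != 0) return c;
        return partName.compareTo(o.partName);
    }
}

public class CompactionInfoSketch {
    public static void main(String[] args) {
        Set<CompactionInfo> set = new HashSet<>();
        set.add(new CompactionInfo("db", "tbl", "p=1"));
        set.add(new CompactionInfo("db", "tbl", "p=1")); // duplicate, now deduped
        System.out.println(set.size());
    }
}
```

Without the overrides, the second `add` would succeed (default identity equality) and `findPotentialCompactions()` could return the same compaction twice.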
[jira] [Resolved] (HIVE-18564) Add a mapper to make plan transformations more easily understandable
[ https://issues.apache.org/jira/browse/HIVE-18564?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zoltan Haindrich resolved HIVE-18564. - Resolution: Fixed Fix Version/s: 3.0.0 this was part of HIVE-17626 > Add a mapper to make plan transformations more easily understandable > > > Key: HIVE-18564 > URL: https://issues.apache.org/jira/browse/HIVE-18564 > Project: Hive > Issue Type: Improvement >Reporter: Zoltan Haindrich >Assignee: Zoltan Haindrich >Priority: Major > Fix For: 3.0.0 > > > This part is started as a small helper class to enable plan independent > mapping of runtime operator informations. But in reality its a bit different; > and might have its own different kind of usages. > Goals were: > * connect plan pieces which are responsible for the same part together; > currently I'm using it to connect RelNode, AST, Operator, RuntimeStats > * make it easy to attach new data > * make it easy to lookup some related information > This concept seems to be also usefull during writing tests; because it > enables the lookup of specific pieces like HiveFilter -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18816) CREATE TABLE (ACID) doesn't work with TIMESTAMPLOCALTZ column type
[ https://issues.apache.org/jira/browse/HIVE-18816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16436140#comment-16436140 ] Igor Kryvenko commented on HIVE-18816: -- [~vgarg] [~jcamachorodriguez] Hi guys. May I take this JIRA? I have a patch for it. > CREATE TABLE (ACID) doesn't work with TIMESTAMPLOCALTZ column type > -- > > Key: HIVE-18816 > URL: https://issues.apache.org/jira/browse/HIVE-18816 > Project: Hive > Issue Type: Bug > Components: Transactions >Affects Versions: 3.0.0 >Reporter: Vineet Garg >Assignee: Jesus Camacho Rodriguez >Priority: Major > > *Reproducer* > {code:sql} > set hive.support.concurrency=true; > set hive.txn.manager=org.apache.hadoop.hive.ql.lockmgr.DbTxnManager; > CREATE TABLE table_acid(d int, tz timestamp with local time zone) > clustered by (d) into 2 buckets stored as orc TBLPROPERTIES > ('transactional'='true'); > {code} > *Error* > {code:sql} > FAILED: Execution Error, return code 1 from > org.apache.hadoop.hive.ql.exec.DDLTask. 
java.lang.IllegalArgumentException: > Unknown primitive type TIMESTAMPLOCALTZ > {code} > *Error stack* > {noformat} > org.apache.hadoop.hive.ql.metadata.HiveException: > java.lang.IllegalArgumentException: Unknown primitive type TIMESTAMPLOCALTZ > at org.apache.hadoop.hive.ql.metadata.Hive.createTable(Hive.java:906) > ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at > org.apache.hadoop.hive.ql.exec.DDLTask.createTable(DDLTask.java:4788) > [hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:389) > [hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:205) > [hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at > org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:97) > [hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2314) > [hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1985) > [hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1687) > [hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1438) > [hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1427) > [hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at > org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:239) > [hive-cli-3.0.0-SNAPSHOT.jar:?] > at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:188) > [hive-cli-3.0.0-SNAPSHOT.jar:?] > at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:402) > [hive-cli-3.0.0-SNAPSHOT.jar:?] > at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:335) > [hive-cli-3.0.0-SNAPSHOT.jar:?] 
> at > org.apache.hadoop.hive.ql.QTestUtil.executeClientInternal(QTestUtil.java:1345) > [hive-it-util-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at > org.apache.hadoop.hive.ql.QTestUtil.executeClient(QTestUtil.java:1319) > [hive-it-util-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at > org.apache.hadoop.hive.cli.control.CoreCliDriver.runTest(CoreCliDriver.java:173) > [hive-it-util-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at > org.apache.hadoop.hive.cli.control.CliAdapter.runTest(CliAdapter.java:104) > [hive-it-util-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at > org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver(TestMiniLlapLocalCliDriver.java:59) > [test-classes/:?] > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > ~[?:1.8.0_101] > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > ~[?:1.8.0_101] > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > ~[?:1.8.0_101] > at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_101] > at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) > [junit-4.11.jar:?] > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > [junit-4.11.jar:?] > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) > [junit-4.11.jar:?] > at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > [junit-4.11.jar:?] > at > org.apache.hadoop.hive.cli.control.CliAdapter$2$1.evaluate(CliAdapter.java:92) > [hive-it-util-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at org.junit.rules.RunRules.evaluate(RunRules.java:20) > [junit-4.11.jar:
[jira] [Assigned] (HIVE-18816) CREATE TABLE (ACID) doesn't work with TIMESTAMPLOCALTZ column type
[ https://issues.apache.org/jira/browse/HIVE-18816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez reassigned HIVE-18816: -- Assignee: Igor Kryvenko (was: Jesus Camacho Rodriguez) > CREATE TABLE (ACID) doesn't work with TIMESTAMPLOCALTZ column type > -- > > Key: HIVE-18816 > URL: https://issues.apache.org/jira/browse/HIVE-18816 > Project: Hive > Issue Type: Bug > Components: Transactions >Affects Versions: 3.0.0 >Reporter: Vineet Garg >Assignee: Igor Kryvenko >Priority: Major > > *Reproducer* > {code:sql} > set hive.support.concurrency=true; > set hive.txn.manager=org.apache.hadoop.hive.ql.lockmgr.DbTxnManager; > CREATE TABLE table_acid(d int, tz timestamp with local time zone) > clustered by (d) into 2 buckets stored as orc TBLPROPERTIES > ('transactional'='true'); > {code} > *Error* > {code:sql} > FAILED: Execution Error, return code 1 from > org.apache.hadoop.hive.ql.exec.DDLTask. java.lang.IllegalArgumentException: > Unknown primitive type TIMESTAMPLOCALTZ > {code} > *Error stack* > {noformat} > org.apache.hadoop.hive.ql.metadata.HiveException: > java.lang.IllegalArgumentException: Unknown primitive type TIMESTAMPLOCALTZ > at org.apache.hadoop.hive.ql.metadata.Hive.createTable(Hive.java:906) > ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at > org.apache.hadoop.hive.ql.exec.DDLTask.createTable(DDLTask.java:4788) > [hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:389) > [hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:205) > [hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at > org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:97) > [hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2314) > [hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1985) > 
[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1687) > [hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1438) > [hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1427) > [hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at > org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:239) > [hive-cli-3.0.0-SNAPSHOT.jar:?] > at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:188) > [hive-cli-3.0.0-SNAPSHOT.jar:?] > at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:402) > [hive-cli-3.0.0-SNAPSHOT.jar:?] > at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:335) > [hive-cli-3.0.0-SNAPSHOT.jar:?] > at > org.apache.hadoop.hive.ql.QTestUtil.executeClientInternal(QTestUtil.java:1345) > [hive-it-util-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at > org.apache.hadoop.hive.ql.QTestUtil.executeClient(QTestUtil.java:1319) > [hive-it-util-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at > org.apache.hadoop.hive.cli.control.CoreCliDriver.runTest(CoreCliDriver.java:173) > [hive-it-util-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at > org.apache.hadoop.hive.cli.control.CliAdapter.runTest(CliAdapter.java:104) > [hive-it-util-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at > org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver(TestMiniLlapLocalCliDriver.java:59) > [test-classes/:?] > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > ~[?:1.8.0_101] > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > ~[?:1.8.0_101] > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > ~[?:1.8.0_101] > at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_101] > at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) > [junit-4.11.jar:?] 
> at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > [junit-4.11.jar:?] > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) > [junit-4.11.jar:?] > at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > [junit-4.11.jar:?] > at > org.apache.hadoop.hive.cli.control.CliAdapter$2$1.evaluate(CliAdapter.java:92) > [hive-it-util-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at org.junit.rules.RunRules.evaluate(RunRules.java:20) > [junit-4.11.jar:?] > at org.junit.runners.ParentRunner.runLeaf(ParentRunne
[jira] [Commented] (HIVE-18816) CREATE TABLE (ACID) doesn't work with TIMESTAMPLOCALTZ column type
[ https://issues.apache.org/jira/browse/HIVE-18816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16436145#comment-16436145 ] Jesus Camacho Rodriguez commented on HIVE-18816: [~ikryvenko], sure, go for it. Thanks > CREATE TABLE (ACID) doesn't work with TIMESTAMPLOCALTZ column type > -- > > Key: HIVE-18816 > URL: https://issues.apache.org/jira/browse/HIVE-18816 > Project: Hive > Issue Type: Bug > Components: Transactions >Affects Versions: 3.0.0 >Reporter: Vineet Garg >Assignee: Igor Kryvenko >Priority: Major > > *Reproducer* > {code:sql} > set hive.support.concurrency=true; > set hive.txn.manager=org.apache.hadoop.hive.ql.lockmgr.DbTxnManager; > CREATE TABLE table_acid(d int, tz timestamp with local time zone) > clustered by (d) into 2 buckets stored as orc TBLPROPERTIES > ('transactional'='true'); > {code} > *Error* > {code:sql} > FAILED: Execution Error, return code 1 from > org.apache.hadoop.hive.ql.exec.DDLTask. java.lang.IllegalArgumentException: > Unknown primitive type TIMESTAMPLOCALTZ > {code} > *Error stack* > {noformat} > org.apache.hadoop.hive.ql.metadata.HiveException: > java.lang.IllegalArgumentException: Unknown primitive type TIMESTAMPLOCALTZ > at org.apache.hadoop.hive.ql.metadata.Hive.createTable(Hive.java:906) > ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at > org.apache.hadoop.hive.ql.exec.DDLTask.createTable(DDLTask.java:4788) > [hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:389) > [hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:205) > [hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at > org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:97) > [hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2314) > [hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at 
org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1985) > [hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1687) > [hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1438) > [hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1427) > [hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at > org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:239) > [hive-cli-3.0.0-SNAPSHOT.jar:?] > at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:188) > [hive-cli-3.0.0-SNAPSHOT.jar:?] > at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:402) > [hive-cli-3.0.0-SNAPSHOT.jar:?] > at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:335) > [hive-cli-3.0.0-SNAPSHOT.jar:?] > at > org.apache.hadoop.hive.ql.QTestUtil.executeClientInternal(QTestUtil.java:1345) > [hive-it-util-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at > org.apache.hadoop.hive.ql.QTestUtil.executeClient(QTestUtil.java:1319) > [hive-it-util-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at > org.apache.hadoop.hive.cli.control.CoreCliDriver.runTest(CoreCliDriver.java:173) > [hive-it-util-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at > org.apache.hadoop.hive.cli.control.CliAdapter.runTest(CliAdapter.java:104) > [hive-it-util-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at > org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver(TestMiniLlapLocalCliDriver.java:59) > [test-classes/:?] 
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > ~[?:1.8.0_101] > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > ~[?:1.8.0_101] > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > ~[?:1.8.0_101] > at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_101] > at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) > [junit-4.11.jar:?] > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > [junit-4.11.jar:?] > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) > [junit-4.11.jar:?] > at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > [junit-4.11.jar:?] > at > org.apache.hadoop.hive.cli.control.CliAdapter$2$1.evaluate(CliAdapter.java:92) > [hive-it-util-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at org.junit.rules.RunRules.evaluate(RunRules.java:20) > [junit-4.11.jar:?] > at org.junit.runners.Par
[jira] [Updated] (HIVE-18910) Migrate to Murmur hash for shuffle and bucketing
[ https://issues.apache.org/jira/browse/HIVE-18910?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Deepak Jaiswal updated HIVE-18910: -- Attachment: HIVE-18910.30.patch > Migrate to Murmur hash for shuffle and bucketing > > > Key: HIVE-18910 > URL: https://issues.apache.org/jira/browse/HIVE-18910 > Project: Hive > Issue Type: Task >Reporter: Deepak Jaiswal >Assignee: Deepak Jaiswal >Priority: Major > Attachments: HIVE-18910.1.patch, HIVE-18910.10.patch, > HIVE-18910.11.patch, HIVE-18910.12.patch, HIVE-18910.13.patch, > HIVE-18910.14.patch, HIVE-18910.15.patch, HIVE-18910.16.patch, > HIVE-18910.17.patch, HIVE-18910.18.patch, HIVE-18910.19.patch, > HIVE-18910.2.patch, HIVE-18910.20.patch, HIVE-18910.21.patch, > HIVE-18910.22.patch, HIVE-18910.23.patch, HIVE-18910.24.patch, > HIVE-18910.25.patch, HIVE-18910.26.patch, HIVE-18910.27.patch, > HIVE-18910.28.patch, HIVE-18910.29.patch, HIVE-18910.3.patch, > HIVE-18910.30.patch, HIVE-18910.4.patch, HIVE-18910.5.patch, > HIVE-18910.6.patch, HIVE-18910.7.patch, HIVE-18910.8.patch, HIVE-18910.9.patch > > > Hive uses the Java hash, which gives worse distribution and efficiency than Murmur when bucketing a table. > Migrate to the Murmur hash, but keep backward compatibility for existing users so that they don't have to reload their existing tables. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
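The distribution weakness of the default Java hash that HIVE-18910 refers to is easy to demonstrate: `String.hashCode()` is the polynomial `h = 31*h + c`, so two-character collisions can be constructed by hand, and concatenating colliding pairs multiplies them. This minimal sketch is illustrative only and is not Hive's bucketing code:

```java
// Java's String.hashCode() is h = 31*h + c over the characters, so
// "Aa" and "BB" collide: 'A'*31 + 'a' == 'B'*31 + 'B' == 2112.
public class HashCollision {
    public static void main(String[] args) {
        System.out.println("Aa".hashCode()); // 2112
        System.out.println("BB".hashCode()); // 2112
        // Concatenating colliding pairs yields exponentially many collisions:
        // "AaAa", "AaBB", "BBAa", "BBBB" all share one hash code.
        System.out.println("AaBB".hashCode() == "BBAa".hashCode()); // true
    }
}
```

A seeded hash such as Murmur3 makes such collisions impractical to construct, which is why it gives better bucket distribution for keys that differ only slightly.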
[jira] [Commented] (HIVE-19172) NPE due to null EnvironmentContext in DDLTask
[ https://issues.apache.org/jira/browse/HIVE-19172?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16436157#comment-16436157 ] Hive QA commented on HIVE-19172: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12918560/HIVE-19172.patch {color:red}ERROR:{color} -1 due to no test(s) being added or modified. {color:red}ERROR:{color} -1 due to 24 failed/errored test(s), 13274 tests executed *Failed tests:* {noformat} TestBeeLineDriver - did not produce a TEST-*.xml file (likely timed out) (batchId=253) TestDbNotificationListener - did not produce a TEST-*.xml file (likely timed out) (batchId=247) TestDummy - did not produce a TEST-*.xml file (likely timed out) (batchId=253) TestHCatHiveCompatibility - did not produce a TEST-*.xml file (likely timed out) (batchId=247) TestMiniDruidCliDriver - did not produce a TEST-*.xml file (likely timed out) (batchId=253) TestMiniDruidKafkaCliDriver - did not produce a TEST-*.xml file (likely timed out) (batchId=253) TestNegativeCliDriver - did not produce a TEST-*.xml file (likely timed out) (batchId=95) 
[nopart_insert.q,insert_into_with_schema.q,input41.q,having1.q,create_table_failure3.q,default_constraint_invalid_default_value.q,database_drop_not_empty_restrict.q,windowing_after_orderby.q,orderbysortby.q,subquery_select_distinct2.q,authorization_uri_alterpart_loc.q,udf_last_day_error_1.q,constraint_duplicate_name.q,create_table_failure4.q,alter_tableprops_external_with_notnull_constraint.q,semijoin5.q,udf_format_number_wrong4.q,deletejar.q,exim_11_nonpart_noncompat_sorting.q,show_tables_bad_db2.q,drop_func_nonexistent.q,nopart_load.q,alter_table_non_partitioned_table_cascade.q,check_constraint_subquery.q,load_wrong_fileformat.q,check_constraint_udtf.q,lockneg_try_db_lock_conflict.q,udf_field_wrong_args_len.q,create_table_failure2.q,create_with_fk_constraints_enforced.q,groupby2_map_skew_multi_distinct.q,authorization_update_noupdatepriv.q,show_columns2.q,authorization_insert_noselectpriv.q,orc_replace_columns3_acid.q,compare_double_bigint.q,authorization_set_nonexistent_conf.q,alter_rename_partition_failure3.q,split_sample_wrong_format2.q,create_with_fk_pk_same_tab.q,compare_double_bigint_2.q,authorization_show_roles_no_admin.q,materialized_view_authorization_rebuild_no_grant.q,unionLimit.q,authorization_revoke_table_fail2.q,duplicate_insert3.q,authorization_desc_table_nosel.q,stats_noscan_non_native.q,orc_change_serde_acid.q,create_or_replace_view7.q,exim_07_nonpart_noncompat_ifof.q,create_with_unique_constraints_enforced.q,udf_concat_ws_wrong2.q,fileformat_bad_class.q,merge_negative_2.q,exim_15_part_nonpart.q,authorization_not_owner_drop_view.q,external1.q,authorization_uri_insert.q,create_with_fk_wrong_ref.q,columnstats_tbllvl_incorrect_column.q,authorization_show_parts_nosel.q,authorization_not_owner_drop_tab.q,external2.q,authorization_deletejar.q,temp_table_create_like_partitions.q,udf_greatest_error_1.q,ptf_negative_AggrFuncsWithNoGBYNoPartDef.q,alter_view_as_select_not_exist.q,touch1.q,groupby3_map_skew_multi_distinct.q,insert_into_notnull_constraint.q,ex
change_partition_neg_partition_missing.q,groupby_cube_multi_gby.q,columnstats_tbllvl.q,drop_invalid_constraint2.q,alter_table_add_partition.q,update_not_acid.q,archive5.q,alter_table_constraint_invalid_pk_col.q,ivyDownload.q,udf_instr_wrong_type.q,bad_sample_clause.q,authorization_not_owner_drop_tab2.q,authorization_alter_db_owner.q,show_columns1.q,orc_type_promotion3.q,create_view_failure8.q,strict_join.q,udf_add_months_error_1.q,groupby_cube2.q,groupby_cube1.q,groupby_rollup1.q,genericFileFormat.q,invalid_cast_from_binary_4.q,drop_invalid_constraint1.q,serde_regex.q,show_partitions1.q,check_constraint_nonboolean_expr.q,invalid_cast_from_binary_6.q,create_with_multi_pk_constraint.q,udf_field_wrong_type.q,groupby_grouping_sets4.q,groupby_grouping_sets3.q,insertsel_fail.q,udf_locate_wrong_type.q,orc_type_promotion1_acid.q,set_table_property.q,create_or_replace_view2.q,groupby_grouping_sets2.q,alter_view_failure.q,distinct_windowing_failure1.q,invalid_t_alter2.q,alter_table_constraint_invalid_fk_col1.q,invalid_varchar_length_2.q,authorization_show_grant_otheruser_alltabs.q,subquery_windowing_corr.q,compact_non_acid_table.q,authorization_view_4.q,authorization_disallow_transform.q,materialized_view_authorization_rebuild_other.q,authorization_fail_4.q,dbtxnmgr_nodblock.q,set_hiveconf_internal_variable1.q,input_part0_neg.q,udf_printf_wrong3.q,load_orc_negative2.q,druid_buckets.q,archive2.q,authorization_addjar.q,invalid_sum_syntax.q,insert_into_with_schema1.q,udf_add_months_error_2.q,dyn_part_max_per_node.q,authorization_revoke_table_fail1.q,udf_printf_wrong2.q,archive_multi3.q,udf_printf_wrong1.q,subquery_subquery_chain.q,authorization_view_disable_cbo_4.q,no_matching_udf.q,create_view_failure7.q,drop_native_udf.q,truncate_column_list_bucketing.q,authorization_uri
[jira] [Assigned] (HIVE-16144) CompactionInfo doesn't have equals/hashCode but used in Set
[ https://issues.apache.org/jira/browse/HIVE-16144?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eugene Koifman reassigned HIVE-16144: - Assignee: Igor Kryvenko (was: Eugene Koifman) > CompactionInfo doesn't have equals/hashCode but used in Set > --- > > Key: HIVE-16144 > URL: https://issues.apache.org/jira/browse/HIVE-16144 > Project: Hive > Issue Type: Bug > Components: Transactions >Reporter: Eugene Koifman >Assignee: Igor Kryvenko >Priority: Major > > CompactionTxnHandler.findPotentialCompactions() uses a Set > but CompactionInfo doesn't have equals/hashCode. > should do the same as CompactionInfo.compareTo() -- This message was sent by Atlassian JIRA (v7.6.3#76005)
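The bug class described in HIVE-16144 can be reduced to a minimal sketch (the class names below are illustrative stand-ins, not Hive's actual `CompactionInfo`): a value object stored in a `HashSet` without `equals()`/`hashCode()` falls back to identity semantics, so two logically identical instances are both retained.

```java
import java.util.HashSet;
import java.util.Set;

// Minimal reproduction: without equals()/hashCode(), HashSet dedups by
// object identity, so logical duplicates survive; with them, it dedups
// by value as intended.
public class IdentitySetDemo {
    static class NoEquals {
        final String db, table;
        NoEquals(String db, String table) { this.db = db; this.table = table; }
    }

    static class WithEquals {
        final String db, table;
        WithEquals(String db, String table) { this.db = db; this.table = table; }
        @Override public boolean equals(Object o) {
            if (!(o instanceof WithEquals)) return false;
            WithEquals w = (WithEquals) o;
            return db.equals(w.db) && table.equals(w.table);
        }
        @Override public int hashCode() { return 31 * db.hashCode() + table.hashCode(); }
    }

    public static void main(String[] args) {
        Set<NoEquals> broken = new HashSet<>();
        broken.add(new NoEquals("default", "t1"));
        broken.add(new NoEquals("default", "t1"));
        System.out.println(broken.size()); // 2 -- duplicate survives

        Set<WithEquals> fixed = new HashSet<>();
        fixed.add(new WithEquals("default", "t1"));
        fixed.add(new WithEquals("default", "t1"));
        System.out.println(fixed.size()); // 1 -- deduplicated
    }
}
```

Per the ticket's suggestion, the real fix would base `equals()`/`hashCode()` on the same fields that `CompactionInfo.compareTo()` already compares, keeping the three methods consistent.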
[jira] [Commented] (HIVE-18609) Results cache invalidation based on ACID table updates
[ https://issues.apache.org/jira/browse/HIVE-18609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16436165#comment-16436165 ] Jason Dere commented on HIVE-18609: --- Tried running the test and ran into a hang. One thing I forgot to fix in this patch after HIVE-19127 is that the cache cannot attempt to remove an entry (which tries to take a write lock) during lookup() (where it has already taken a read lock), since ReentrantReadWriteLock does not allow upgrading a read lock to a write lock. Will change the logic so that the invalid entries are not removed until after the read lock is released. {noformat} java.lang.Thread.State: WAITING (parking) at sun.misc.Unsafe.park(Native Method) - parking to wait for <0x0007bcb3f3c0> (a java.util.concurrent.locks.ReentrantReadWriteLock$NonfairSync) at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836) at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireQueued(AbstractQueuedSynchronizer.java:870) at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(AbstractQueuedSynchronizer.java:1199) at java.util.concurrent.locks.ReentrantReadWriteLock$WriteLock.lock(ReentrantReadWriteLock.java:943) at org.apache.hadoop.hive.ql.cache.results.QueryResultsCache.removeEntry(QueryResultsCache.java:659) <== Tries to take write lock at org.apache.hadoop.hive.ql.cache.results.QueryResultsCache.entryMatches(QueryResultsCache.java:647) at org.apache.hadoop.hive.ql.cache.results.QueryResultsCache.lookup(QueryResultsCache.java:408) <== Took read lock at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.checkResultsCache(SemanticAnalyzer.java:14697) at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:12055) at org.apache.hadoop.hive.ql.parse.CalcitePlanner.analyzeInternal(CalcitePlanner.java:314) at 
org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:287) at org.apache.hadoop.hive.ql.parse.ExplainSemanticAnalyzer.analyzeInternal(ExplainSemanticAnalyzer.java:160) at org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:287) at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:635) at org.apache.hadoop.hive.ql.Driver.compileInternal(Driver.java:1655) at org.apache.hadoop.hive.ql.Driver.compileAndRespond(Driver.java:1602) at org.apache.hadoop.hive.ql.Driver.compileAndRespond(Driver.java:1597) at org.apache.hadoop.hive.ql.reexec.ReExecDriver.compileAndRespond(ReExecDriver.java:126) at org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:200) at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:239) at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:188) at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:402) at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:335) at org.apache.hadoop.hive.ql.QTestUtil.executeClientInternal(QTestUtil.java:1438) at org.apache.hadoop.hive.ql.QTestUtil.executeClient(QTestUtil.java:1412) at org.apache.hadoop.hive.cli.control.CoreCliDriver.runTest(CoreCliDriver.java:177) at org.apache.hadoop.hive.cli.control.CliAdapter.runTest(CliAdapter.java:104) at org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver(TestMiniLlapLocalCliDriver.java:59) {noformat} > Results cache invalidation based on ACID table updates > -- > > Key: HIVE-18609 > URL: https://issues.apache.org/jira/browse/HIVE-18609 > Project: Hive > Issue Type: Sub-task >Affects Versions: 3.1.0 >Reporter: Jason Dere >Assignee: Jason Dere >Priority: Major > Attachments: HIVE-18609.1.patch, HIVE-18609.2.patch, > HIVE-18609.3.patch > > > Look into using the materialized view invalidation mechanisms to > automatically invalidate queries in the results cache if the underlying > tables used in the cached queries have been 
modified. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
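The root cause Jason Dere identifies in HIVE-18609, that `ReentrantReadWriteLock` does not allow upgrading a read lock to a write lock, can be shown in a few lines. Using `tryLock()` instead of `lock()` keeps the demonstration from deadlocking the way the quoted thread dump does:

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Demonstrates the deadlock cause described above: a thread already
// holding the read lock can never acquire the write lock, so calling
// writeLock().lock() there blocks forever. tryLock() shows the
// acquisition fails without hanging.
public class LockUpgradeDemo {
    public static void main(String[] args) {
        ReentrantReadWriteLock rwLock = new ReentrantReadWriteLock();
        rwLock.readLock().lock();
        try {
            // Attempted upgrade: always fails while the read lock is held.
            boolean upgraded = rwLock.writeLock().tryLock();
            System.out.println(upgraded); // false
        } finally {
            rwLock.readLock().unlock();
        }
        // Once the read lock is released, the write lock can be taken.
        System.out.println(rwLock.writeLock().tryLock()); // true
    }
}
```

Note the asymmetry: downgrading (acquiring the read lock while holding the write lock, then releasing the write lock) is permitted; only upgrading is not, which is why deferring the removal until after the read lock is released resolves the hang.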
[jira] [Commented] (HIVE-16144) CompactionInfo doesn't have equals/hashCode but used in Set
[ https://issues.apache.org/jira/browse/HIVE-16144?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16436167#comment-16436167 ] Eugene Koifman commented on HIVE-16144: --- sure. Thanks > CompactionInfo doesn't have equals/hashCode but used in Set > --- > > Key: HIVE-16144 > URL: https://issues.apache.org/jira/browse/HIVE-16144 > Project: Hive > Issue Type: Bug > Components: Transactions >Reporter: Eugene Koifman >Assignee: Igor Kryvenko >Priority: Major > > CompactionTxnHandler.findPotentialCompactions() uses a Set > but CompactionInfo doesn't have equals/hashCode. > should do the same as CompactionInfo.compareTo() -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18816) CREATE TABLE (ACID) doesn't work with TIMESTAMPLOCALTZ column type
[ https://issues.apache.org/jira/browse/HIVE-18816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Igor Kryvenko updated HIVE-18816: - Attachment: HIVE-18816.01.patch > CREATE TABLE (ACID) doesn't work with TIMESTAMPLOCALTZ column type > -- > > Key: HIVE-18816 > URL: https://issues.apache.org/jira/browse/HIVE-18816 > Project: Hive > Issue Type: Bug > Components: Transactions >Affects Versions: 3.0.0 >Reporter: Vineet Garg >Assignee: Igor Kryvenko >Priority: Major > Attachments: HIVE-18816.01.patch > > > *Reproducer* > {code:sql} > set hive.support.concurrency=true; > set hive.txn.manager=org.apache.hadoop.hive.ql.lockmgr.DbTxnManager; > CREATE TABLE table_acid(d int, tz timestamp with local time zone) > clustered by (d) into 2 buckets stored as orc TBLPROPERTIES > ('transactional'='true'); > {code} > *Error* > {code:sql} > FAILED: Execution Error, return code 1 from > org.apache.hadoop.hive.ql.exec.DDLTask. java.lang.IllegalArgumentException: > Unknown primitive type TIMESTAMPLOCALTZ > {code} > *Error stack* > {noformat} > org.apache.hadoop.hive.ql.metadata.HiveException: > java.lang.IllegalArgumentException: Unknown primitive type TIMESTAMPLOCALTZ > at org.apache.hadoop.hive.ql.metadata.Hive.createTable(Hive.java:906) > ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at > org.apache.hadoop.hive.ql.exec.DDLTask.createTable(DDLTask.java:4788) > [hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:389) > [hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:205) > [hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at > org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:97) > [hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2314) > [hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1985) > 
[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1687) > [hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1438) > [hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1427) > [hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at > org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:239) > [hive-cli-3.0.0-SNAPSHOT.jar:?] > at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:188) > [hive-cli-3.0.0-SNAPSHOT.jar:?] > at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:402) > [hive-cli-3.0.0-SNAPSHOT.jar:?] > at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:335) > [hive-cli-3.0.0-SNAPSHOT.jar:?] > at > org.apache.hadoop.hive.ql.QTestUtil.executeClientInternal(QTestUtil.java:1345) > [hive-it-util-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at > org.apache.hadoop.hive.ql.QTestUtil.executeClient(QTestUtil.java:1319) > [hive-it-util-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at > org.apache.hadoop.hive.cli.control.CoreCliDriver.runTest(CoreCliDriver.java:173) > [hive-it-util-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at > org.apache.hadoop.hive.cli.control.CliAdapter.runTest(CliAdapter.java:104) > [hive-it-util-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at > org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver(TestMiniLlapLocalCliDriver.java:59) > [test-classes/:?] > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > ~[?:1.8.0_101] > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > ~[?:1.8.0_101] > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > ~[?:1.8.0_101] > at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_101] > at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) > [junit-4.11.jar:?] 
> at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > [junit-4.11.jar:?] > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) > [junit-4.11.jar:?] > at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > [junit-4.11.jar:?] > at > org.apache.hadoop.hive.cli.control.CliAdapter$2$1.evaluate(CliAdapter.java:92) > [hive-it-util-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at org.junit.rules.RunRules.evaluate(RunRules.java:20) > [junit-4.11.jar:?] > at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java
[jira] [Updated] (HIVE-18816) CREATE TABLE (ACID) doesn't work with TIMESTAMPLOCALTZ column type
[ https://issues.apache.org/jira/browse/HIVE-18816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Igor Kryvenko updated HIVE-18816: - Status: Patch Available (was: Open) > CREATE TABLE (ACID) doesn't work with TIMESTAMPLOCALTZ column type > -- > > Key: HIVE-18816 > URL: https://issues.apache.org/jira/browse/HIVE-18816 > Project: Hive > Issue Type: Bug > Components: Transactions >Affects Versions: 3.0.0 >Reporter: Vineet Garg >Assignee: Igor Kryvenko >Priority: Major > Attachments: HIVE-18816.01.patch > > > *Reproducer* > {code:sql} > set hive.support.concurrency=true; > set hive.txn.manager=org.apache.hadoop.hive.ql.lockmgr.DbTxnManager; > CREATE TABLE table_acid(d int, tz timestamp with local time zone) > clustered by (d) into 2 buckets stored as orc TBLPROPERTIES > ('transactional'='true'); > {code} > *Error* > {code:sql} > FAILED: Execution Error, return code 1 from > org.apache.hadoop.hive.ql.exec.DDLTask. java.lang.IllegalArgumentException: > Unknown primitive type TIMESTAMPLOCALTZ > {code} > *Error stack* > {noformat} > org.apache.hadoop.hive.ql.metadata.HiveException: > java.lang.IllegalArgumentException: Unknown primitive type TIMESTAMPLOCALTZ > at org.apache.hadoop.hive.ql.metadata.Hive.createTable(Hive.java:906) > ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at > org.apache.hadoop.hive.ql.exec.DDLTask.createTable(DDLTask.java:4788) > [hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:389) > [hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:205) > [hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at > org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:97) > [hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2314) > [hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1985) > 
[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1687) > [hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1438) > [hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1427) > [hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at > org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:239) > [hive-cli-3.0.0-SNAPSHOT.jar:?] > at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:188) > [hive-cli-3.0.0-SNAPSHOT.jar:?] > at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:402) > [hive-cli-3.0.0-SNAPSHOT.jar:?] > at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:335) > [hive-cli-3.0.0-SNAPSHOT.jar:?] > at > org.apache.hadoop.hive.ql.QTestUtil.executeClientInternal(QTestUtil.java:1345) > [hive-it-util-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at > org.apache.hadoop.hive.ql.QTestUtil.executeClient(QTestUtil.java:1319) > [hive-it-util-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at > org.apache.hadoop.hive.cli.control.CoreCliDriver.runTest(CoreCliDriver.java:173) > [hive-it-util-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at > org.apache.hadoop.hive.cli.control.CliAdapter.runTest(CliAdapter.java:104) > [hive-it-util-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at > org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver(TestMiniLlapLocalCliDriver.java:59) > [test-classes/:?] > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > ~[?:1.8.0_101] > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > ~[?:1.8.0_101] > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > ~[?:1.8.0_101] > at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_101] > at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) > [junit-4.11.jar:?] 
> at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > [junit-4.11.jar:?] > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) > [junit-4.11.jar:?] > at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > [junit-4.11.jar:?] > at > org.apache.hadoop.hive.cli.control.CliAdapter$2$1.evaluate(CliAdapter.java:92) > [hive-it-util-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at org.junit.rules.RunRules.evaluate(RunRules.java:20) > [junit-4.11.jar:?] > at org.junit.runners.ParentRunner.runLeaf(ParentRunner
[jira] [Updated] (HIVE-16144) CompactionInfo doesn't have equals/hashCode but used in Set
[ https://issues.apache.org/jira/browse/HIVE-16144?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Igor Kryvenko updated HIVE-16144: - Status: Patch Available (was: Open) > CompactionInfo doesn't have equals/hashCode but used in Set > --- > > Key: HIVE-16144 > URL: https://issues.apache.org/jira/browse/HIVE-16144 > Project: Hive > Issue Type: Bug > Components: Transactions >Reporter: Eugene Koifman >Assignee: Igor Kryvenko >Priority: Major > Attachments: HIVE-16144.01.patch > > > CompactionTxnHandler.findPotentialCompactions() uses a Set > but CompactionInfo doesn't have equals/hashCode. > should do the same as CompactionInfo.compareTo() -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (HIVE-16144) CompactionInfo doesn't have equals/hashCode but used in Set
[ https://issues.apache.org/jira/browse/HIVE-16144?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Igor Kryvenko reassigned HIVE-16144: Assignee: Igor Kryvenko (was: Igor Kryvenko) > CompactionInfo doesn't have equals/hashCode but used in Set > --- > > Key: HIVE-16144 > URL: https://issues.apache.org/jira/browse/HIVE-16144 > Project: Hive > Issue Type: Bug > Components: Transactions >Reporter: Eugene Koifman >Assignee: Igor Kryvenko >Priority: Major > Attachments: HIVE-16144.01.patch > > > CompactionTxnHandler.findPotentialCompactions() uses a Set > but CompactionInfo doesn't have equals/hashCode. > should do the same as CompactionInfo.compareTo() -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-16144) CompactionInfo doesn't have equals/hashCode but used in Set
[ https://issues.apache.org/jira/browse/HIVE-16144?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Igor Kryvenko updated HIVE-16144: - Attachment: HIVE-16144.01.patch > CompactionInfo doesn't have equals/hashCode but used in Set > --- > > Key: HIVE-16144 > URL: https://issues.apache.org/jira/browse/HIVE-16144 > Project: Hive > Issue Type: Bug > Components: Transactions >Reporter: Eugene Koifman >Assignee: Igor Kryvenko >Priority: Major > Attachments: HIVE-16144.01.patch > > > CompactionTxnHandler.findPotentialCompactions() uses a Set > but CompactionInfo doesn't have equals/hashCode. > should do the same as CompactionInfo.compareTo() -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (HIVE-19106) Hive ZooKeeper Locking - Throw and Log
[ https://issues.apache.org/jira/browse/HIVE-19106?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Igor Kryvenko reassigned HIVE-19106: Assignee: Igor Kryvenko (was: Igor Kryvenko) > Hive ZooKeeper Locking - Throw and Log > -- > > Key: HIVE-19106 > URL: https://issues.apache.org/jira/browse/HIVE-19106 > Project: Hive > Issue Type: Improvement > Components: Locking >Affects Versions: 3.0.0, 2.3.2 >Reporter: BELUGA BEHR >Assignee: Igor Kryvenko >Priority: Trivial > Labels: noob > Attachments: HIVE-19106.01.patch, HIVE-19106.02.patch > > > {code:java} > ... > } catch (Exception e) { > if (tryNum >= numRetriesForUnLock) { > String name = ((ZooKeeperHiveLock)hiveLock).getPath(); > LOG.error("Node " + name + " can not be deleted after " + > numRetriesForUnLock + " attempts."); > throw new LockException(e); > } > } > {code} > Do not log and throw. Only throw and move the message into the > {{LockException}}. There is already {{error}} level logging up the stack. > https://github.com/apache/hive/blob/6d890faf22fd1ede3658a5eed097476eab3c67e9/ql/src/java/org/apache/hadoop/hive/ql/lockmgr/zookeeper/ZooKeeperHiveLockManager.java#L492-L495 -- This message was sent by Atlassian JIRA (v7.6.3#76005)
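The "throw, don't log and throw" change HIVE-19106 asks for can be sketched as below. This is a self-contained reduction, not Hive's actual code: the method and variable names mirror the quoted snippet, but `LockException` here is a local stub (the real class's constructors may differ), and `deleteNode` simulates a failing ZooKeeper delete.

```java
// Sketch of the suggested fix: fold the context into the exception
// message and let the caller's existing ERROR-level logging report the
// failure exactly once, instead of logging here and throwing as well.
public class ThrowOnlyDemo {
    static class LockException extends Exception {
        LockException(String message, Throwable cause) { super(message, cause); }
    }

    static void unlockWithRetries(String path, int numRetriesForUnLock) throws LockException {
        for (int tryNum = 1; tryNum <= numRetriesForUnLock; tryNum++) {
            try {
                deleteNode(path); // stand-in for the ZooKeeper delete
                return;
            } catch (Exception e) {
                if (tryNum >= numRetriesForUnLock) {
                    // No LOG.error here: the detail travels in the exception.
                    throw new LockException("Node " + path + " can not be deleted after "
                        + numRetriesForUnLock + " attempts.", e);
                }
            }
        }
    }

    static void deleteNode(String path) throws Exception {
        throw new Exception("simulated ZooKeeper failure");
    }

    public static void main(String[] args) {
        try {
            unlockWithRetries("/hive/locks/n1", 3);
        } catch (LockException e) {
            System.out.println(e.getMessage());
            // -> Node /hive/locks/n1 can not be deleted after 3 attempts.
        }
    }
}
```

Passing the original exception as the cause preserves the ZooKeeper stack trace, so nothing is lost by removing the local log call.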
[jira] [Assigned] (HIVE-18725) Improve error handling for subqueries if there is wrong column reference
[ https://issues.apache.org/jira/browse/HIVE-18725?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Igor Kryvenko reassigned HIVE-18725: Assignee: Igor Kryvenko (was: Igor Kryvenko) > Improve error handling for subqueries if there is wrong column reference > > > Key: HIVE-18725 > URL: https://issues.apache.org/jira/browse/HIVE-18725 > Project: Hive > Issue Type: Improvement > Components: Query Planning >Reporter: Vineet Garg >Assignee: Igor Kryvenko >Priority: Major > Attachments: HIVE-18725.01.patch, HIVE-18725.02.patch, > HIVE-18725.03.patch, HIVE-18725.04.patch, HIVE-18725.05.patch > > > If there is a column reference within a subquery that doesn't exist, Hive > throws a misleading error message. > e.g. > {code:sql} > select * from table1 where table1.col1 IN (select col2 from table2 where > table2.col1=table1.non_existing_column) and table1.col1 IN (select 4); > {code} > The above query, assuming table1 doesn't have non_existing_column, will throw > the following misleading error: > {noformat} > FAILED: SemanticException Line 0:-1 Unsupported SubQuery Expression 'col1': > Only 1 SubQuery expression is supported. > {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (HIVE-18905) HS2: SASL auth loads HiveConf for every JDBC call
[ https://issues.apache.org/jira/browse/HIVE-18905?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Igor Kryvenko reassigned HIVE-18905: Assignee: Igor Kryvenko (was: Igor Kryvenko) > HS2: SASL auth loads HiveConf for every JDBC call > - > > Key: HIVE-18905 > URL: https://issues.apache.org/jira/browse/HIVE-18905 > Project: Hive > Issue Type: Bug >Reporter: Gopal V >Assignee: Igor Kryvenko >Priority: Minor > Attachments: HIVE-18905.01.patch, HIVE-18905.03.patch, > HIVE-18905.04.patch, HIVE-18905.patch > > > SASL authentication filter does a new HiveConf() for no good reason. > {code} > public static PasswdAuthenticationProvider > getAuthenticationProvider(AuthMethods authMethod) > throws AuthenticationException { > return getAuthenticationProvider(authMethod, new HiveConf()); > } > {code} > The session HiveConf is not needed to do this operation & it can't be changed > after the HS2 starts up (today). > {code} > org.apache.hadoop.hive.conf.HiveConf.() HiveConf.java:4404 > org.apache.hive.service.auth.AuthenticationProviderFactory.getAuthenticationProvider(AuthenticationProviderFactory$AuthMethods) > AuthenticationProviderFactory.java:61 > org.apache.hive.service.auth.PlainSaslHelper$PlainServerCallbackHandler.handle(Callback[]) > PlainSaslHelper.java:106 > org.apache.hive.service.auth.PlainSaslServer.evaluateResponse(byte[]) > PlainSaslServer.java:103 > org.apache.thrift.transport.TSaslTransport$SaslParticipant.evaluateChallengeOrResponse(byte[]) > TSaslTransport.java:539 > org.apache.thrift.transport.TSaslTransport.open() TSaslTransport.java:283 > org.apache.thrift.transport.TSaslServerTransport.open() > TSaslServerTransport.java:41 > org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TTransport) > TSaslServerTransport.java:216 > org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run() > TThreadPoolServer.java:269 > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor$Worker) > ThreadPoolExecutor.java:1142 
> java.util.concurrent.ThreadPoolExecutor$Worker.run() > ThreadPoolExecutor.java:617 > java.lang.Thread.run() Thread.java:745 > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
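The fix the report suggests, constructing the configuration once instead of on every authentication call, can be sketched in plain Java. This is a hedged illustration rather than the HIVE-18905 patch itself: the `Config` class below is a hypothetical stand-in for `HiveConf`, and the counter exists only to make the once-only behavior observable.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class AuthProviderFactory {
    // Counts constructions; only here to demonstrate the once-only behavior.
    static final AtomicInteger constructions = new AtomicInteger();

    // Hypothetical stand-in for HiveConf (assumed immutable after startup).
    static final class Config {
        Config() {
            constructions.incrementAndGet();
        }
    }

    // Initialization-on-demand holder idiom: Config is built exactly once,
    // lazily and thread-safely, on first use.
    private static final class Holder {
        static final Config INSTANCE = new Config();
    }

    static Config sharedConf() {
        return Holder.INSTANCE;
    }

    public static void main(String[] args) {
        // Simulate many JDBC auth calls; the config is constructed only once.
        for (int i = 0; i < 1000; i++) {
            sharedConf();
        }
        System.out.println(constructions.get());
    }
}
```

Since the session configuration cannot change after HS2 starts (as the report notes), reusing one shared instance is safe under that assumption.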
[jira] [Assigned] (HIVE-19141) TestNegativeCliDriver insert_into_notnull_constraint, insert_into_acid_notnull failing
[ https://issues.apache.org/jira/browse/HIVE-19141?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Igor Kryvenko reassigned HIVE-19141: Assignee: Igor Kryvenko (was: Igor Kryvenko) > TestNegativeCliDriver insert_into_notnull_constraint, > insert_into_acid_notnull failing > -- > > Key: HIVE-19141 > URL: https://issues.apache.org/jira/browse/HIVE-19141 > Project: Hive > Issue Type: Sub-task >Reporter: Vineet Garg >Assignee: Igor Kryvenko >Priority: Major > Fix For: 3.0.0, 3.1.0 > > Attachments: HIVE-19141.01.patch > > > These tests have been consistently failing for a while. I suspect HIVE-18727 > has caused these failures. HIVE-18727 changed the code to throw ERROR instead > of EXCEPTION if constraints are violated. I guess NegativeCliDriver doesn't > handle errors. > The following is the full list of related failures: > TestNegativeCliDriver.alter_notnull_constraint_violation > TestNegativeCliDriver.insert_into_acid_notnull > TestNegativeCliDriver.insert_into_notnull_constraint > TestNegativeCliDriver.insert_multi_into_notnull > TestNegativeCliDriver.insert_overwrite_notnull_constraint > TestNegativeCliDriver.update_notnull_constraint -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (HIVE-18983) Add support for table properties inheritance in Create table like
[ https://issues.apache.org/jira/browse/HIVE-18983?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Igor Kryvenko reassigned HIVE-18983: Assignee: Igor Kryvenko (was: Igor Kryvenko) > Add support for table properties inheritance in Create table like > - > > Key: HIVE-18983 > URL: https://issues.apache.org/jira/browse/HIVE-18983 > Project: Hive > Issue Type: Improvement >Reporter: Igor Kryvenko >Assignee: Igor Kryvenko >Priority: Minor > Fix For: 3.1.0 > > Attachments: HIVE-18983.01.patch, HIVE-18983.02.patch, > HIVE-18983.03.patch, HIVE-18983.04.patch, HIVE-18983.05.patch, > HIVE-18983.06.patch, HIVE-18983.07.patch, HIVE-18983.08.patch, > HIVE-18983.09.patch, HIVE-18983.10.patch > > > Currently, CREATE TABLE LIKE supports table properties, > but it doesn't inherit table properties from the original table. > {code} > create table T1(a int, b int) clustered by (a) into 2 buckets stored as orc > TBLPROPERTIES ('comment'='comm'); > create table T like T1; > show create table T; > {code} > *Output:* > {code} > CREATE TABLE `T`( > `a` int, > `b` int) > CLUSTERED BY ( > a) > INTO 2 BUCKETS > ROW FORMAT SERDE > 'org.apache.hadoop.hive.ql.io.orc.OrcSerde' > STORED AS INPUTFORMAT > 'org.apache.hadoop.hive.ql.io.orc.OrcInputFormat' > OUTPUTFORMAT > 'org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat' > LOCATION > 'maprfs:/user/hive/warehouse/t' > TBLPROPERTIES ( > 'COLUMN_STATS_ACCURATE'='{\"BASIC_STATS\":\"true\"}', > 'numFiles'='0', > 'numRows'='0', > 'rawDataSize'='0', > 'totalSize'='0', > 'transient_lastDdlTime'='1521230300') > {code} > It uses just the default table properties and doesn't inherit properties from > the original table. > It would be great if CREATE TABLE LIKE inherited the original table's properties > and overrode them when they are specified in the query. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (HIVE-18845) SHOW COMAPCTIONS should show host name
[ https://issues.apache.org/jira/browse/HIVE-18845?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Igor Kryvenko reassigned HIVE-18845: Assignee: Igor Kryvenko (was: Igor Kryvenko) > SHOW COMAPCTIONS should show host name > -- > > Key: HIVE-18845 > URL: https://issues.apache.org/jira/browse/HIVE-18845 > Project: Hive > Issue Type: Improvement > Components: Transactions >Affects Versions: 1.0.0 >Reporter: Eugene Koifman >Assignee: Igor Kryvenko >Priority: Minor > Attachments: HIVE-18845.01.patch > > > Once the job starts, the WorkerId includes the hostname submitting the job, > but before that there is no way to tell which of the Metastores in an HA setup > has picked up a given item to compact. This should make it obvious which > log to look at. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (HIVE-18723) CompactorOutputCommitter.commitJob() - check rename() ret val
[ https://issues.apache.org/jira/browse/HIVE-18723?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Igor Kryvenko reassigned HIVE-18723: Assignee: Igor Kryvenko (was: Igor Kryvenko) > CompactorOutputCommitter.commitJob() - check rename() ret val > - > > Key: HIVE-18723 > URL: https://issues.apache.org/jira/browse/HIVE-18723 > Project: Hive > Issue Type: Improvement > Components: Transactions >Affects Versions: 1.0.0 >Reporter: Eugene Koifman >Assignee: Igor Kryvenko >Priority: Major > Attachments: HIVE-18723.1.patch, HIVE-18723.2.patch, > HIVE-18723.3.patch, HIVE-18723.patch > > > Right now the return value is ignored: {{fs.rename(fileStatus.getPath(), newPath); }} > Should this use {{FileUtils.rename(FileSystem fs, Path sourcePath, Path > destPath, Configuration conf) }} instead? -- This message was sent by Atlassian JIRA (v7.6.3#76005)
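As a hedged sketch of the checked-rename pattern the report asks for (not the actual CompactorOutputCommitter code), the same idea can be shown with `java.io.File`, which, like Hadoop's `FileSystem.rename`, signals failure through a boolean return value that is easy to ignore:

```java
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;

public class CheckedRename {
    // Turn the easily-ignored boolean into an exception, so a failed
    // rename cannot pass silently.
    static void renameOrThrow(File src, File dst) throws IOException {
        if (!src.renameTo(dst)) {
            throw new IOException("rename failed: " + src + " -> " + dst);
        }
    }

    public static void main(String[] args) throws IOException {
        File src = Files.createTempFile("compactor", ".tmp").toFile();
        File dst = new File(src.getParentFile(), src.getName() + ".moved");
        renameOrThrow(src, dst);
        System.out.println(dst.exists());
    }
}
```

The suggested `FileUtils.rename(fs, sourcePath, destPath, conf)` helper would serve the same purpose at the FileSystem level: surface the failure instead of dropping it.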
[jira] [Commented] (HIVE-19167) Map data type doesn't keep the order of the key/values pairs as read (Part 2, The Sequel or SQL)
[ https://issues.apache.org/jira/browse/HIVE-19167?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16436179#comment-16436179 ] Deepak Jaiswal commented on HIVE-19167: --- +1 pending results. > Map data type doesn't keep the order of the key/values pairs as read (Part 2, > The Sequel or SQL) > --- > > Key: HIVE-19167 > URL: https://issues.apache.org/jira/browse/HIVE-19167 > Project: Hive > Issue Type: Bug > Components: Hive >Reporter: Matt McCline >Assignee: Matt McCline >Priority: Critical > Fix For: 3.1.0 > > Attachments: HIVE-19167.01.patch, HIVE-19167.02.patch > > > HIVE-19116: "Vectorization: Vector Map data type doesn't keep the order of > the key/values pairs as read" didn't fix all the places where HashMap is used > instead of LinkedHashMap. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19162) SMB : Test tez_smb_1.q stops making SMB join for a query
[ https://issues.apache.org/jira/browse/HIVE-19162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Deepak Jaiswal updated HIVE-19162: -- Attachment: HIVE-19162.1.patch > SMB : Test tez_smb_1.q stops making SMB join for a query > > > Key: HIVE-19162 > URL: https://issues.apache.org/jira/browse/HIVE-19162 > Project: Hive > Issue Type: Bug >Reporter: Deepak Jaiswal >Assignee: Deepak Jaiswal >Priority: Major > Attachments: HIVE-19162.1.patch > > > The test stopped making SMB join and instead creates a mapjoin. Likely a > change in stats issue. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19162) SMB : Test tez_smb_1.q stops making SMB join for a query
[ https://issues.apache.org/jira/browse/HIVE-19162?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Deepak Jaiswal updated HIVE-19162: -- Status: Patch Available (was: Open) > SMB : Test tez_smb_1.q stops making SMB join for a query > > > Key: HIVE-19162 > URL: https://issues.apache.org/jira/browse/HIVE-19162 > Project: Hive > Issue Type: Bug >Reporter: Deepak Jaiswal >Assignee: Deepak Jaiswal >Priority: Major > Attachments: HIVE-19162.1.patch > > > The test stopped making SMB join and instead creates a mapjoin. Likely a > change in stats issue. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19162) SMB : Test tez_smb_1.q stops making SMB join for a query
[ https://issues.apache.org/jira/browse/HIVE-19162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16436182#comment-16436182 ] Deepak Jaiswal commented on HIVE-19162: --- [~ashutoshc] can you please review? > SMB : Test tez_smb_1.q stops making SMB join for a query > > > Key: HIVE-19162 > URL: https://issues.apache.org/jira/browse/HIVE-19162 > Project: Hive > Issue Type: Bug >Reporter: Deepak Jaiswal >Assignee: Deepak Jaiswal >Priority: Major > Attachments: HIVE-19162.1.patch > > > The test stopped making SMB join and instead creates a mapjoin. Likely a > change in stats issue. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-17477) PidFilePatternConverter not rolling log according to their pid
[ https://issues.apache.org/jira/browse/HIVE-17477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16436195#comment-16436195 ] Hive QA commented on HIVE-17477: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Findbugs executables are not available. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 48s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 41s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 46s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 31s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 6s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 21s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 44s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 44s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle 
{color} | {color:red} 0m 43s{color} | {color:red} ql: The patch generated 5 new + 1 unchanged - 1 fixed = 6 total (was 2) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 1s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 17s{color} | {color:red} The patch generated 1 ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 27m 32s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-10162/dev-support/hive-personality.sh | | git revision | master / b7c64b1 | | Default Java | 1.8.0_111 | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-10162/yetus/diff-checkstyle-ql.txt | | asflicense | http://104.198.109.242/logs//PreCommit-HIVE-Build-10162/yetus/patch-asflicense-problems.txt | | modules | C: common llap-server ql standalone-metastore U: . | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-10162/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. 
> PidFilePatternConverter not rolling log according to their pid > --- > > Key: HIVE-17477 > URL: https://issues.apache.org/jira/browse/HIVE-17477 > Project: Hive > Issue Type: Bug > Components: Logging >Reporter: Vlad Gudikov >Assignee: Bohdan Chupika >Priority: Major > Attachments: HIVE-17477.1.patch > > > We can use the pid in the filePattern: > appender.DRFA.filePattern = > ${sys:hive.log.dir}/${sys:hive.log.file}.%d{yyyy-MM-dd}.%pid > But when it's time to roll the logs, the RollingAppender just > renames hive.log by applying the pattern described above, so the renamed file > keeps logs that are not related to the process it is named after. The issue is that all > processes are writing to the same log and we cannot separate them by pid. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19191) Assertion error while running materialized view rewriting
[ https://issues.apache.org/jira/browse/HIVE-19191?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez updated HIVE-19191: --- Status: Patch Available (was: In Progress) > Assertion error while running materialized view rewriting > - > > Key: HIVE-19191 > URL: https://issues.apache.org/jira/browse/HIVE-19191 > Project: Hive > Issue Type: Bug > Components: Materialized views >Affects Versions: 3.0.0 >Reporter: Nita Dembla >Assignee: Jesus Camacho Rodriguez >Priority: Major > > {code:sql} > jdbc:hive2://localhost:10007/tpcds_bin_par> explain select > w_warehouse_name,avg(inv_quantity_on_hand) from inventory ,warehouse where > inv_warehouse_sk = w_warehouse_sk group by w_warehouse_name; > Error: Error running query: java.lang.AssertionError: rel > [rel#3663:HiveProject.HIVE.[](input=rel#3662:Subset#8.HIVE.[],$f0=$0,$f1=/($1, > $2))] has lower cost {1.248 rows, 1.248 cpu, 0.0 io} than best cost > {8.963453683875E8 rows, 7.967514387E8 cpu, 0.0 io} of subset > [rel#3664:Subset#9.HIVE.[]] (state=,code=0) > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (HIVE-19191) Assertion error while running materialized view rewriting
[ https://issues.apache.org/jira/browse/HIVE-19191?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez reassigned HIVE-19191: -- > Assertion error while running materialized view rewriting > - > > Key: HIVE-19191 > URL: https://issues.apache.org/jira/browse/HIVE-19191 > Project: Hive > Issue Type: Bug > Components: Materialized views >Affects Versions: 3.0.0 >Reporter: Nita Dembla >Assignee: Jesus Camacho Rodriguez >Priority: Major > > {code:sql} > jdbc:hive2://localhost:10007/tpcds_bin_par> explain select > w_warehouse_name,avg(inv_quantity_on_hand) from inventory ,warehouse where > inv_warehouse_sk = w_warehouse_sk group by w_warehouse_name; > Error: Error running query: java.lang.AssertionError: rel > [rel#3663:HiveProject.HIVE.[](input=rel#3662:Subset#8.HIVE.[],$f0=$0,$f1=/($1, > $2))] has lower cost {1.248 rows, 1.248 cpu, 0.0 io} than best cost > {8.963453683875E8 rows, 7.967514387E8 cpu, 0.0 io} of subset > [rel#3664:Subset#9.HIVE.[]] (state=,code=0) > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Work started] (HIVE-19191) Assertion error while running materialized view rewriting
[ https://issues.apache.org/jira/browse/HIVE-19191?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Work on HIVE-19191 started by Jesus Camacho Rodriguez. -- > Assertion error while running materialized view rewriting > - > > Key: HIVE-19191 > URL: https://issues.apache.org/jira/browse/HIVE-19191 > Project: Hive > Issue Type: Bug > Components: Materialized views >Affects Versions: 3.0.0 >Reporter: Nita Dembla >Assignee: Jesus Camacho Rodriguez >Priority: Major > > {code:sql} > jdbc:hive2://localhost:10007/tpcds_bin_par> explain select > w_warehouse_name,avg(inv_quantity_on_hand) from inventory ,warehouse where > inv_warehouse_sk = w_warehouse_sk group by w_warehouse_name; > Error: Error running query: java.lang.AssertionError: rel > [rel#3663:HiveProject.HIVE.[](input=rel#3662:Subset#8.HIVE.[],$f0=$0,$f1=/($1, > $2))] has lower cost {1.248 rows, 1.248 cpu, 0.0 io} than best cost > {8.963453683875E8 rows, 7.967514387E8 cpu, 0.0 io} of subset > [rel#3664:Subset#9.HIVE.[]] (state=,code=0) > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19191) Assertion error while running materialized view rewriting
[ https://issues.apache.org/jira/browse/HIVE-19191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16436217#comment-16436217 ] Jesus Camacho Rodriguez commented on HIVE-19191: It may be worth exposing the factor as a parameter. What do you think, [~ashutoshc]? > Assertion error while running materialized view rewriting > - > > Key: HIVE-19191 > URL: https://issues.apache.org/jira/browse/HIVE-19191 > Project: Hive > Issue Type: Bug > Components: Materialized views >Affects Versions: 3.0.0 >Reporter: Nita Dembla >Assignee: Jesus Camacho Rodriguez >Priority: Major > Attachments: HIVE-19191.patch > > > {code:sql} > jdbc:hive2://localhost:10007/tpcds_bin_par> explain select > w_warehouse_name,avg(inv_quantity_on_hand) from inventory ,warehouse where > inv_warehouse_sk = w_warehouse_sk group by w_warehouse_name; > Error: Error running query: java.lang.AssertionError: rel > [rel#3663:HiveProject.HIVE.[](input=rel#3662:Subset#8.HIVE.[],$f0=$0,$f1=/($1, > $2))] has lower cost {1.248 rows, 1.248 cpu, 0.0 io} than best cost > {8.963453683875E8 rows, 7.967514387E8 cpu, 0.0 io} of subset > [rel#3664:Subset#9.HIVE.[]] (state=,code=0) > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19191) Assertion error while running materialized view rewriting
[ https://issues.apache.org/jira/browse/HIVE-19191?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez updated HIVE-19191: --- Attachment: HIVE-19191.patch > Assertion error while running materialized view rewriting > - > > Key: HIVE-19191 > URL: https://issues.apache.org/jira/browse/HIVE-19191 > Project: Hive > Issue Type: Bug > Components: Materialized views >Affects Versions: 3.0.0 >Reporter: Nita Dembla >Assignee: Jesus Camacho Rodriguez >Priority: Major > Attachments: HIVE-19191.patch > > > {code:sql} > jdbc:hive2://localhost:10007/tpcds_bin_par> explain select > w_warehouse_name,avg(inv_quantity_on_hand) from inventory ,warehouse where > inv_warehouse_sk = w_warehouse_sk group by w_warehouse_name; > Error: Error running query: java.lang.AssertionError: rel > [rel#3663:HiveProject.HIVE.[](input=rel#3662:Subset#8.HIVE.[],$f0=$0,$f1=/($1, > $2))] has lower cost {1.248 rows, 1.248 cpu, 0.0 io} than best cost > {8.963453683875E8 rows, 7.967514387E8 cpu, 0.0 io} of subset > [rel#3664:Subset#9.HIVE.[]] (state=,code=0) > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19186) Multi Table INSERT statements query has a flaw for partitioned table when INSERT INTO and INSERT OVERWRITE are used
[ https://issues.apache.org/jira/browse/HIVE-19186?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Yeom updated HIVE-19186: -- Description: One problem test case is: create table intermediate(key int) partitioned by (p int) stored as orc; insert into table intermediate partition(p='455') select distinct key from src where key >= 0 order by key desc limit 2; insert into table intermediate partition(p='456') select distinct key from src where key is not null order by key asc limit 2; insert into table intermediate partition(p='457') select distinct key from src where key >= 100 order by key asc limit 2; create table multi_partitioned (key int, key2 int) partitioned by (p int); from intermediate insert into table multi_partitioned partition(p=2) select p, key insert overwrite table multi_partitioned partition(p=1) select key, p; was: One problem test case is: create table intermediate(key int) partitioned by (p int) stored as orc; insert into table intermediate partition(p='455') select distinct key from src where key >= 0 order by key desc limit 2; insert into table intermediate partition(p='456') select distinct key from src where key is not null order by key asc limit 2; insert into table intermediate partition(p='457') select distinct key from src where key >= 100 order by key asc limit 2; from intermediate insert into table multi_partitioned partition(p=2) select p, key insert overwrite table multi_partitioned partition(p=1) select key, p; > Multi Table INSERT statements query has a flaw for partitioned table when > INSERT INTO and INSERT OVERWRITE are used > --- > > Key: HIVE-19186 > URL: https://issues.apache.org/jira/browse/HIVE-19186 > Project: Hive > Issue Type: Bug > Components: Query Planning >Affects Versions: 3.0.0 >Reporter: Steve Yeom >Assignee: Steve Yeom >Priority: Major > Fix For: 3.0.0 > > > One problem test case is: > create table intermediate(key int) partitioned by (p int) stored as orc; > insert into table intermediate 
partition(p='455') select distinct key from > src where key >= 0 order by key desc limit 2; > insert into table intermediate partition(p='456') select distinct key from > src where key is not null order by key asc limit 2; > insert into table intermediate partition(p='457') select distinct key from > src where key >= 100 order by key asc limit 2; > create table multi_partitioned (key int, key2 int) partitioned by (p int); > from intermediate > insert into table multi_partitioned partition(p=2) select p, key > insert overwrite table multi_partitioned partition(p=1) select key, p; -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18609) Results cache invalidation based on ACID table updates
[ https://issues.apache.org/jira/browse/HIVE-18609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jason Dere updated HIVE-18609: -- Attachment: HIVE-18609.4.patch > Results cache invalidation based on ACID table updates > -- > > Key: HIVE-18609 > URL: https://issues.apache.org/jira/browse/HIVE-18609 > Project: Hive > Issue Type: Sub-task >Affects Versions: 3.1.0 >Reporter: Jason Dere >Assignee: Jason Dere >Priority: Major > Attachments: HIVE-18609.1.patch, HIVE-18609.2.patch, > HIVE-18609.3.patch, HIVE-18609.4.patch > > > Look into using the materialized view invalidation mechanisms to > automatically invalidate queries in the results cache if the underlying > tables used in the cached queries have been modified. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19048) Initscript errors are ignored
[ https://issues.apache.org/jira/browse/HIVE-19048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16436238#comment-16436238 ] Vihang Karajgaonkar commented on HIVE-19048: Thanks [~bharos92] for the patch. Can you confirm if {{TestBeeLineDriver}} works? > Initscript errors are ignored > - > > Key: HIVE-19048 > URL: https://issues.apache.org/jira/browse/HIVE-19048 > Project: Hive > Issue Type: Bug > Components: Beeline >Reporter: Zoltan Haindrich >Assignee: Bharathkrishna Guruvayoor Murali >Priority: Major > Attachments: HIVE-19048.1.patch > > > I've been running some queries for a while when I've noticed that my > initscript has an error; and beeline stops interpreting the initscript after > encountering the first error. > {code} > echo 'invalid;' > init.sql > echo 'select 1;' > s1.sql > beeline -u jdbc:hive2://localhost:1/ -n hive -i init.sql -f s1.sql > [...] > Running init script init.sql > 0: jdbc:hive2://localhost:1/> invalid; > Error: Error while compiling statement: FAILED: ParseException line 1:0 > cannot recognize input near 'invalid' '' '' (state=42000,code=4) > 0: jdbc:hive2://localhost:1/> select 1; > [...] > $ echo $? > 0 > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
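The behavior the bug asks for can be sketched in plain Java. This is a hypothetical illustration of the desired semantics, not Beeline's actual code: stop at the first failing statement in the init script and surface a nonzero status, instead of continuing and exiting 0.

```java
import java.util.Arrays;
import java.util.List;

public class InitScriptRunner {
    // Hypothetical statement executor; statements starting with "invalid"
    // simulate a parse error.
    static boolean execute(String stmt) {
        return !stmt.startsWith("invalid");
    }

    // Returns 0 on success, 2 at the first failure; remaining statements
    // are skipped rather than silently executed.
    static int runScript(List<String> statements) {
        for (String stmt : statements) {
            if (!execute(stmt)) {
                System.err.println("Error in init script at: " + stmt);
                return 2;
            }
        }
        return 0;
    }

    public static void main(String[] args) {
        // Mirrors the bug report's script: an invalid statement followed
        // by a valid one; the run reports failure instead of exiting 0.
        System.out.println(runScript(Arrays.asList("invalid;", "select 1;")));
    }
}
```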
[jira] [Updated] (HIVE-18986) Table rename will run java.lang.StackOverflowError in dataNucleus if the table contains large number of columns
[ https://issues.apache.org/jira/browse/HIVE-18986?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aihua Xu updated HIVE-18986: Status: In Progress (was: Patch Available) > Table rename will run java.lang.StackOverflowError in dataNucleus if the > table contains large number of columns > --- > > Key: HIVE-18986 > URL: https://issues.apache.org/jira/browse/HIVE-18986 > Project: Hive > Issue Type: Sub-task > Components: Standalone Metastore >Reporter: Aihua Xu >Assignee: Aihua Xu >Priority: Major > Fix For: 3.1.0 > > Attachments: HIVE-18986.1.patch, HIVE-18986.2.patch, > HIVE-18986.3.patch > > > If the table contains a lot of columns e.g, 5k, simple table rename would > fail with the following stack trace. The issue is datanucleus can't handle > the query with lots of colName='c1' && colName='c2' && ... . > > 2018-03-13 17:19:52,770 INFO > org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-5-thread-200]: > ugi=anonymous ip=10.17.100.135 cmd=source:10.17.100.135 alter_table: > db=default tbl=fgv_full_var_pivoted02 newtbl=fgv_full_var_pivoted 2018-03-13 > 17:20:00,495 ERROR org.apache.hadoop.hive.metastore.RetryingHMSHandler: > [pool-5-thread-200]: java.lang.StackOverflowError at > org.datanucleus.store.rdbms.sql.SQLText.toSQL(SQLText.java:330) at > org.datanucleus.store.rdbms.sql.SQLText.toSQL(SQLText.java:339) at > org.datanucleus.store.rdbms.sql.SQLText.toSQL(SQLText.java:339) at > org.datanucleus.store.rdbms.sql.SQLText.toSQL(SQLText.java:339) at > org.datanucleus.store.rdbms.sql.SQLText.toSQL(SQLText.java:339) > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
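One common way around this kind of failure, sketched here as a hedged illustration rather than the actual HIVE-18986 patch, is to split the column list into fixed-size batches so that no single query expression carries thousands of OR terms:

```java
import java.util.ArrayList;
import java.util.List;

public class BatchedUpdate {
    // Split a list into sublists of at most batchSize elements.
    static <T> List<List<T>> partition(List<T> items, int batchSize) {
        List<List<T>> batches = new ArrayList<>();
        for (int i = 0; i < items.size(); i += batchSize) {
            batches.add(items.subList(i, Math.min(i + batchSize, items.size())));
        }
        return batches;
    }

    // Build one moderate-size predicate per batch instead of one huge
    // expression over every column name. "COLUMN_NAME" is an illustrative
    // column, not necessarily the real metastore schema.
    static String predicate(List<String> cols) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < cols.size(); i++) {
            if (i > 0) sb.append(" OR ");
            sb.append("\"COLUMN_NAME\" = '").append(cols.get(i)).append('\'');
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        List<String> cols = new ArrayList<>();
        for (int i = 0; i < 5000; i++) cols.add("c" + i);
        // 5000 columns in batches of 100 -> 50 small queries, each with a
        // bounded predicate, instead of one query the SQL builder blows up on.
        List<List<String>> batches = partition(cols, 100);
        System.out.println(batches.size());
    }
}
```

Each batch would then be issued as its own query, keeping the generated SQL small enough for DataNucleus's recursive `SQLText.toSQL` to handle.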
[jira] [Updated] (HIVE-18986) Table rename will run java.lang.StackOverflowError in dataNucleus if the table contains large number of columns
[ https://issues.apache.org/jira/browse/HIVE-18986?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aihua Xu updated HIVE-18986: Attachment: (was: HIVE-18986.3.patch) > Table rename will run java.lang.StackOverflowError in dataNucleus if the > table contains large number of columns > --- > > Key: HIVE-18986 > URL: https://issues.apache.org/jira/browse/HIVE-18986 > Project: Hive > Issue Type: Sub-task > Components: Standalone Metastore >Reporter: Aihua Xu >Assignee: Aihua Xu >Priority: Major > Fix For: 3.1.0 > > Attachments: HIVE-18986.1.patch, HIVE-18986.2.patch, > HIVE-18986.3.patch > > > If the table contains a lot of columns e.g, 5k, simple table rename would > fail with the following stack trace. The issue is datanucleus can't handle > the query with lots of colName='c1' && colName='c2' && ... . > > 2018-03-13 17:19:52,770 INFO > org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-5-thread-200]: > ugi=anonymous ip=10.17.100.135 cmd=source:10.17.100.135 alter_table: > db=default tbl=fgv_full_var_pivoted02 newtbl=fgv_full_var_pivoted 2018-03-13 > 17:20:00,495 ERROR org.apache.hadoop.hive.metastore.RetryingHMSHandler: > [pool-5-thread-200]: java.lang.StackOverflowError at > org.datanucleus.store.rdbms.sql.SQLText.toSQL(SQLText.java:330) at > org.datanucleus.store.rdbms.sql.SQLText.toSQL(SQLText.java:339) at > org.datanucleus.store.rdbms.sql.SQLText.toSQL(SQLText.java:339) at > org.datanucleus.store.rdbms.sql.SQLText.toSQL(SQLText.java:339) at > org.datanucleus.store.rdbms.sql.SQLText.toSQL(SQLText.java:339) > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18986) Table rename will run java.lang.StackOverflowError in dataNucleus if the table contains large number of columns
[ https://issues.apache.org/jira/browse/HIVE-18986?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aihua Xu updated HIVE-18986: Attachment: HIVE-18986.3.patch > Table rename will run java.lang.StackOverflowError in dataNucleus if the > table contains large number of columns > --- > > Key: HIVE-18986 > URL: https://issues.apache.org/jira/browse/HIVE-18986 > Project: Hive > Issue Type: Sub-task > Components: Standalone Metastore >Reporter: Aihua Xu >Assignee: Aihua Xu >Priority: Major > Fix For: 3.1.0 > > Attachments: HIVE-18986.1.patch, HIVE-18986.2.patch, > HIVE-18986.3.patch > > > If the table contains a lot of columns e.g, 5k, simple table rename would > fail with the following stack trace. The issue is datanucleus can't handle > the query with lots of colName='c1' && colName='c2' && ... . > > 2018-03-13 17:19:52,770 INFO > org.apache.hadoop.hive.metastore.HiveMetaStore.audit: [pool-5-thread-200]: > ugi=anonymous ip=10.17.100.135 cmd=source:10.17.100.135 alter_table: > db=default tbl=fgv_full_var_pivoted02 newtbl=fgv_full_var_pivoted 2018-03-13 > 17:20:00,495 ERROR org.apache.hadoop.hive.metastore.RetryingHMSHandler: > [pool-5-thread-200]: java.lang.StackOverflowError at > org.datanucleus.store.rdbms.sql.SQLText.toSQL(SQLText.java:330) at > org.datanucleus.store.rdbms.sql.SQLText.toSQL(SQLText.java:339) at > org.datanucleus.store.rdbms.sql.SQLText.toSQL(SQLText.java:339) at > org.datanucleus.store.rdbms.sql.SQLText.toSQL(SQLText.java:339) at > org.datanucleus.store.rdbms.sql.SQLText.toSQL(SQLText.java:339) > -- This message was sent by Atlassian JIRA (v7.6.3#76005)