[jira] [Commented] (HIVE-21466) Increase Default Size of SPLIT_MAXSIZE
[ https://issues.apache.org/jira/browse/HIVE-21466?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16796822#comment-16796822 ] Hive QA commented on HIVE-21466:

(/) +1 overall

|| Vote || Subsystem || Runtime || Comment ||
|| || || || Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
|| || || || master Compile Tests ||
| 0 | mvndep | 1m 43s | Maven dependency ordering for branch |
| +1 | mvninstall | 7m 19s | master passed |
| +1 | compile | 1m 31s | master passed |
| +1 | checkstyle | 0m 59s | master passed |
| 0 | findbugs | 0m 37s | common in master has 63 extant Findbugs warnings. |
| 0 | findbugs | 4m 9s | ql in master has 2255 extant Findbugs warnings. |
| +1 | javadoc | 1m 18s | master passed |
|| || || || Patch Compile Tests ||
| 0 | mvndep | 0m 24s | Maven dependency ordering for patch |
| +1 | mvninstall | 1m 51s | the patch passed |
| +1 | compile | 1m 30s | the patch passed |
| +1 | javac | 1m 30s | the patch passed |
| +1 | checkstyle | 0m 59s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | findbugs | 5m 10s | the patch passed |
| +1 | javadoc | 1m 17s | the patch passed |
|| || || || Other Tests ||
| +1 | asflicense | 0m 15s | The patch does not generate ASF License warnings. |
| | | 29m 46s | |

|| Subsystem || Report/Notes ||
| Optional Tests | asflicense javac javadoc findbugs checkstyle compile |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-16583/dev-support/hive-personality.sh |
| git revision | master / 230db04 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| modules | C: common ql U: . |
| Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-16583/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |

This message was automatically generated.

> Increase Default Size of SPLIT_MAXSIZE
> --------------------------------------
>
> Key: HIVE-21466
> URL: https://issues.apache.org/jira/browse/HIVE-21466
> Project: Hive
> Issue Type: Improvement
> Components: Configuration
> Affects Versions: 4.0.0, 3.2.0
> Reporter: David Mollitor
> Assignee: David Mollitor
> Priority: Minor
> Attachments: HIVE-21466.1.patch, HIVE-21466.2.patch
>
> {code:java}
> MAPREDMAXSPLITSIZE(FileInputFormat.SPLIT_MAXSIZE, 25600L, "", true),
> {code}
> [https://github.com/apache/hive/blob/8d4300a02691777fc96f33861ed27e64fed72f2c/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java#L682]
>
> This field specifies a maximum size for each MR split (and possibly splits for other engines).
> This number should be a multiple of the HDFS block size. The way that this maximum is implemented is that each block is added to the split, and if the split grows to be larger than the maximum allowed, the split is submitted to the cluster and a new split is opened.
> So, imagine the following scenario:
> * HDFS block size of 16 bytes
> * Maximum split size of 40 bytes
> This will produce a split with 3 blocks: (2 x 16) = 32 bytes, and adding another block brings the split to 48 bytes, which exceeds the 40-byte maximum, so the split is closed at 3 blocks.
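The packing behavior described in the report can be sketched with a small simulation (a minimal, illustrative model using the issue's 16-byte blocks and 40-byte maximum — not Hive/Hadoop's actual FileInputFormat code):

```java
import java.util.ArrayList;
import java.util.List;

// Minimal simulation of the split-packing behavior described above:
// blocks are appended to the current split, and once the split's size
// EXCEEDS the maximum, it is closed and a new one is started.
public class SplitPacking {
    // Returns the number of blocks in each produced split.
    public static List<Integer> pack(int blockCount, long blockSize, long maxSplitSize) {
        List<Integer> splits = new ArrayList<>();
        long currentSize = 0;
        int currentBlocks = 0;
        for (int i = 0; i < blockCount; i++) {
            currentSize += blockSize;
            currentBlocks++;
            if (currentSize > maxSplitSize) {  // split exceeded the max: submit it
                splits.add(currentBlocks);
                currentSize = 0;
                currentBlocks = 0;
            }
        }
        if (currentBlocks > 0) {
            splits.add(currentBlocks);        // submit the trailing partial split
        }
        return splits;
    }

    public static void main(String[] args) {
        // 16-byte blocks with a 40-byte maximum: each full split ends up
        // holding 3 blocks (48 bytes), i.e. over the configured maximum.
        System.out.println(pack(6, 16, 40)); // [3, 3]
    }
}
```

Under this model, a maximum that is not a multiple of the block size yields splits that overshoot the configured limit, which is the misalignment the issue is pointing at.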
[jira] [Commented] (HIVE-21422) Add metrics to LRFU cache policy
[ https://issues.apache.org/jira/browse/HIVE-21422?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16796802#comment-16796802 ] Hive QA commented on HIVE-21422:

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12963006/HIVE-21422.2.patch

ERROR: -1 due to no test(s) being added or modified.
ERROR: -1 due to 1 failed/errored test(s), 15832 tests executed

Failed tests:
{noformat}
org.apache.hadoop.hive.ql.TestTxnCommands.testMergeOnTezEdges (batchId=327)
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/16582/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/16582/console
Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-16582/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.YetusPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 1 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12963006 - PreCommit-HIVE-Build

> Add metrics to LRFU cache policy
> --------------------------------
>
> Key: HIVE-21422
> URL: https://issues.apache.org/jira/browse/HIVE-21422
> Project: Hive
> Issue Type: Improvement
> Components: llap
> Affects Versions: 4.0.0
> Reporter: Oliver Draese
> Assignee: Oliver Draese
> Priority: Major
> Labels: llap
> Fix For: 4.0.0
>
> Attachments: HIVE-21422.1.patch, HIVE-21422.2.patch, HIVE-21422.patch
>
> The LRFU cache policy for the LLAP data cache doesn't provide enough insight to figure out what is cached and why something might get evicted.
> This ticket is used to add Hadoop metrics2 information (accessible via JMX) to the LRFU policy, providing the following information:
> * How much memory is cached for data buffers
> * How much memory is cached for metadata buffers
> * How large is the min-heap of the cache policy
> * How long is the eviction short list (a linked list)
> * How much memory is currently "locked" (buffers with a positive reference count) and therefore in use by a query
>
> These new counters are found in the MX bean under this path:
> Hadoop/LlapDaemon/LowLevelLrfuCachePolicy-

-- This message was sent by Atlassian JIRA (v7.6.3#76005)
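Counters published through Hadoop metrics2 land in the platform MBean server, so they can be read with plain JMX. A sketch of dumping such a bean's attributes — the LLAP ObjectName would presumably follow the path quoted above (e.g. something like "Hadoop:service=LlapDaemon,name=LowLevelLrfuCachePolicy-...", which is my assumption); the demo queries a standard java.lang MBean so it runs anywhere:

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanAttributeInfo;
import javax.management.MBeanServer;
import javax.management.ObjectName;

// Dump all readable attributes of an MBean registered in the platform
// MBean server. For the LRFU policy, the name would be the LlapDaemon
// cache-policy bean mentioned above (assumed name pattern); here we read
// a standard JVM bean so the example is self-contained.
public class JmxDump {
    public static void dump(String name) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        ObjectName on = new ObjectName(name);
        for (MBeanAttributeInfo attr : server.getMBeanInfo(on).getAttributes()) {
            if (!attr.isReadable()) continue;
            try {
                System.out.println(attr.getName() + " = " + server.getAttribute(on, attr.getName()));
            } catch (Exception e) {
                // some attributes throw on access; skip them
            }
        }
    }

    public static void main(String[] args) throws Exception {
        dump("java.lang:type=Memory"); // stands in for the LlapDaemon cache-policy bean
    }
}
```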
[jira] [Commented] (HIVE-21422) Add metrics to LRFU cache policy
[ https://issues.apache.org/jira/browse/HIVE-21422?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16796780#comment-16796780 ] Hive QA commented on HIVE-21422:

(/) +1 overall

|| Vote || Subsystem || Runtime || Comment ||
|| || || || Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
|| || || || master Compile Tests ||
| +1 | mvninstall | 8m 29s | master passed |
| +1 | compile | 0m 24s | master passed |
| +1 | checkstyle | 0m 13s | master passed |
| 0 | findbugs | 0m 46s | llap-server in master has 79 extant Findbugs warnings. |
| +1 | javadoc | 0m 16s | master passed |
|| || || || Patch Compile Tests ||
| +1 | mvninstall | 0m 24s | the patch passed |
| +1 | compile | 0m 24s | the patch passed |
| +1 | javac | 0m 24s | the patch passed |
| +1 | checkstyle | 0m 12s | llap-server: The patch generated 0 new + 0 unchanged - 24 fixed = 0 total (was 24) |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | findbugs | 0m 54s | the patch passed |
| +1 | javadoc | 0m 15s | the patch passed |
|| || || || Other Tests ||
| +1 | asflicense | 0m 13s | The patch does not generate ASF License warnings. |
| | | 12m 57s | |

|| Subsystem || Report/Notes ||
| Optional Tests | asflicense javac javadoc findbugs checkstyle compile |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-16582/dev-support/hive-personality.sh |
| git revision | master / 230db04 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| modules | C: llap-server U: llap-server |
| Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-16582/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |

This message was automatically generated.

> Add metrics to LRFU cache policy
> --------------------------------
>
> Key: HIVE-21422
> URL: https://issues.apache.org/jira/browse/HIVE-21422
> Project: Hive
> Issue Type: Improvement
> Components: llap
> Affects Versions: 4.0.0
> Reporter: Oliver Draese
> Assignee: Oliver Draese
> Priority: Major
> Labels: llap
> Fix For: 4.0.0
>
> Attachments: HIVE-21422.1.patch, HIVE-21422.2.patch, HIVE-21422.patch
>
> The LRFU cache policy for the LLAP data cache doesn't provide enough insight to figure out what is cached and why something might get evicted. This ticket is used to add Hadoop metrics2 information (accessible via JMX) to the LRFU policy, providing the following information:
> * How much memory is cached for data buffers
> * How much memory is cached for metadata buffers
> * How large is the min-heap of the cache policy
> * How long is the eviction short list (a linked list)
> * How much memory is currently "locked" (buffers with a positive reference count) and therefore in use by a query
>
> These new counters are found in the MX bean under this path:
> Hadoop/LlapDaemon/LowLevelLrfuCachePolicy-

-- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-21409) Initial SessionState ClassLoader Reused For Subsequent Sessions
[ https://issues.apache.org/jira/browse/HIVE-21409?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16796783#comment-16796783 ] Shawn Weeks commented on HIVE-21409:

I've spent a lot of time tracking this down and I've run into a dead end with a call to ReflectionUtils in HiveUtils.getAuthorizerFactory. For some reason this call is changing the class loader for the current thread.

> Initial SessionState ClassLoader Reused For Subsequent Sessions
> ---------------------------------------------------------------
>
> Key: HIVE-21409
> URL: https://issues.apache.org/jira/browse/HIVE-21409
> Project: Hive
> Issue Type: Bug
> Affects Versions: 1.2.1
> Reporter: Shawn Weeks
> Priority: Minor
> Attachments: create_class.sql, run.sql, setup.sql
>
> It appears that the first ClassLoader attached to the SessionState static instance is being reused as the parent for all future sessions. This causes any libraries added to the class path in the initial session to be added to future sessions. It also appears that further sessions may be adding jars to this initial ClassLoader as well, leading to the class path getting more and more polluted. This is occurring on a build including HIVE-11878. I've included some examples that greatly exaggerate the problem.

-- This message was sent by Atlassian JIRA (v7.6.3#76005)
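When a library call is found to replace the thread context ClassLoader as a side effect, as described in the comment above, one defensive pattern is to save and restore the loader around the call. A generic sketch (not the actual fix for HiveUtils.getAuthorizerFactory, just the save/restore idiom):

```java
import java.util.function.Supplier;

// Save/restore guard around code (such as reflective factory lookups)
// that may replace the thread context ClassLoader as a side effect.
public class ClassLoaderGuard {
    public static <T> T withRestoredClassLoader(Supplier<T> body) {
        Thread t = Thread.currentThread();
        ClassLoader saved = t.getContextClassLoader();
        try {
            return body.get();
        } finally {
            t.setContextClassLoader(saved); // undo any side effect on the thread's loader
        }
    }

    public static void main(String[] args) {
        ClassLoader before = Thread.currentThread().getContextClassLoader();
        withRestoredClassLoader(() -> {
            // simulate a library that swaps the loader out from under us
            Thread.currentThread().setContextClassLoader(
                new java.net.URLClassLoader(new java.net.URL[0]));
            return null;
        });
        System.out.println(before == Thread.currentThread().getContextClassLoader()); // true
    }
}
```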
[jira] [Commented] (HIVE-21469) Review of ZooKeeperHiveLockManager
[ https://issues.apache.org/jira/browse/HIVE-21469?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16796771#comment-16796771 ] Hive QA commented on HIVE-21469:

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12963004/HIVE-21469.2.patch

ERROR: -1 due to no test(s) being added or modified.
ERROR: -1 due to 135 failed/errored test(s), 12413 tests executed

Failed tests:
{noformat}
TestAccumuloCliDriver - did not produce a TEST-*.xml file (likely timed out) (batchId=267)
TestAutoPurgeTables - did not produce a TEST-*.xml file (likely timed out) (batchId=254)
TestBlobstoreCliDriver - did not produce a TEST-*.xml file (likely timed out) (batchId=278)
TestClearDanglingScratchDir - did not produce a TEST-*.xml file (likely timed out) (batchId=254)
TestCliDriver - did not produce a TEST-*.xml file (likely timed out) (batchId=1)
  [udf_upper.q,ctas_date.q,materialized_view_rewrite_part_2.q,groupby_grouping_sets3.q,vector_decimal_5.q,vector_case_when_conversion.q,bucket_map_join_spark4.q,timestamp_2.q,schema_evol_orc_acid_table_update_llap_io.q,date_join1.q,constprog_type.q,timestamp_ints_casts.q,udf_negative.q,orc_merge_diff_fs.q,udf_substring_index.q,results_cache_lifetime.q,cross_prod_3.q,masking_12.q,diff_part_input_formats.q,auto_join_without_localtask.q,join46.q,ctas_uses_table_location.q,tez_bmj_schema_evolution.q,bucketmapjoin4.q,udf_context_aware.q,authorization_non_id.q,mapjoin_test_outer.q,vectorization_9.q,input15.q,udf_PI.q]
TestCliDriver - did not produce a TEST-*.xml file (likely timed out) (batchId=10)
  [msck_repair_acid.q,parquet_ppd_decimal.q,authorization_load.q,udf_md5.q,alter_view_as_select.q,groupby_sort_6.q,limit0.q,select_as_omitted.q,decimal_udf.q,list_bucket_query_oneskew_3.q,vector_null_projection.q,smb_mapjoin_12.q,vector_cast_constant.q,encryption_insert_partition_static.q,semijoin3.q,reloadJar.q,orc_merge_incompat3.q,groupby2_limit.q,sort_merge_join_desc_1.q,vector_fullouter_mapjoin_1_optimized.q,ba_table3.q,mi.q,input0.q,parquet_map_of_arrays_of_ints.q,auto_join11.q,delete_whole_partition.q,authorization_view_disable_cbo_3.q,avro_date.q,udf_like.q,compute_stats_decimal.q]
TestCliDriver - did not produce a TEST-*.xml file (likely timed out) (batchId=11)
  [cbo_rp_lineage2.q,join_emit_interval.q,druidkafkamini_avro.q,mm_iow_temp.q,mapjoin2.q,schema_evol_text_vec_table_llap_io.q,vector_windowing_navfn.q,vectorization_12.q,vector_number_compare_projection.q,parquet_table_with_subschema.q,parquet_join2.q,authorization_8.q,exim_07_all_part_over_nonoverlap.q,parquet_ppd_char.q,udf_locate.q,nullgroup4_multi_distinct.q,join_rc.q,bucket6.q,schema_evol_text_vecrow_table.q,udf_likeany.q,orc_ppd_char.q,udf_boolean.q,udf_xpath_double.q,orc_ppd_same_table_multiple_aliases.q,materialized_view_drop.q,alter2.q,vector_full_outer_join.q,typechangetest.q,distinct_66.q,rcfile_merge4.q]
TestCliDriver - did not produce a TEST-*.xml file (likely timed out) (batchId=12)
  [skewjoinopt15.q,vector_coalesce.q,udf_elt.q,join44.q,udf5.q,udf_case_thrift.q,inputddl2.q,join_constraints_optimization.q,correlationoptimizer13.q,udf_testlength.q,parquet_vectorization_1.q,ppd_repeated_alias.q,correlationoptimizer4.q,ppd_windowing2.q,udf_union.q,masking_1_newdb.q,vector_leftsemi_mapjoin.q,udf_string.q,repl_1_drop.q,cbo_rp_insert.q,ppd_clusterby.q,udf_trunc_number.q,union10.q,vector_if_expr.q,load_dyn_part7.q,llap_acid2.q,groupby_constcolval.q,acid_mapjoin.q,explainanalyze_5.q,distinct_windowing.q]
TestCliDriver - did not produce a TEST-*.xml file (likely timed out) (batchId=13)
  [load_dyn_part3.q,autoColumnStats_4.q,drop_table.q,stat_estimate_drill.q,auto_join33.q,parquet_ppd_varchar.q,udf_sha2.q,groupby5_map_skew.q,merge4.q,storage_format_descriptor.q,mapjoin_hook.q,multi_column_in_single.q,schema_evol_orc_nonvec_table.q,cbo_rp_subq_in.q,authorization_view_disable_cbo_4.q,list_bucket_dml_2.q,cbo_rp_semijoin.q,char_2.q,non_ascii_literal2.q,load_part_authsuccess.q,auto_sortmerge_join_15.q,explain_rearrange.q,stats_nonpart.q,varchar_union1.q,vectorized_mapjoin3.q,input21.q,vector_udf2.q,groupby_cube_multi_gby.q,annotate_stats_limit.q,union34.q]
TestCliDriver - did not produce a TEST-*.xml file (likely timed out) (batchId=14)
  [auto_join18.q,partition_coltype_literals.q,input1_limit.q,udf_divide.q,input40.q,udf_round_2_auto_stats.q,correlationoptimizer8.q,auto_sortmerge_join_14.q,udf_array_contains.q,union22.q,non_ascii_literal1.q,bucket_map_join_tez2.q,sample_islocalmode_hook.q,literal_decimal.q,constprog2.q,parquet_external_time.q,infer_bucket_sort_reducers_power_two.q,input20.q,smb_join_partition_key.q,union_remove_14.q,vector_count.q,udf_if.q,input38.q,decimal_trailing.q,load_fs_overwrite.q,notable_alias2.q,join_reorder.q,constant_prop.q,sample1.q,bucketmapjoin8.q]
TestCliDriver - did not produce a TEST-*.xml
[jira] [Updated] (HIVE-21446) Hive Server going OOM during hive external table replications
[ https://issues.apache.org/jira/browse/HIVE-21446?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] mahesh kumar behera updated HIVE-21446: --- Status: Patch Available (was: Open) > Hive Server going OOM during hive external table replications > - > > Key: HIVE-21446 > URL: https://issues.apache.org/jira/browse/HIVE-21446 > Project: Hive > Issue Type: Bug > Components: repl >Affects Versions: 4.0.0 >Reporter: mahesh kumar behera >Assignee: mahesh kumar behera >Priority: Major > Labels: pull-request-available > Fix For: 4.0.0 > > Attachments: HIVE-21446.01.patch, HIVE-21446.02.patch, > HIVE-21446.03.patch > > Time Spent: 2h 40m > Remaining Estimate: 0h > > The file system objects opened using proxy users are not closed. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-21446) Hive Server going OOM during hive external table replications
[ https://issues.apache.org/jira/browse/HIVE-21446?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] mahesh kumar behera updated HIVE-21446: --- Attachment: HIVE-21446.03.patch > Hive Server going OOM during hive external table replications > - > > Key: HIVE-21446 > URL: https://issues.apache.org/jira/browse/HIVE-21446 > Project: Hive > Issue Type: Bug > Components: repl >Affects Versions: 4.0.0 >Reporter: mahesh kumar behera >Assignee: mahesh kumar behera >Priority: Major > Labels: pull-request-available > Fix For: 4.0.0 > > Attachments: HIVE-21446.01.patch, HIVE-21446.02.patch, > HIVE-21446.03.patch > > Time Spent: 2h 40m > Remaining Estimate: 0h > > The file system objects opened using proxy users are not closed. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-21446) Hive Server going OOM during hive external table replications
[ https://issues.apache.org/jira/browse/HIVE-21446?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] mahesh kumar behera updated HIVE-21446: --- Status: Open (was: Patch Available) > Hive Server going OOM during hive external table replications > - > > Key: HIVE-21446 > URL: https://issues.apache.org/jira/browse/HIVE-21446 > Project: Hive > Issue Type: Bug > Components: repl >Affects Versions: 4.0.0 >Reporter: mahesh kumar behera >Assignee: mahesh kumar behera >Priority: Major > Labels: pull-request-available > Fix For: 4.0.0 > > Attachments: HIVE-21446.01.patch, HIVE-21446.02.patch, > HIVE-21446.03.patch > > Time Spent: 2h 40m > Remaining Estimate: 0h > > The file system objects opened using proxy users are not closed. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-21446) Hive Server going OOM during hive external table replications
[ https://issues.apache.org/jira/browse/HIVE-21446?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] mahesh kumar behera updated HIVE-21446: --- Attachment: (was: HIVE-21446.03.patch) > Hive Server going OOM during hive external table replications > - > > Key: HIVE-21446 > URL: https://issues.apache.org/jira/browse/HIVE-21446 > Project: Hive > Issue Type: Bug > Components: repl >Affects Versions: 4.0.0 >Reporter: mahesh kumar behera >Assignee: mahesh kumar behera >Priority: Major > Labels: pull-request-available > Fix For: 4.0.0 > > Attachments: HIVE-21446.01.patch, HIVE-21446.02.patch, > HIVE-21446.03.patch > > Time Spent: 2h 40m > Remaining Estimate: 0h > > The file system objects opened using proxy users are not closed. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
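The leak behind the OOM reported in this issue — file system objects opened under proxy users and never closed — is the classic "acquire per user, forget to release" pattern; in Hadoop the relevant cleanup call is typically FileSystem.closeAllForUGI(proxyUgi). A dependency-free sketch of the try/finally fix, with a stand-in resource instead of a real FileSystem:

```java
import java.io.Closeable;

// Dependency-free sketch: a resource acquired per proxy user is released
// in a finally block. In Hive's case the resource would be a Hadoop
// FileSystem cached per UGI, and the cleanup call would be
// FileSystem.closeAllForUGI(proxyUgi) (stated as context, not shown here).
public class ProxyResourceDemo {
    static class TrackedResource implements Closeable {
        boolean closed = false;
        @Override public void close() { closed = true; }
    }

    // Perform replication work on behalf of `user`, releasing the handle
    // even if the work throws. Returns the resource so callers can inspect it.
    public static TrackedResource replicateAs(String user) {
        TrackedResource fs = new TrackedResource(); // stands in for a per-UGI FileSystem
        try {
            // ... copy external-table files on behalf of `user` ...
        } finally {
            fs.close(); // without this, one handle leaks per replicated table
        }
        return fs;
    }

    public static void main(String[] args) {
        System.out.println(replicateAs("hive").closed); // true
    }
}
```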
[jira] [Work logged] (HIVE-21283) Create Synonym mid for substr, position for locate
[ https://issues.apache.org/jira/browse/HIVE-21283?focusedWorklogId=215939=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-215939 ] ASF GitHub Bot logged work on HIVE-21283:

Author: ASF GitHub Bot
Created on: 20/Mar/19 01:38
Start Date: 20/Mar/19 01:38
Worklog Time Spent: 10m
Work Description: sankarh commented on issue #540: HIVE-21283 Synonyms for the existing functions
URL: https://github.com/apache/hive/pull/540#issuecomment-474649754

@rmsmani Yes, that is a problem with Hive ptest. Lots of flaky tests. But it was mandated to have a green build before commit. Please re-submit the same patch and try again.

This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org

Issue Time Tracking
-------------------
Worklog Id: (was: 215939)
Time Spent: 2h 50m (was: 2h 40m)

> Create Synonym mid for substr, position for locate
> --------------------------------------------------
>
> Key: HIVE-21283
> URL: https://issues.apache.org/jira/browse/HIVE-21283
> Project: Hive
> Issue Type: New Feature
> Reporter: Mani M
> Assignee: Mani M
> Priority: Minor
> Labels: UDF, pull-request-available, todoc4.0
> Fix For: 4.0.0
>
> Attachments: HIVE.21283.03.PATCH, HIVE.21283.04.PATCH, HIVE.21283.05.PATCH, HIVE.21283.06.PATCH, HIVE.21283.07.PATCH, HIVE.21283.08.PATCH, HIVE.21283.09.PATCH, HIVE.21283.2.PATCH, HIVE.21283.PATCH, image-2019-03-16-21-31-15-541.png, image-2019-03-16-21-33-18-898.png
>
> Time Spent: 2h 50m
> Remaining Estimate: 0h
>
> Create new synonyms for the existing functions:
> * mid for substr
> * position for locate

-- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-21456) Hive Metastore HTTP Thrift
[ https://issues.apache.org/jira/browse/HIVE-21456?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Amit Khanna updated HIVE-21456: --- Attachment: HIVE-21456.4.patch > Hive Metastore HTTP Thrift > -- > > Key: HIVE-21456 > URL: https://issues.apache.org/jira/browse/HIVE-21456 > Project: Hive > Issue Type: New Feature > Components: Metastore, Standalone Metastore >Reporter: Amit Khanna >Assignee: Amit Khanna >Priority: Major > Attachments: HIVE-21456.2.patch, HIVE-21456.3.patch, > HIVE-21456.4.patch, HIVE-21456.patch > > > Hive Metastore currently doesn't have support for HTTP transport because of > which it is not possible to access it via Knox. Adding support for Thrift > over HTTP transport will allow the clients to access via Knox -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-21482) Partition discovery table property is added to non-partitioned external tables
[ https://issues.apache.org/jira/browse/HIVE-21482?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Prasanth Jayachandran updated HIVE-21482: - Attachment: HIVE-21482.1.patch > Partition discovery table property is added to non-partitioned external tables > -- > > Key: HIVE-21482 > URL: https://issues.apache.org/jira/browse/HIVE-21482 > Project: Hive > Issue Type: Bug >Affects Versions: 4.0.0 >Reporter: Prasanth Jayachandran >Assignee: Prasanth Jayachandran >Priority: Major > Attachments: HIVE-21482.1.patch > > > Automatic partition discovery is added to external tables by default. But it > doesn't check if the external table is partitioned or not. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-21482) Partition discovery table property is added to non-partitioned external tables
[ https://issues.apache.org/jira/browse/HIVE-21482?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Prasanth Jayachandran updated HIVE-21482: - Status: Patch Available (was: Open) > Partition discovery table property is added to non-partitioned external tables > -- > > Key: HIVE-21482 > URL: https://issues.apache.org/jira/browse/HIVE-21482 > Project: Hive > Issue Type: Bug >Affects Versions: 4.0.0 >Reporter: Prasanth Jayachandran >Assignee: Prasanth Jayachandran >Priority: Major > Attachments: HIVE-21482.1.patch > > > Automatic partition discovery is added to external tables by default. But it > doesn't check if the external table is partitioned or not. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-21482) Partition discovery table property is added to non-partitioned external tables
[ https://issues.apache.org/jira/browse/HIVE-21482?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16796663#comment-16796663 ] Prasanth Jayachandran commented on HIVE-21482: -- [~jdere] can you please review this patch? HIVE-20707 added partition discovery table property for all external tables without looking if the table is partitioned or not. This patch fixes it. > Partition discovery table property is added to non-partitioned external tables > -- > > Key: HIVE-21482 > URL: https://issues.apache.org/jira/browse/HIVE-21482 > Project: Hive > Issue Type: Bug >Affects Versions: 4.0.0 >Reporter: Prasanth Jayachandran >Assignee: Prasanth Jayachandran >Priority: Major > Attachments: HIVE-21482.1.patch > > > Automatic partition discovery is added to external tables by default. But it > doesn't check if the external table is partitioned or not. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
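The fix described above amounts to guarding the default on a partition check: only tag an external table for automatic partition discovery when it actually has partition columns. A sketch of that guard (the property key "discover.partitions" is my assumption of what HIVE-20707 uses; the conditional is the point):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the guarded default: set the partition-discovery table property
// only for external tables that actually have partition columns.
// "discover.partitions" is an assumed property key, not verified here.
public class PartitionDiscoveryDefault {
    public static void applyDefault(Map<String, String> tblProps,
                                    boolean isExternal, int partitionColumnCount) {
        if (isExternal && partitionColumnCount > 0) {
            tblProps.putIfAbsent("discover.partitions", "true");
        }
    }

    public static void main(String[] args) {
        Map<String, String> props = new HashMap<>();
        applyDefault(props, true, 0); // non-partitioned external table: no property added
        System.out.println(props.containsKey("discover.partitions")); // false
    }
}
```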
[jira] [Commented] (HIVE-21469) Review of ZooKeeperHiveLockManager
[ https://issues.apache.org/jira/browse/HIVE-21469?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16796662#comment-16796662 ] Hive QA commented on HIVE-21469:

(/) +1 overall

|| Vote || Subsystem || Runtime || Comment ||
|| || || || Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
|| || || || master Compile Tests ||
| +1 | mvninstall | 8m 53s | master passed |
| +1 | compile | 1m 16s | master passed |
| +1 | checkstyle | 0m 40s | master passed |
| 0 | findbugs | 4m 11s | ql in master has 2255 extant Findbugs warnings. |
| +1 | javadoc | 1m 1s | master passed |
|| || || || Patch Compile Tests ||
| +1 | mvninstall | 1m 36s | the patch passed |
| +1 | compile | 1m 12s | the patch passed |
| +1 | javac | 1m 12s | the patch passed |
| +1 | checkstyle | 0m 39s | ql: The patch generated 0 new + 10 unchanged - 23 fixed = 10 total (was 33) |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | findbugs | 4m 24s | the patch passed |
| +1 | javadoc | 1m 1s | the patch passed |
|| || || || Other Tests ||
| +1 | asflicense | 0m 14s | The patch does not generate ASF License warnings. |
| | | 25m 38s | |

|| Subsystem || Report/Notes ||
| Optional Tests | asflicense javac javadoc findbugs checkstyle compile |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-16581/dev-support/hive-personality.sh |
| git revision | master / 230db04 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| modules | C: ql U: ql |
| Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-16581/yetus.txt |
| Powered by | Apache Yetus http://yetus.apache.org |

This message was automatically generated.
> Review of ZooKeeperHiveLockManager
> ----------------------------------
>
> Key: HIVE-21469
> URL: https://issues.apache.org/jira/browse/HIVE-21469
> Project: Hive
> Issue Type: Improvement
> Components: Locking
> Affects Versions: 4.0.0, 3.2.0
> Reporter: David Mollitor
> Assignee: David Mollitor
> Priority: Major
> Attachments: HIVE-21469.1.patch, HIVE-21469.2.patch
>
> A lot of sins in this class to resolve:
> {code:java}
>   @Override
>   public void setContext(HiveLockManagerCtx ctx) throws LockException {
>     try {
>       curatorFramework = CuratorFrameworkSingleton.getInstance(conf);
>       parent = conf.getVar(HiveConf.ConfVars.HIVE_ZOOKEEPER_NAMESPACE);
>       try {
>         curatorFramework.create().withMode(CreateMode.PERSISTENT).forPath("/" + parent, new byte[0]);
>       } catch (Exception e) {
>         // ignore if the parent already exists
>         if (!(e instanceof KeeperException) || ((KeeperException)e).code() != KeeperException.Code.NODEEXISTS) {
>           LOG.warn("Unexpected ZK exception when creating parent node /" + parent, e);
>         }
>       }
> {code}
> Every time a new session is created and this {{setContext}} method is called, it attempts to create the root node. I have seen that, even though the root node exists, a create-node action is written to the ZK logs. Check first if the node exists before trying to create it.
> {code:java}
>     try {
>       curatorFramework.delete().forPath(zLock.getPath());
>     } catch (InterruptedException ie) {
>
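The suggested fix — probe for the parent znode before creating it — would map onto Curator's checkExists() call. Since no live ZooKeeper is available here, this sketch abstracts the two Curator calls behind functional interfaces (stand-ins for checkExists().forPath(...) and create().forPath(...)) and keeps only the guard logic; it is an illustration of the suggestion, not the committed patch:

```java
import java.util.HashSet;
import java.util.Set;
import java.util.function.Consumer;
import java.util.function.Predicate;

// Guard logic for the fix suggested above: issue a create for the parent
// znode only when it does not already exist. `exists`/`create` stand in
// for Curator's checkExists()/create() operations (assumed fix shape).
public class ParentNodeGuard {
    // Returns true if a create was actually issued.
    public static boolean ensureParent(String path, Predicate<String> exists, Consumer<String> create) {
        if (exists.test(path)) {
            return false;        // nothing written to the ZK transaction log
        }
        create.accept(path);     // only now do we issue a create
        return true;
    }

    public static void main(String[] args) {
        Set<String> znodes = new HashSet<>();
        String parent = "/hive_zookeeper_namespace";
        ensureParent(parent, znodes::contains, znodes::add); // first session: creates
        ensureParent(parent, znodes::contains, znodes::add); // later sessions: no-op
        System.out.println(znodes); // [/hive_zookeeper_namespace]
    }
}
```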
[jira] [Commented] (HIVE-21468) Case sensitivity in identifier names for JDBC storage handler
[ https://issues.apache.org/jira/browse/HIVE-21468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16796656#comment-16796656 ] Jesus Camacho Rodriguez commented on HIVE-21468:

[~daijy], could you take a look? Thanks

> Case sensitivity in identifier names for JDBC storage handler
> -------------------------------------------------------------
>
> Key: HIVE-21468
> URL: https://issues.apache.org/jira/browse/HIVE-21468
> Project: Hive
> Issue Type: Bug
> Components: CBO
> Reporter: Jesus Camacho Rodriguez
> Assignee: Jesus Camacho Rodriguez
> Priority: Major
> Attachments: HIVE-21468.01.patch, HIVE-21468.02.patch, HIVE-21468.patch
>
> Currently, when Calcite generates the SQL query for the JDBC storage handler, it ignores capitalization of identifier names, which can lead to errors at execution time (though the query is otherwise generated properly).

-- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-21468) Case sensitivity in identifier names for JDBC storage handler
[ https://issues.apache.org/jira/browse/HIVE-21468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16796652#comment-16796652 ] Hive QA commented on HIVE-21468: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12963002/HIVE-21468.02.patch {color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified. {color:green}SUCCESS:{color} +1 due to 15833 tests passed Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/16580/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/16580/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-16580/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase {noformat} This message is automatically generated. ATTACHMENT ID: 12963002 - PreCommit-HIVE-Build > Case sensitivity in identifier names for JDBC storage handler > - > > Key: HIVE-21468 > URL: https://issues.apache.org/jira/browse/HIVE-21468 > Project: Hive > Issue Type: Bug > Components: CBO >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez >Priority: Major > Attachments: HIVE-21468.01.patch, HIVE-21468.02.patch, > HIVE-21468.patch > > > Currently, when Calcite generates the SQL query for the JDBC storage handler, > it will ignore capitalization for the identifiers names, which can lead to > errors at execution time (though the query is properly generated). -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Work logged] (HIVE-21283) Create Synonym mid for substr, position for locate
[ https://issues.apache.org/jira/browse/HIVE-21283?focusedWorklogId=215908=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-215908 ] ASF GitHub Bot logged work on HIVE-21283: - Author: ASF GitHub Bot Created on: 20/Mar/19 00:32 Start Date: 20/Mar/19 00:32 Worklog Time Spent: 10m Work Description: rmsmani commented on issue #540: HIVE-21283 Synonyms for the existing functions URL: https://github.com/apache/hive/pull/540#issuecomment-474636941 Hi @sankarh Submitted patch 9 yesterday night; it failed due to yet another flaky test This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org Issue Time Tracking --- Worklog Id: (was: 215908) Time Spent: 2h 40m (was: 2.5h) > Create Synonym mid for substr, position for locate > > > Key: HIVE-21283 > URL: https://issues.apache.org/jira/browse/HIVE-21283 > Project: Hive > Issue Type: New Feature >Reporter: Mani M >Assignee: Mani M >Priority: Minor > Labels: UDF, pull-request-available, todoc4.0 > Fix For: 4.0.0 > > Attachments: HIVE.21283.03.PATCH, HIVE.21283.04.PATCH, > HIVE.21283.05.PATCH, HIVE.21283.06.PATCH, HIVE.21283.07.PATCH, > HIVE.21283.08.PATCH, HIVE.21283.09.PATCH, HIVE.21283.2.PATCH, > HIVE.21283.PATCH, image-2019-03-16-21-31-15-541.png, > image-2019-03-16-21-33-18-898.png > > Time Spent: 2h 40m > Remaining Estimate: 0h > > Create new synonyms for the existing functions > > mid for substr > position for locate -- This message was sent by Atlassian JIRA (v7.6.3#76005)
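Once the synonyms land, the intent per the issue description is that the new names behave identically to the existing functions. Expected usage in HiveQL (the `mid`/`position` results are the intended behavior of the patch, not verified output):

```sql
SELECT substr('synonym', 1, 3);    -- 'syn'
SELECT mid('synonym', 1, 3);       -- synonym for substr; should also yield 'syn'
SELECT locate('nym', 'synonym');   -- 5
SELECT position('nym', 'synonym'); -- synonym for locate; should also yield 5
```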
[jira] [Commented] (HIVE-21468) Case sensitivity in identifier names for JDBC storage handler
[ https://issues.apache.org/jira/browse/HIVE-21468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16796642#comment-16796642 ] Hive QA commented on HIVE-21468: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 53s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 4s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 34s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 51s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 4m 18s{color} | {color:blue} ql in master has 2255 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 24s{color} | {color:blue} jdbc-handler in master has 12 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 17s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 27s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 53s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 32s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 32s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 42s{color} | {color:red} ql: The patch generated 2 new + 15 unchanged - 0 fixed = 17 total (was 15) {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 10s{color} | {color:red} jdbc-handler: The patch generated 3 new + 8 unchanged - 0 fixed = 11 total (was 8) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 27s{color} | {color:green} ql in the patch passed. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 33s{color} | {color:green} jdbc-handler generated 0 new + 11 unchanged - 1 fixed = 11 total (was 12) {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 18s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 30m 26s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-16580/dev-support/hive-personality.sh | | git revision | master / 230db04 | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-16580/yetus/diff-checkstyle-ql.txt | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-16580/yetus/diff-checkstyle-jdbc-handler.txt | | modules | C: ql jdbc-handler itests U: . | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-16580/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. > Case sensitivity in identifier names for JDBC storage handler > - > > Key: HIVE-21468 > URL: https://issues.apache.org/jira/browse/HIVE-21468 > Project: Hive > Issue Type: Bug > Components: CBO >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez >Priority: Major > Attachments: HIVE-21468.01.patch, HIVE-21468.02.patch, > HIVE-21468.patch > > > Currently, when Calcite generates the SQL query for the
[jira] [Assigned] (HIVE-21482) Partition discovery table property is added to non-partitioned external tables
[ https://issues.apache.org/jira/browse/HIVE-21482?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Prasanth Jayachandran reassigned HIVE-21482: > Partition discovery table property is added to non-partitioned external tables > -- > > Key: HIVE-21482 > URL: https://issues.apache.org/jira/browse/HIVE-21482 > Project: Hive > Issue Type: Bug >Affects Versions: 4.0.0 >Reporter: Prasanth Jayachandran >Assignee: Prasanth Jayachandran >Priority: Major > > Automatic partition discovery is added to external tables by default. But it > doesn't check if the external table is partitioned or not. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-21386) Extend the fetch task enhancement done in HIVE-21279 to make it work with query result cache
[ https://issues.apache.org/jira/browse/HIVE-21386?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16796609#comment-16796609 ] Hive QA commented on HIVE-21386: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12962901/HIVE-21386.1.patch {color:red}ERROR:{color} -1 due to no test(s) being added or modified. {color:red}ERROR:{color} -1 due to 53 failed/errored test(s), 15832 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[input13] (batchId=82) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[merge_empty] (batchId=41) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[null_column] (batchId=27) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[nullformat] (batchId=47) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[orc_merge10] (batchId=69) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[orc_merge11] (batchId=43) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[orc_merge3] (batchId=29) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[orc_merge4] (batchId=71) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[orc_merge5] (batchId=61) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[orc_merge6] (batchId=37) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[orc_merge8] (batchId=91) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[orc_merge9] (batchId=29) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[orc_merge_incompat1] (batchId=73) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[orc_merge_incompat2] (batchId=91) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[orc_merge_incompat3] (batchId=10) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[orc_merge_incompat_schema] (batchId=42) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[orc_merge_incompat_writer_version] (batchId=94) 
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[rename_external_partition_location] (batchId=53) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[subq] (batchId=48) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[union] (batchId=4) org.apache.hadoop.hive.cli.TestHBaseCliDriver.testCliDriver[hbase_bulk] (batchId=108) org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[orc_merge10] (batchId=156) org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[orc_merge3] (batchId=153) org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[orc_merge4] (batchId=156) org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[results_cache_diff_fs] (batchId=155) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[acid_vectorization_original] (batchId=181) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[orc_merge11] (batchId=170) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[orc_merge5] (batchId=174) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[orc_merge6] (batchId=168) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[orc_merge7] (batchId=182) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[orc_merge8] (batchId=182) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[orc_merge9] (batchId=165) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[orc_merge_incompat1] (batchId=177) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[orc_merge_incompat2] (batchId=182) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[orc_merge_incompat3] (batchId=161) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[orc_merge_incompat_schema] (batchId=169) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[orc_merge_incompat_writer_version] (batchId=183) 
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_orc_merge_incompat_schema] (batchId=179) org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[orc_merge3] (batchId=190) org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[orc_merge4] (batchId=192) org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[orc_merge5] (batchId=191) org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[orc_merge6] (batchId=191) org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[orc_merge7] (batchId=193) org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[orc_merge8] (batchId=193) org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[orc_merge9] (batchId=190) org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[orc_merge_incompat1] (batchId=192) org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[orc_merge_incompat2] (batchId=193)
[jira] [Comment Edited] (HIVE-21480) Fix test TestHiveMetaStore.testJDOPersistanceManagerCleanup
[ https://issues.apache.org/jira/browse/HIVE-21480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16796607#comment-16796607 ] Morio Ramdenbourg edited comment on HIVE-21480 at 3/19/19 11:15 PM: Update: the current implementation of the test does nothing. The getJDOPersistenceManagerCacheSize method was always returning -1, since the fields in ObjectStore had been updated, and it could not grab the actual size of the PersistenceManagerCache. was (Author: mramdenbourg): Update: the test was testing nothing. The getJDOPersistenceManagerCacheSize method was always returning -1, since the fields in ObjectStore had been updated, and it could not grab the actual size of the PersistenceManagerCache. > Fix test TestHiveMetaStore.testJDOPersistanceManagerCleanup > --- > > Key: HIVE-21480 > URL: https://issues.apache.org/jira/browse/HIVE-21480 > Project: Hive > Issue Type: Improvement > Components: Standalone Metastore >Affects Versions: 4.0.0 >Reporter: Morio Ramdenbourg >Assignee: Morio Ramdenbourg >Priority: Major > > [TestHiveMetaStore#testJDOPersistanceManagerCleanup|https://github.com/apache/hive/blob/master/standalone-metastore/metastore-server/src/test/java/org/apache/hadoop/hive/metastore/TestHiveMetaStore.java#L3140-L3162] > tests whether the JDO persistence manager cache cleanup was performed > correctly when a HiveMetaStoreClient executes an API call, and closes. It > does this by ensuring that the cache object count before the API call, and > after closing, are the same. However, there are some assumptions that are not > always correct, and can cause flakiness. > For example, lingering resources could be present from previous tests or from > setup depending on how PTest runs it, and can cause the object count to > sometimes be incorrect. We should rewrite this test to account for this > flakiness that can occur. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-21480) Fix test TestHiveMetaStore.testJDOPersistanceManagerCleanup
[ https://issues.apache.org/jira/browse/HIVE-21480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16796607#comment-16796607 ] Morio Ramdenbourg commented on HIVE-21480: -- Update: the test was testing nothing. The getJDOPersistenceManagerCacheSize method was always returning -1, since the fields in ObjectStore had been updated, and it could not grab the actual size of the PersistenceManagerCache. > Fix test TestHiveMetaStore.testJDOPersistanceManagerCleanup > --- > > Key: HIVE-21480 > URL: https://issues.apache.org/jira/browse/HIVE-21480 > Project: Hive > Issue Type: Improvement > Components: Standalone Metastore >Affects Versions: 4.0.0 >Reporter: Morio Ramdenbourg >Assignee: Morio Ramdenbourg >Priority: Major > > [TestHiveMetaStore#testJDOPersistanceManagerCleanup|https://github.com/apache/hive/blob/master/standalone-metastore/metastore-server/src/test/java/org/apache/hadoop/hive/metastore/TestHiveMetaStore.java#L3140-L3162] > tests whether the JDO persistence manager cache cleanup was performed > correctly when a HiveMetaStoreClient executes an API call, and closes. It > does this by ensuring that the cache object count before the API call, and > after closing, are the same. However, there are some assumptions that are not > always correct, and can cause flakiness. > For example, lingering resources could be present from previous tests or from > setup depending on how PTest runs it, and can cause the object count to > sometimes be incorrect. We should rewrite this test to account for this > flakiness that can occur. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-21304) Show Bucketing version for ReduceSinkOp in explain extended plan
[ https://issues.apache.org/jira/browse/HIVE-21304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16796587#comment-16796587 ] Deepak Jaiswal commented on HIVE-21304: --- [~vgarg] check out OperatorFactory.java for details. The property propagates from the parent op. The code there is intentionally agnostic to operator type. I believe there were other reasons too but I can't recall them. > Show Bucketing version for ReduceSinkOp in explain extended plan > > > Key: HIVE-21304 > URL: https://issues.apache.org/jira/browse/HIVE-21304 > Project: Hive > Issue Type: Bug >Reporter: Deepak Jaiswal >Assignee: Zoltan Haindrich >Priority: Major > Attachments: HIVE-21304.01.patch > > > Show Bucketing version for ReduceSinkOp in explain extended plan. > This helps identify what hashing algorithm is being used by ReduceSinkOp. > > cc [~vgarg] -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-21386) Extend the fetch task enhancement done in HIVE-21279 to make it work with query result cache
[ https://issues.apache.org/jira/browse/HIVE-21386?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16796594#comment-16796594 ] Hive QA commented on HIVE-21386: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 36s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 16s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 47s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 4m 13s{color} | {color:blue} ql in master has 2255 extant Findbugs warnings. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 3s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 35s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 10s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 10s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 46s{color} | {color:red} ql: The patch generated 18 new + 615 unchanged - 0 fixed = 633 total (was 615) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. 
{color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 4m 34s{color} | {color:red} ql generated 1 new + 2254 unchanged - 1 fixed = 2255 total (was 2255) {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 5s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 16s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 25m 57s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | FindBugs | module:ql | | | Dead store to fakePathExists in org.apache.hadoop.hive.ql.cache.results.QueryResultsCache.setEntryValid(QueryResultsCache$CacheEntry, FetchWork) At QueryResultsCache.java:org.apache.hadoop.hive.ql.cache.results.QueryResultsCache.setEntryValid(QueryResultsCache$CacheEntry, FetchWork) At QueryResultsCache.java:[line 551] | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-16579/dev-support/hive-personality.sh | | git revision | master / 230db04 | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-16579/yetus/diff-checkstyle-ql.txt | | findbugs | http://104.198.109.242/logs//PreCommit-HIVE-Build-16579/yetus/new-findbugs-ql.html | | modules | C: ql U: ql | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-16579/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. 
> Extend the fetch task enhancement done in HIVE-21279 to make it work with > query result cache > > > Key: HIVE-21386 > URL: https://issues.apache.org/jira/browse/HIVE-21386 > Project: Hive > Issue Type: Improvement >Reporter: Vineet Garg >Assignee: Vineet Garg >Priority: Major > Attachments: HIVE-21386.1.patch > > > The improvement done in HIVE-21279 is disabled for query cache. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-21480) Fix test TestHiveMetaStore.testJDOPersistanceManagerCleanup
[ https://issues.apache.org/jira/browse/HIVE-21480?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Morio Ramdenbourg updated HIVE-21480: - Description: [TestHiveMetaStore#testJDOPersistanceManagerCleanup|https://github.com/apache/hive/blob/master/standalone-metastore/metastore-server/src/test/java/org/apache/hadoop/hive/metastore/TestHiveMetaStore.java#L3140-L3162] tests whether the JDO persistence manager cache cleanup was performed correctly when a HiveMetaStoreClient executes an API call, and closes. It does this by ensuring that the cache object count before the API call, and after closing, are the same. However, there are some assumptions that are not always correct, and can cause flakiness. For example, lingering resources could be present from previous tests or from setup depending on how PTest runs it, and can cause the object count to sometimes be incorrect. We should rewrite this test to account for this flakiness that can occur. was: [TestRemoteHiveMetaStore#testJDOPersistanceManagerCleanup|https://github.com/apache/hive/blob/master/standalone-metastore/metastore-server/src/test/java/org/apache/hadoop/hive/metastore/TestHiveMetaStore.java#L3140-L3162] tests whether the JDO persistence manager cache cleanup was performed correctly when a HiveMetaStoreClient executes an API call, and closes. It does this by ensuring that the cache object count before the API call, and after closing, are the same. However, there are some assumptions that are not always correct, and can cause flakiness. For example, lingering resources could be present from previous tests or from setup depending on how PTest runs it, and can cause the object count to sometimes be incorrect. We should rewrite this test to account for this flakiness that can occur. 
> Fix test TestHiveMetaStore.testJDOPersistanceManagerCleanup > --- > > Key: HIVE-21480 > URL: https://issues.apache.org/jira/browse/HIVE-21480 > Project: Hive > Issue Type: Improvement > Components: Standalone Metastore >Affects Versions: 4.0.0 >Reporter: Morio Ramdenbourg >Assignee: Morio Ramdenbourg >Priority: Major > > [TestHiveMetaStore#testJDOPersistanceManagerCleanup|https://github.com/apache/hive/blob/master/standalone-metastore/metastore-server/src/test/java/org/apache/hadoop/hive/metastore/TestHiveMetaStore.java#L3140-L3162] > tests whether the JDO persistence manager cache cleanup was performed > correctly when a HiveMetaStoreClient executes an API call, and closes. It > does this by ensuring that the cache object count before the API call, and > after closing, are the same. However, there are some assumptions that are not > always correct, and can cause flakiness. > For example, lingering resources could be present from previous tests or from > setup depending on how PTest runs it, and can cause the object count to > sometimes be incorrect. We should rewrite this test to account for this > flakiness that can occur. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-21480) Fix test TestHiveMetaStore.testJDOPersistanceManagerCleanup
[ https://issues.apache.org/jira/browse/HIVE-21480?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Morio Ramdenbourg updated HIVE-21480: - Summary: Fix test TestHiveMetaStore.testJDOPersistanceManagerCleanup (was: Fix test TestRemoteHiveMetaStore.testJDOPersistanceManagerCleanup) > Fix test TestHiveMetaStore.testJDOPersistanceManagerCleanup > --- > > Key: HIVE-21480 > URL: https://issues.apache.org/jira/browse/HIVE-21480 > Project: Hive > Issue Type: Improvement > Components: Standalone Metastore >Affects Versions: 4.0.0 >Reporter: Morio Ramdenbourg >Assignee: Morio Ramdenbourg >Priority: Major > > [TestRemoteHiveMetaStore#testJDOPersistanceManagerCleanup|https://github.com/apache/hive/blob/master/standalone-metastore/metastore-server/src/test/java/org/apache/hadoop/hive/metastore/TestHiveMetaStore.java#L3140-L3162] > tests whether the JDO persistence manager cache cleanup was performed > correctly when a HiveMetaStoreClient executes an API call, and closes. It > does this by ensuring that the cache object count before the API call, and > after closing, are the same. However, there are some assumptions that are not > always correct, and can cause flakiness. > For example, lingering resources could be present from previous tests or from > setup depending on how PTest runs it, and can cause the object count to > sometimes be incorrect. We should rewrite this test to account for this > flakiness that can occur. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (HIVE-21480) Fix test TestRemoteHiveMetaStore.testJDOPersistanceManagerCleanup
[ https://issues.apache.org/jira/browse/HIVE-21480?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Morio Ramdenbourg reassigned HIVE-21480: > Fix test TestRemoteHiveMetaStore.testJDOPersistanceManagerCleanup > - > > Key: HIVE-21480 > URL: https://issues.apache.org/jira/browse/HIVE-21480 > Project: Hive > Issue Type: Improvement > Components: Standalone Metastore >Affects Versions: 4.0.0 >Reporter: Morio Ramdenbourg >Assignee: Morio Ramdenbourg >Priority: Major > > [TestRemoteHiveMetaStore#testJDOPersistanceManagerCleanup|https://github.com/apache/hive/blob/master/standalone-metastore/metastore-server/src/test/java/org/apache/hadoop/hive/metastore/TestHiveMetaStore.java#L3140-L3162] > tests whether the JDO persistence manager cache cleanup was performed > correctly when a HiveMetaStoreClient executes an API call, and closes. It > does this by ensuring that the cache object count before the API call, and > after closing, are the same. However, there are some assumptions that are not > always correct, and can cause flakiness. > For example, lingering resources could be present from previous tests or from > setup depending on how PTest runs it, and can cause the object count to > sometimes be incorrect. We should rewrite this test to account for this > flakiness that can occur. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-21034) Add option to schematool to drop Hive databases
[ https://issues.apache.org/jira/browse/HIVE-21034?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16796568#comment-16796568 ] Hive QA commented on HIVE-21034: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12962986/HIVE-21034.5.patch {color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 12 failed/errored test(s), 15836 tests executed *Failed tests:* {noformat} org.apache.hive.minikdc.TestJdbcNonKrbSASLWithMiniKdc.testCancelRenewTokenFlow (batchId=276) org.apache.hive.minikdc.TestJdbcNonKrbSASLWithMiniKdc.testConnection (batchId=276) org.apache.hive.minikdc.TestJdbcNonKrbSASLWithMiniKdc.testIsValid (batchId=276) org.apache.hive.minikdc.TestJdbcNonKrbSASLWithMiniKdc.testIsValidNeg (batchId=276) org.apache.hive.minikdc.TestJdbcNonKrbSASLWithMiniKdc.testNegativeProxyAuth (batchId=276) org.apache.hive.minikdc.TestJdbcNonKrbSASLWithMiniKdc.testNegativeTokenAuth (batchId=276) org.apache.hive.minikdc.TestJdbcNonKrbSASLWithMiniKdc.testNoKrbSASLTokenAuthNeg (batchId=276) org.apache.hive.minikdc.TestJdbcNonKrbSASLWithMiniKdc.testNonKrbSASLAuth (batchId=276) org.apache.hive.minikdc.TestJdbcNonKrbSASLWithMiniKdc.testNonKrbSASLFullNameAuth (batchId=276) org.apache.hive.minikdc.TestJdbcNonKrbSASLWithMiniKdc.testProxyAuth (batchId=276) org.apache.hive.minikdc.TestJdbcNonKrbSASLWithMiniKdc.testRenewDelegationToken (batchId=276) org.apache.hive.minikdc.TestJdbcNonKrbSASLWithMiniKdc.testTokenAuth (batchId=276) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/16578/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/16578/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-16578/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing 
org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 12 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12962986 - PreCommit-HIVE-Build > Add option to schematool to drop Hive databases > --- > > Key: HIVE-21034 > URL: https://issues.apache.org/jira/browse/HIVE-21034 > Project: Hive > Issue Type: Improvement >Reporter: Daniel Voros >Assignee: Daniel Voros >Priority: Major > Attachments: HIVE-21034.1.patch, HIVE-21034.2.patch, > HIVE-21034.2.patch, HIVE-21034.3.patch, HIVE-21034.4.patch, HIVE-21034.5.patch > > > An option to remove all Hive managed data could be a useful addition to > {{schematool}}. > I propose to introduce a new flag {{-dropAllDatabases}} that would *drop all > databases with CASCADE* to remove all data of managed tables. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-21034) Add option to schematool to drop Hive databases
[ https://issues.apache.org/jira/browse/HIVE-21034?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16796531#comment-16796531 ] Hive QA commented on HIVE-21034: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 28s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 28s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 20s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 1m 16s{color} | {color:blue} standalone-metastore/metastore-server in master has 179 extant Findbugs warnings. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 21s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 35s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 29s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 29s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 18s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 26s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 21s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 14s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 14m 48s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-16578/dev-support/hive-personality.sh | | git revision | master / 230db04 | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | modules | C: standalone-metastore/metastore-server U: standalone-metastore/metastore-server | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-16578/yetus.txt | | Powered by | Apache Yetus http://yetus.apache.org | This message was automatically generated. > Add option to schematool to drop Hive databases > --- > > Key: HIVE-21034 > URL: https://issues.apache.org/jira/browse/HIVE-21034 > Project: Hive > Issue Type: Improvement >Reporter: Daniel Voros >Assignee: Daniel Voros >Priority: Major > Attachments: HIVE-21034.1.patch, HIVE-21034.2.patch, > HIVE-21034.2.patch, HIVE-21034.3.patch, HIVE-21034.4.patch, HIVE-21034.5.patch > > > An option to remove all Hive managed data could be a useful addition to > {{schematool}}. > I propose to introduce a new flag {{-dropAllDatabases}} that would *drop all > databases with CASCADE* to remove all data of managed tables. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-21474) Bumping guava version
[ https://issues.apache.org/jira/browse/HIVE-21474?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16796510#comment-16796510 ] Hive QA commented on HIVE-21474: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12962983/HIVE-21474.patch {color:red}ERROR:{color} -1 due to no test(s) being added or modified. {color:red}ERROR:{color} -1 due to 28 failed/errored test(s), 15826 tests executed *Failed tests:* {noformat} TestAccumuloCliDriver - did not produce a TEST-*.xml file (likely timed out) (batchId=267) TestDummy - did not produce a TEST-*.xml file (likely timed out) (batchId=267) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[multi_insert_with_join2] (batchId=85) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[pcs] (batchId=54) org.apache.hive.spark.client.TestSparkClient.testAddJarsAndFiles (batchId=331) org.apache.hive.spark.client.TestSparkClient.testCounters (batchId=331) org.apache.hive.spark.client.TestSparkClient.testErrorJob (batchId=331) org.apache.hive.spark.client.TestSparkClient.testErrorJobNotSerializable (batchId=331) org.apache.hive.spark.client.TestSparkClient.testJobSubmission (batchId=331) org.apache.hive.spark.client.TestSparkClient.testMetricsCollection (batchId=331) org.apache.hive.spark.client.TestSparkClient.testSimpleSparkJob (batchId=331) org.apache.hive.spark.client.TestSparkClient.testSyncRpc (batchId=331) org.apache.hive.spark.client.rpc.TestKryoMessageCodec.testAutoRegistration (batchId=331) org.apache.hive.spark.client.rpc.TestKryoMessageCodec.testDecryptionOnly (batchId=331) org.apache.hive.spark.client.rpc.TestKryoMessageCodec.testEmbeddedChannel (batchId=331) org.apache.hive.spark.client.rpc.TestKryoMessageCodec.testEncryptDecrypt (batchId=331) org.apache.hive.spark.client.rpc.TestKryoMessageCodec.testEncryptionOnly (batchId=331) org.apache.hive.spark.client.rpc.TestKryoMessageCodec.testFragmentation (batchId=331) 
org.apache.hive.spark.client.rpc.TestKryoMessageCodec.testKryoCodec (batchId=331) org.apache.hive.spark.client.rpc.TestKryoMessageCodec.testMaxMessageSize (batchId=331) org.apache.hive.spark.client.rpc.TestKryoMessageCodec.testNegativeMessageSize (batchId=331) org.apache.hive.spark.client.rpc.TestRpc.testBadHello (batchId=331) org.apache.hive.spark.client.rpc.TestRpc.testClientServer (batchId=331) org.apache.hive.spark.client.rpc.TestRpc.testCloseListener (batchId=331) org.apache.hive.spark.client.rpc.TestRpc.testEncryption (batchId=331) org.apache.hive.spark.client.rpc.TestRpc.testNotDeserializableRpc (batchId=331) org.apache.hive.spark.client.rpc.TestRpc.testRpcDispatcher (batchId=331) org.apache.hive.spark.client.rpc.TestRpc.testRpcServerMultiThread (batchId=331) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/16577/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/16577/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-16577/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 28 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12962983 - PreCommit-HIVE-Build > Bumping guava version > - > > Key: HIVE-21474 > URL: https://issues.apache.org/jira/browse/HIVE-21474 > Project: Hive > Issue Type: Task >Reporter: Peter Vary >Assignee: Peter Vary >Priority: Major > Attachments: HIVE-21474.patch > > > Bump guava to 24.1.1 -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-21474) Bumping guava version
[ https://issues.apache.org/jira/browse/HIVE-21474?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16796506#comment-16796506 ] Hive QA commented on HIVE-21474: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 1s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 45s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 38s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 58s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 52s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 4m 14s{color} | {color:blue} ql in master has 2255 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 32s{color} | {color:blue} druid-handler in master has 3 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 8m 55s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 33s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 9m 37s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 2s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 8m 2s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 52s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 4m 31s{color} | {color:red} ql generated 3 new + 2244 unchanged - 11 fixed = 2247 total (was 2255) {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 8m 42s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 13s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 69m 53s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | FindBugs | module:ql | | | Null passed for non-null parameter of addJars(String) in org.apache.hadoop.hive.ql.exec.spark.LocalHiveSparkClient.refreshLocalResources(SparkWork, HiveConf) Method invoked at LocalHiveSparkClient.java:of addJars(String) in org.apache.hadoop.hive.ql.exec.spark.LocalHiveSparkClient.refreshLocalResources(SparkWork, HiveConf) Method invoked at LocalHiveSparkClient.java:[line 195] | | | Null passed for non-null parameter of addJars(String) in org.apache.hadoop.hive.ql.exec.spark.RemoteHiveSparkClient.refreshLocalResources(SparkWork, HiveConf) Method invoked at RemoteHiveSparkClient.java:of addJars(String) in org.apache.hadoop.hive.ql.exec.spark.RemoteHiveSparkClient.refreshLocalResources(SparkWork, HiveConf) Method invoked at RemoteHiveSparkClient.java:[line 238] | | | Null passed for non-null parameter of com.google.common.util.concurrent.SettableFuture.set(Object) in org.apache.hadoop.hive.ql.exec.tez.WorkloadManager.processCurrentEvents(WorkloadManager$EventState, WorkloadManager$WmThreadSyncWork) At WorkloadManager.java:of com.google.common.util.concurrent.SettableFuture.set(Object) in org.apache.hadoop.hive.ql.exec.tez.WorkloadManager.processCurrentEvents(WorkloadManager$EventState, WorkloadManager$WmThreadSyncWork) At WorkloadManager.java:[line 733] | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile xml | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-16577/dev-support/hive-personality.sh | | git
[jira] [Commented] (HIVE-21473) Bumping jackson version
[ https://issues.apache.org/jira/browse/HIVE-21473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16796451#comment-16796451 ] Hive QA commented on HIVE-21473: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12962977/HIVE-21473.patch {color:red}ERROR:{color} -1 due to no test(s) being added or modified. {color:green}SUCCESS:{color} +1 due to 15832 tests passed Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/16576/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/16576/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-16576/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase {noformat} This message is automatically generated. ATTACHMENT ID: 12962977 - PreCommit-HIVE-Build > Bumping jackson version > --- > > Key: HIVE-21473 > URL: https://issues.apache.org/jira/browse/HIVE-21473 > Project: Hive > Issue Type: Task >Reporter: Peter Vary >Assignee: Peter Vary >Priority: Major > Attachments: HIVE-21473.patch > > > Bump jackson version to 2.9.8 -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-21473) Bumping jackson version
[ https://issues.apache.org/jira/browse/HIVE-21473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16796430#comment-16796430 ] Hive QA commented on HIVE-21473: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 57s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 22s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 7m 6s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 19s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 10s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 10s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 2s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 7m 12s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 13s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 43m 43s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc xml compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-16576/dev-support/hive-personality.sh | | git revision | master / 230db04 | | Default Java | 1.8.0_111 | | modules | C: . U: . | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-16576/yetus.txt | | Powered by | Apache Yetus http://yetus.apache.org | This message was automatically generated. > Bumping jackson version > --- > > Key: HIVE-21473 > URL: https://issues.apache.org/jira/browse/HIVE-21473 > Project: Hive > Issue Type: Task >Reporter: Peter Vary >Assignee: Peter Vary >Priority: Major > Attachments: HIVE-21473.patch > > > Bump jackson version to 2.9.8 -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-21283) Create Synonym mid for substr, position for locate
[ https://issues.apache.org/jira/browse/HIVE-21283?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16796371#comment-16796371 ] Hive QA commented on HIVE-21283: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12962958/HIVE.21283.09.PATCH {color:green}SUCCESS:{color} +1 due to 2 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 15834 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vector_groupby_reduce] (batchId=61) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/16575/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/16575/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-16575/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 1 tests failed {noformat} This message is automatically generated. 
ATTACHMENT ID: 12962958 - PreCommit-HIVE-Build > Create Synonym mid for substr, position for locate > > > Key: HIVE-21283 > URL: https://issues.apache.org/jira/browse/HIVE-21283 > Project: Hive > Issue Type: New Feature >Reporter: Mani M >Assignee: Mani M >Priority: Minor > Labels: UDF, pull-request-available, todoc4.0 > Fix For: 4.0.0 > > Attachments: HIVE.21283.03.PATCH, HIVE.21283.04.PATCH, > HIVE.21283.05.PATCH, HIVE.21283.06.PATCH, HIVE.21283.07.PATCH, > HIVE.21283.08.PATCH, HIVE.21283.09.PATCH, HIVE.21283.2.PATCH, > HIVE.21283.PATCH, image-2019-03-16-21-31-15-541.png, > image-2019-03-16-21-33-18-898.png > > Time Spent: 2.5h > Remaining Estimate: 0h > > Create new synonyms for the existing functions > > Mid for substr > position for locate -- This message was sent by Atlassian JIRA (v7.6.3#76005)
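Since the ticket proposes pure synonyms, the new functions reduce to thin delegations. A sketch of the intended semantics in plain Java (1-based SQL positions; negative-position and multi-argument forms of the real Hive UDFs are deliberately omitted, so treat this as assumed behavior, not the GenericUDF implementation):

```java
public class Main {
    // SQL-style SUBSTR with 1-based positions; MID is a synonym, so it delegates.
    static String substr(String s, int pos, int len) {
        int start = pos - 1;
        if (start < 0 || start >= s.length()) {
            return "";
        }
        return s.substring(start, Math.min(start + len, s.length()));
    }

    static String mid(String s, int pos, int len) {
        return substr(s, pos, len); // synonym: identical behavior
    }

    // LOCATE returns the 1-based position of needle, 0 if absent; POSITION delegates.
    static int locate(String needle, String haystack) {
        return haystack.indexOf(needle) + 1;
    }

    static int position(String needle, String haystack) {
        return locate(needle, haystack); // synonym: identical behavior
    }

    public static void main(String[] args) {
        System.out.println(mid("facebook", 5, 4));        // prints "book"
        System.out.println(position("book", "facebook")); // prints "5"
    }
}
```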
[jira] [Updated] (HIVE-17404) Orc split generation cache does not handle files without file tail
[ https://issues.apache.org/jira/browse/HIVE-17404?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Prasanth Jayachandran updated HIVE-17404: - Attachment: HIVE-17404.2.patch > Orc split generation cache does not handle files without file tail > -- > > Key: HIVE-17404 > URL: https://issues.apache.org/jira/browse/HIVE-17404 > Project: Hive > Issue Type: Bug >Affects Versions: 3.0.0, 2.4.0 >Reporter: Prasanth Jayachandran >Assignee: Aditya Shah >Priority: Critical > Attachments: HIVE-17404.2.patch, HIVE-17404.patch > > > Some old files do not have Orc FileTail. If file tail does not exist, split > generation should fallback to old way of storing footers. > This can result in exceptions like below > {code} > ORC split generation failed with exception: Malformed ORC file. Invalid > postscript length 9 > at > org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.generateSplitsInfo(OrcInputFormat.java:1735) > at > org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.getSplits(OrcInputFormat.java:1822) > at > org.apache.hadoop.hive.ql.io.HiveInputFormat.addSplitsForGroup(HiveInputFormat.java:450) > at > org.apache.hadoop.hive.ql.io.HiveInputFormat.getSplits(HiveInputFormat.java:569) > at > org.apache.hadoop.hive.ql.exec.tez.HiveSplitGenerator.initialize(HiveSplitGenerator.java:196) > at > org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:278) > at > org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable$1.run(RootInputInitializerManager.java:269) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:422) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1807) > at > org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:269) > at > 
org.apache.tez.dag.app.dag.RootInputInitializerManager$InputInitializerCallable.call(RootInputInitializerManager.java:253) > at java.util.concurrent.FutureTask.run(FutureTask.java:266) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) > at java.lang.Thread.run(Thread.java:745) > Caused by: org.apache.orc.FileFormatException: Malformed ORC file. Invalid > postscript length 9 > at org.apache.orc.impl.ReaderImpl.ensureOrcFooter(ReaderImpl.java:297) > at org.apache.orc.impl.ReaderImpl.extractFileTail(ReaderImpl.java:470) > at > org.apache.hadoop.hive.ql.io.orc.LocalCache.getAndValidate(LocalCache.java:103) > at > org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$ETLSplitStrategy.getSplits(OrcInputFormat.java:804) > at > org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$ETLSplitStrategy.runGetSplitsSync(OrcInputFormat.java:922) > at > org.apache.hadoop.hive.ql.io.orc.OrcInputFormat$ETLSplitStrategy.generateSplitWork(OrcInputFormat.java:891) > at > org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.scheduleSplits(OrcInputFormat.java:1763) > at > org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.generateSplitsInfo(OrcInputFormat.java:1707) > ... 15 more > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
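The fallback described above (if the file has no ORC FileTail, revert to the old footer-reading path instead of failing split generation) follows a simple try-then-fallback shape. A standalone sketch with hypothetical helper names, not the actual OrcInputFormat/LocalCache code:

```java
public class Main {
    // Try the modern file-tail path first; a pre-FileTail file raises the
    // "Malformed ORC file" condition and we fall back to the legacy footer read.
    static String readFooter(byte[] file) {
        try {
            return extractFileTail(file);  // fast path for files with a FileTail
        } catch (IllegalStateException malformed) {
            return readLegacyFooter(file); // fallback for old files without one
        }
    }

    // Toy stand-in: pretend a trailing 'T' byte marks a valid file tail.
    static String extractFileTail(byte[] file) {
        if (file.length == 0 || file[file.length - 1] != 'T') {
            throw new IllegalStateException("Malformed ORC file: no file tail");
        }
        return "tail";
    }

    static String readLegacyFooter(byte[] file) {
        return "legacy";
    }

    public static void main(String[] args) {
        System.out.println(readFooter(new byte[] {'x', 'T'})); // prints "tail"
        System.out.println(readFooter(new byte[] {'x', 'y'})); // prints "legacy"
    }
}
```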
[jira] [Commented] (HIVE-21460) ACID: Load data followed by a select * query results in incorrect results
[ https://issues.apache.org/jira/browse/HIVE-21460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16796350#comment-16796350 ] Gopal V commented on HIVE-21460: LGTM - +1 (backport to 3.x branch too) > ACID: Load data followed by a select * query results in incorrect results > - > > Key: HIVE-21460 > URL: https://issues.apache.org/jira/browse/HIVE-21460 > Project: Hive > Issue Type: Bug > Components: Transactions >Affects Versions: 4.0.0, 3.1.1 >Reporter: Vaibhav Gumashta >Assignee: Vaibhav Gumashta >Priority: Blocker > Attachments: HIVE-21460.1.patch > > > This affects current master as well. Created an orc file such that it spans > multiple stripes and ran a simple select *, and got incorrect row counts > (when comparing with select count(*). The problem seems to be that after > split generation and creating min/max rowId for each row (note that since the > loaded file is not written by Hive ACID, it does not have ROW__ID in the > file; but the ROW__ID is applied on read by discovering min/max bounds which > are used for calculating ROW__ID.rowId for each row of a split), Hive is only > reading the last split. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
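The rowId synthesis described in the report can be sketched as a running offset: a file loaded via LOAD DATA has no persisted ROW__ID, so each split's synthetic rowIds start where the previous split's rows ended. Under those assumed semantics, a reader that honours only the last split's range drops every earlier split's rows, which matches the incorrect-count symptom:

```java
import java.util.Arrays;

public class Main {
    // First synthetic rowId of each split = total rows in all earlier splits.
    static long[] firstRowIdPerSplit(long[] rowsPerSplit) {
        long[] first = new long[rowsPerSplit.length];
        long running = 0;
        for (int i = 0; i < rowsPerSplit.length; i++) {
            first[i] = running;        // split i starts where split i-1 ended
            running += rowsPerSplit[i];
        }
        return first;
    }

    public static void main(String[] args) {
        // three splits of 100, 50 and 25 rows -> offsets 0, 100, 150
        System.out.println(Arrays.toString(firstRowIdPerSplit(new long[] {100, 50, 25})));
    }
}
```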
[jira] [Commented] (HIVE-21283) Create Synonym mid for substr, position for locate
[ https://issues.apache.org/jira/browse/HIVE-21283?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16796313#comment-16796313 ] Hive QA commented on HIVE-21283: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 9m 1s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 13s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 42s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 4m 15s{color} | {color:blue} ql in master has 2255 extant Findbugs warnings. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 4s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 35s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 16s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 16s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 42s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 29s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 0s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 16s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 26m 15s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-16575/dev-support/hive-personality.sh | | git revision | master / 230db04 | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | modules | C: ql U: ql | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-16575/yetus.txt | | Powered by | Apache Yetus http://yetus.apache.org | This message was automatically generated. 
> Create Synonym mid for substr, position for locate > > > Key: HIVE-21283 > URL: https://issues.apache.org/jira/browse/HIVE-21283 > Project: Hive > Issue Type: New Feature >Reporter: Mani M >Assignee: Mani M >Priority: Minor > Labels: UDF, pull-request-available, todoc4.0 > Fix For: 4.0.0 > > Attachments: HIVE.21283.03.PATCH, HIVE.21283.04.PATCH, > HIVE.21283.05.PATCH, HIVE.21283.06.PATCH, HIVE.21283.07.PATCH, > HIVE.21283.08.PATCH, HIVE.21283.09.PATCH, HIVE.21283.2.PATCH, > HIVE.21283.PATCH, image-2019-03-16-21-31-15-541.png, > image-2019-03-16-21-33-18-898.png > > Time Spent: 2.5h > Remaining Estimate: 0h > > Create new synonyms for the existing functions > > Mid for substr > position for locate -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-21479) NPE during metastore cache update
[ https://issues.apache.org/jira/browse/HIVE-21479?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Daniel Dai updated HIVE-21479: -- Attachment: HIVE-21479.1.patch > NPE during metastore cache update > - > > Key: HIVE-21479 > URL: https://issues.apache.org/jira/browse/HIVE-21479 > Project: Hive > Issue Type: Bug >Reporter: Daniel Dai >Assignee: Daniel Dai >Priority: Major > Attachments: HIVE-21479.1.patch > > > Saw the following stack during a long periodical update: > {code} > 2019-03-12T10:01:43,015 ERROR [CachedStore-CacheUpdateService: Thread-36] > cache.CachedStore: Update failure:java.lang.NullPointerException > at > org.apache.hadoop.hive.metastore.cache.CachedStore$CacheUpdateMasterWork.updateTableColStats(CachedStore.java:508) > at > org.apache.hadoop.hive.metastore.cache.CachedStore$CacheUpdateMasterWork.update(CachedStore.java:461) > at > org.apache.hadoop.hive.metastore.cache.CachedStore$CacheUpdateMasterWork.run(CachedStore.java:396) > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) > at > java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) > at > java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) > at java.lang.Thread.run(Thread.java:748) > {code} > The reason is we get the table list at very early stage and then refresh > table one by one. It is likely table is removed during the interim. We need > to deal with this case during cache update. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-21479) NPE during metastore cache update
[ https://issues.apache.org/jira/browse/HIVE-21479?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Daniel Dai updated HIVE-21479: -- Status: Patch Available (was: Open) > NPE during metastore cache update > - > > Key: HIVE-21479 > URL: https://issues.apache.org/jira/browse/HIVE-21479 > Project: Hive > Issue Type: Bug >Reporter: Daniel Dai >Assignee: Daniel Dai >Priority: Major > Attachments: HIVE-21479.1.patch > > > Saw the following stack during a long periodical update: > {code} > 2019-03-12T10:01:43,015 ERROR [CachedStore-CacheUpdateService: Thread-36] > cache.CachedStore: Update failure:java.lang.NullPointerException > at > org.apache.hadoop.hive.metastore.cache.CachedStore$CacheUpdateMasterWork.updateTableColStats(CachedStore.java:508) > at > org.apache.hadoop.hive.metastore.cache.CachedStore$CacheUpdateMasterWork.update(CachedStore.java:461) > at > org.apache.hadoop.hive.metastore.cache.CachedStore$CacheUpdateMasterWork.run(CachedStore.java:396) > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) > at > java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) > at > java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) > at java.lang.Thread.run(Thread.java:748) > {code} > The reason is we get the table list at very early stage and then refresh > table one by one. It is likely table is removed during the interim. We need > to deal with this case during cache update. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (HIVE-21479) NPE during metastore cache update
[ https://issues.apache.org/jira/browse/HIVE-21479?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Daniel Dai reassigned HIVE-21479: -
[jira] [Assigned] (HIVE-21478) Metastore cache update shall capture exception
[ https://issues.apache.org/jira/browse/HIVE-21478?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Daniel Dai reassigned HIVE-21478: - > Metastore cache update shall capture exception > -- > > Key: HIVE-21478 > URL: https://issues.apache.org/jira/browse/HIVE-21478 > Project: Hive > Issue Type: Bug > Components: Standalone Metastore >Reporter: Daniel Dai >Assignee: Daniel Dai >Priority: Major > Attachments: HIVE-21478.1.patch > > > We need to capture any exception thrown during > CacheUpdateMasterWork.update(); otherwise the scheduled executor will suppress > all future executions of update().
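The behavior the ticket describes is the documented contract of java.util.concurrent.ScheduledExecutorService: if any execution of a periodic task throws, subsequent executions are suppressed. A small, self-contained sketch of the catch-all wrapper the ticket calls for follows; GuardedTask is a hypothetical name, not the class in the actual patch.

```java
public final class GuardedTask implements Runnable {
    private final Runnable delegate;
    private volatile Throwable lastFailure; // exposed for monitoring/logging

    public GuardedTask(Runnable delegate) {
        this.delegate = delegate;
    }

    @Override
    public void run() {
        try {
            delegate.run();
        } catch (Throwable t) {
            // Swallow the failure so a ScheduledExecutorService keeps
            // scheduling this task; an escaping exception would cancel
            // all future periodic runs.
            lastFailure = t;
        }
    }

    public Throwable lastFailure() {
        return lastFailure;
    }
}
```

Usage: schedule `new GuardedTask(work)` instead of `work` itself with `scheduleAtFixedRate`; a failure in one cycle is recorded but does not stop the next cycle.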
[jira] [Updated] (HIVE-21478) Metastore cache update shall capture exception
[ https://issues.apache.org/jira/browse/HIVE-21478?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Daniel Dai updated HIVE-21478: -- Attachment: HIVE-21478.1.patch
[jira] [Updated] (HIVE-21478) Metastore cache update shall capture exception
[ https://issues.apache.org/jira/browse/HIVE-21478?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Daniel Dai updated HIVE-21478: -- Status: Patch Available (was: Open)
[jira] [Updated] (HIVE-21455) Too verbose logging in AvroGenericRecordReader
[ https://issues.apache.org/jira/browse/HIVE-21455?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] David Mollitor updated HIVE-21455: -- Status: Patch Available (was: Open) > Too verbose logging in AvroGenericRecordReader > -- > > Key: HIVE-21455 > URL: https://issues.apache.org/jira/browse/HIVE-21455 > Project: Hive > Issue Type: Improvement > Components: HiveServer2 >Affects Versions: 3.1.0, 3.0.0, 2.1.0, 2.0.0, 1.1.0, 1.2.0 >Reporter: Miklos Szurap >Assignee: David Mollitor >Priority: Minor > Attachments: HIVE-21455.2.patch, HIVE-21455.patch > > > {{AvroGenericRecordReader}} logs the Avro schema for each datafile. This is too > verbose; the schema likely does not need to be logged at INFO level. > For example, given a table: > {noformat} > create table avro_tbl (c1 string, c2 int, c3 float) stored as avro; > {noformat} > querying it with a select star (3 datafiles) makes HiveServer2 log the > following: > {noformat} > 2019-03-15 09:18:35,999 INFO org.apache.hadoop.mapred.FileInputFormat: > [HiveServer2-Handler-Pool: Thread-64]: Total input paths to process : 3 > 2019-03-15 09:18:35,999 INFO > org.apache.hadoop.hive.ql.io.avro.AvroGenericRecordReader: > [HiveServer2-Handler-Pool: Thread-64]: Found the avro schema in the job: > {"type":"record","name":"avro_tbl","namespace":"test","fields":[{"name":"c1","type":["null","string"],"default":null},{"name":"c2","type":["null","int"],"default":null},{"name":"c3","type":["null","float"],"default":null}]} > 2019-03-15 09:18:36,004 INFO > org.apache.hadoop.hive.ql.io.avro.AvroGenericRecordReader: > [HiveServer2-Handler-Pool: Thread-64]: Found the avro schema in the job: > {"type":"record","name":"avro_tbl","namespace":"test","fields":[{"name":"c1","type":["null","string"],"default":null},{"name":"c2","type":["null","int"],"default":null},{"name":"c3","type":["null","float"],"default":null}]} > 2019-03-15 09:18:36,010 INFO > org.apache.hadoop.hive.ql.io.avro.AvroGenericRecordReader: > [HiveServer2-Handler-Pool: 
Thread-64]: Found the avro schema in the job: > {"type":"record","name":"avro_tbl","namespace":"test","fields":[{"name":"c1","type":["null","string"],"default":null},{"name":"c2","type":["null","int"],"default":null},{"name":"c3","type":["null","float"],"default":null}]} > {noformat} > This has a huge performance and storage penalty on tables with a big schema > and thousands of datafiles.
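One plausible remedy for the log excerpt above is to demote the per-file schema dump below INFO and guard it behind a level check. Hive itself logs through SLF4J/Log4j; java.util.logging is used below only to keep the sketch self-contained, and the method name is illustrative rather than the actual patch.

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class AvroLogSketch {
    private static final Logger LOG = Logger.getLogger("AvroGenericRecordReader");

    /**
     * Logs the per-file Avro schema at a debug-level severity instead of INFO.
     * The level guard avoids building the (potentially huge) message string
     * when debug logging is disabled.
     */
    static void reportSchema(String schemaJson) {
        if (LOG.isLoggable(Level.FINE)) {
            LOG.fine("Found the avro schema in the job: " + schemaJson);
        }
    }
}
```

With the logger at its usual INFO level, thousands of datafiles then produce zero schema lines; operators who need the schema can enable debug logging for just this class.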
[jira] [Updated] (HIVE-21455) Too verbose logging in AvroGenericRecordReader
[ https://issues.apache.org/jira/browse/HIVE-21455?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] David Mollitor updated HIVE-21455: -- Attachment: HIVE-21455.2.patch
[jira] [Assigned] (HIVE-21455) Too verbose logging in AvroGenericRecordReader
[ https://issues.apache.org/jira/browse/HIVE-21455?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] David Mollitor reassigned HIVE-21455: - Assignee: Miklos Szurap (was: David Mollitor)
[jira] [Updated] (HIVE-21455) Too verbose logging in AvroGenericRecordReader
[ https://issues.apache.org/jira/browse/HIVE-21455?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] David Mollitor updated HIVE-21455: -- Status: Open (was: Patch Available)
[jira] [Assigned] (HIVE-21455) Too verbose logging in AvroGenericRecordReader
[ https://issues.apache.org/jira/browse/HIVE-21455?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] David Mollitor reassigned HIVE-21455: - Assignee: David Mollitor (was: Miklos Szurap)
[jira] [Commented] (HIVE-21422) Add metrics to LRFU cache policy
[ https://issues.apache.org/jira/browse/HIVE-21422?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16796256#comment-16796256 ] Oliver Draese commented on HIVE-21422: -- Added a patch version (2) which removes all checkstyle warnings. > Add metrics to LRFU cache policy > > > Key: HIVE-21422 > URL: https://issues.apache.org/jira/browse/HIVE-21422 > Project: Hive > Issue Type: Improvement > Components: llap >Affects Versions: 4.0.0 >Reporter: Oliver Draese >Assignee: Oliver Draese >Priority: Major > Labels: llap > Fix For: 4.0.0 > > Attachments: HIVE-21422.1.patch, HIVE-21422.2.patch, HIVE-21422.patch > > > The LRFU cache policy for the LLAP data cache doesn't provide enough insight > to figure out what is cached and why something might get evicted. This > ticket is used to add Hadoop Metrics2 information (accessible via JMX) to > the LRFU policy, providing the following information: > * How much memory is cached for data buffers > * How much memory is cached for meta data buffers > * How large is the min-heap of the cache policy > * How long is the eviction short list (linked list) > * How much memory is currently "locked" (buffers with positive reference > count) and therefore in use by a query > These new counters are found in the MX bean, following this path: > Hadoop/LlapDaemon/LowLevelLrfuCachePolicy- >
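Hive wires these counters through Hadoop Metrics2, which is not reproduced here. The following self-contained sketch shows the equivalent JMX plumbing with a plain standard MBean; the object name and attribute names are illustrative, mirroring the Hadoop/LlapDaemon path from the ticket, and are not the exact bean Hive registers.

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class LrfuMetricsSketch {
    // Standard MBean convention: attribute names are derived from the getters.
    public interface LrfuPolicyMetricsMBean {
        long getDataMemoryBytes();
        long getMetadataMemoryBytes();
        int getHeapSize();
        int getEvictionShortListLength();
        long getLockedMemoryBytes();
    }

    public static class LrfuPolicyMetrics implements LrfuPolicyMetricsMBean {
        public volatile long dataBytes, metaBytes, lockedBytes;
        public volatile int heapSize, shortListLen;
        public long getDataMemoryBytes() { return dataBytes; }
        public long getMetadataMemoryBytes() { return metaBytes; }
        public int getHeapSize() { return heapSize; }
        public int getEvictionShortListLength() { return shortListLen; }
        public long getLockedMemoryBytes() { return lockedBytes; }
    }

    /** Registers the metrics bean on the platform MBean server (illustrative name). */
    public static ObjectName register(LrfuPolicyMetrics metrics) {
        try {
            MBeanServer server = ManagementFactory.getPlatformMBeanServer();
            ObjectName name =
                new ObjectName("Hadoop:service=LlapDaemon,name=LowLevelLrfuCachePolicy");
            server.registerMBean(metrics, name);
            return name;
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    /** Reads a long attribute back from JMX, as jconsole or a collector would. */
    public static long readLong(ObjectName name, String attribute) {
        try {
            return (Long) ManagementFactory.getPlatformMBeanServer()
                .getAttribute(name, attribute);
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}
```

The cache policy would update the volatile fields on insert/evict/lock events; any JMX client then reads them without touching the cache's hot path.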
[jira] [Commented] (HIVE-21304) Show Bucketing version for ReduceSinkOp in explain extended plan
[ https://issues.apache.org/jira/browse/HIVE-21304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16796269#comment-16796269 ] Hive QA commented on HIVE-21304: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12962955/HIVE-21304.01.patch {color:red}ERROR:{color} -1 due to no test(s) being added or modified. {color:red}ERROR:{color} -1 due to 241 failed/errored test(s), 15833 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestBeeLineDriver.testCliDriver[smb_mapjoin_11] (batchId=275) org.apache.hadoop.hive.cli.TestBeeLineDriver.testCliDriver[smb_mapjoin_12] (batchId=275) org.apache.hadoop.hive.cli.TestBeeLineDriver.testCliDriver[smb_mapjoin_13] (batchId=275) org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[insert_into_dynamic_partitions] (batchId=278) org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[insert_into_table] (batchId=278) org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[insert_overwrite_dynamic_partitions] (batchId=278) org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[insert_overwrite_table] (batchId=278) org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[write_final_output_blobstore] (batchId=278) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[acid_nullscan] (batchId=72) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[acid_table_stats] (batchId=57) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[autoColumnStats_4] (batchId=13) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[autoColumnStats_5a] (batchId=58) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[autoColumnStats_8] (batchId=15) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_join_reordering_values] (batchId=6) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[binary_output_format] (batchId=95) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[bucket1] (batchId=45) 
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[bucket2] (batchId=55) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[bucket3] (batchId=17) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[bucket_map_join_1] (batchId=72) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[bucket_map_join_2] (batchId=63) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[bucket_map_join_spark1] (batchId=74) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[bucket_map_join_spark2] (batchId=3) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[bucket_map_join_spark3] (batchId=48) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[bucketcontext_1] (batchId=35) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[bucketcontext_2] (batchId=71) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[bucketcontext_3] (batchId=73) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[bucketcontext_4] (batchId=44) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[bucketcontext_5] (batchId=25) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[bucketcontext_6] (batchId=91) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[bucketcontext_7] (batchId=41) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[bucketcontext_8] (batchId=40) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[bucketmapjoin10] (batchId=56) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[bucketmapjoin11] (batchId=77) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[bucketmapjoin12] (batchId=38) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[bucketmapjoin13] (batchId=43) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[bucketmapjoin5] (batchId=92) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[bucketmapjoin8] (batchId=14) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[bucketmapjoin9] (batchId=18) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[bucketmapjoin_negative2] (batchId=75) 
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[bucketmapjoin_negative] (batchId=25) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[cbo_rp_outer_join_ppr] (batchId=8) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[columnstats_partlvl] (batchId=38) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[columnstats_tbllvl] (batchId=9) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[comments] (batchId=40) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[constantPropagateForSubQuery] (batchId=68) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[cp_sel] (batchId=67) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[display_colstats_tbllvl] (batchId=4) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[filter_aggr] (batchId=90) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[filter_join_breaktask] (batchId=81) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[filter_union]
[jira] [Updated] (HIVE-21466) Increase Default Size of SPLIT_MAXSIZE
[ https://issues.apache.org/jira/browse/HIVE-21466?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] David Mollitor updated HIVE-21466: -- Status: Patch Available (was: Open) > Increase Default Size of SPLIT_MAXSIZE > -- > > Key: HIVE-21466 > URL: https://issues.apache.org/jira/browse/HIVE-21466 > Project: Hive > Issue Type: Improvement > Components: Configuration >Affects Versions: 4.0.0, 3.2.0 >Reporter: David Mollitor >Assignee: David Mollitor >Priority: Minor > Attachments: HIVE-21466.1.patch, HIVE-21466.2.patch > > > {code:java} > MAPREDMAXSPLITSIZE(FileInputFormat.SPLIT_MAXSIZE, 256000000L, "", true), > {code} > [https://github.com/apache/hive/blob/8d4300a02691777fc96f33861ed27e64fed72f2c/common/src/java/org/apache/hadoop/hive/conf/HiveConf.java#L682] > This field specifies a maximum size for each MR (and possibly other) split. > This number should be a multiple of the HDFS block size. The way this > maximum is implemented is that each block is added to the split, and if the > split grows to be larger than the maximum allowed, the split is submitted to > the cluster and a new split is opened. > So, imagine the following scenario: > * HDFS block size of 16 bytes > * Maximum size of 40 bytes > This will produce a split with 3 blocks: (2x16) = 32 bytes is still under the > maximum, so another block is inserted, giving (3x16) = 48 bytes in the split. > So, while many operators would assume a split of 2 blocks, the actual split is > 3 blocks. Setting the maximum split > size to a multiple of the HDFS block size will make this behavior less > confusing. > The current setting is ~256MB and when this was introduced, the default HDFS > block size was 64MB. That is a factor of 4x. However, now HDFS block sizes > are 128MB by default, so I propose setting this to 4x128MB. The larger > splits (fewer tasks) should give a nice performance boost for modern hardware.
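The packing rule described above is easy to model. The sketch below is a toy simulation of one reading of that rule (close a split once it reaches the maximum), not the actual Hadoop FileInputFormat algorithm, which also applies a slop factor; it reproduces the 16-byte/40-byte example and shows that a maximum that is a multiple of the block size yields exact splits.

```java
public class SplitPackingSketch {
    /**
     * Simulates packing equally sized blocks into one split: blocks are
     * appended while the split is still below maxSplitSize, so a
     * non-multiple maximum overshoots by a partial block's worth.
     */
    static int blocksPerSplit(long blockSize, long maxSplitSize) {
        int blocks = 0;
        long size = 0;
        while (size < maxSplitSize) {
            size += blockSize; // append the next block
            blocks++;
        }
        return blocks;
    }
}
```

With 16-byte blocks and a 40-byte maximum this yields 3 blocks (48 bytes, overshooting the limit); with a 64-byte maximum it yields exactly 4 blocks (64 bytes), which is the "less confusing" behavior the ticket argues for.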
[jira] [Updated] (HIVE-21466) Increase Default Size of SPLIT_MAXSIZE
[ https://issues.apache.org/jira/browse/HIVE-21466?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] David Mollitor updated HIVE-21466: -- Status: Open (was: Patch Available)
[jira] [Updated] (HIVE-21466) Increase Default Size of SPLIT_MAXSIZE
[ https://issues.apache.org/jira/browse/HIVE-21466?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] David Mollitor updated HIVE-21466: -- Attachment: (was: HIVE-21466.1.patch)
[jira] [Updated] (HIVE-21466) Increase Default Size of SPLIT_MAXSIZE
[ https://issues.apache.org/jira/browse/HIVE-21466?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] David Mollitor updated HIVE-21466: -- Attachment: HIVE-21466.2.patch
[jira] [Updated] (HIVE-21422) Add metrics to LRFU cache policy
[ https://issues.apache.org/jira/browse/HIVE-21422?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Oliver Draese updated HIVE-21422: - Attachment: HIVE-21422.2.patch
[jira] [Updated] (HIVE-21469) Review of ZooKeeperHiveLockManager
[ https://issues.apache.org/jira/browse/HIVE-21469?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] David Mollitor updated HIVE-21469: -- Status: Patch Available (was: Open) > Review of ZooKeeperHiveLockManager > -- > > Key: HIVE-21469 > URL: https://issues.apache.org/jira/browse/HIVE-21469 > Project: Hive > Issue Type: Improvement > Components: Locking >Affects Versions: 4.0.0, 3.2.0 >Reporter: David Mollitor >Assignee: David Mollitor >Priority: Major > Attachments: HIVE-21469.1.patch, HIVE-21469.2.patch > > > A lot of sins in this class to resolve: > {code:java} > @Override > public void setContext(HiveLockManagerCtx ctx) throws LockException { > try { > curatorFramework = CuratorFrameworkSingleton.getInstance(conf); > parent = conf.getVar(HiveConf.ConfVars.HIVE_ZOOKEEPER_NAMESPACE); > try { > curatorFramework.create().withMode(CreateMode.PERSISTENT).forPath("/" > + parent, new byte[0]); > } catch (Exception e) { > // ignore if the parent already exists > if (!(e instanceof KeeperException) || ((KeeperException)e).code() != > KeeperException.Code.NODEEXISTS) { > LOG.warn("Unexpected ZK exception when creating parent node /" + > parent, e); > } > } > {code} > Every time a new session is created and this {{setContext}} method is called, > it attempts to create the root node. I have seen that, even though the root > node exists, a create node action is written to the ZK logs. Check first if > the node exists before trying to create it. > {code:java} > try { > curatorFramework.delete().forPath(zLock.getPath()); > } catch (InterruptedException ie) { > curatorFramework.delete().forPath(zLock.getPath()); > } > {code} > There have historically been quite a few bugs regarding leaked locks. The > Driver will signal the session {{Thread}} by performing an interrupt. That > interrupt can happen at any time and it can kill a create/delete action within > the ZK framework. We can see one example of a workaround for this. If the ZK > action is interrupted, simply do it again. Well, what if it's interrupted > yet again? The lock will be leaked. Also, when the {{InterruptedException}} > is caught in the try block, the thread's interrupted flag is cleared. The > flag is not reset in this code and therefore we lose the fact that this > thread has been interrupted. This flag should be preserved so that other > code paths will know that it's time to exit. > {code:java} > if (tryNum > 1) { > Thread.sleep(sleepTime); > } > unlockPrimitive(hiveLock, parent, curatorFramework); > break; > } catch (Exception e) { > if (tryNum >= numRetriesForUnLock) { > String name = ((ZooKeeperHiveLock)hiveLock).getPath(); > throw new LockException("Node " + name + " can not be deleted after > " + numRetriesForUnLock + " attempts.", > e); > } > } > {code} > ... related... the sleep here may be interrupted, but we still need to delete > the lock (again, for fear of leaking it). This sleep should be > uninterruptible. If we need to get the lock deleted and there's a problem, > interrupting the sleep will cause the code to eventually exit and locks will > be leaked. > It also requires a bunch more TLC. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
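The two fixes suggested above (preserve the interrupted flag, and make the retry sleep uninterruptible so the lock deletion is still attempted) can be sketched generically. This is not the HIVE-21469 patch; `deleteLock` is a hypothetical stand-in for the `curatorFramework.delete().forPath(...)` call.

```java
// Generic sketch of the interrupt-handling fixes described above:
// remember any interrupt during the retry sleep instead of dropping it,
// keep retrying the delete, and restore the flag before returning.
public class UninterruptibleUnlock {
    interface LockDeleter { void deleteLock() throws Exception; }  // stand-in for the ZK delete

    static void unlockWithRetries(LockDeleter deleter, int maxRetries, long sleepMs)
            throws Exception {
        boolean interrupted = false;
        try {
            for (int tryNum = 1; tryNum <= maxRetries; tryNum++) {
                if (tryNum > 1) {
                    // Uninterruptible sleep: record the interrupt and keep waiting.
                    long deadline = System.nanoTime() + sleepMs * 1_000_000L;
                    long remaining;
                    while ((remaining = deadline - System.nanoTime()) > 0) {
                        try {
                            Thread.sleep(remaining / 1_000_000L + 1);
                        } catch (InterruptedException ie) {
                            interrupted = true;   // do not lose the interrupt
                        }
                    }
                }
                try {
                    deleter.deleteLock();
                    return;                       // success, lock is gone
                } catch (Exception e) {
                    if (tryNum >= maxRetries) {
                        throw e;                  // give up after the last attempt
                    }
                }
            }
        } finally {
            if (interrupted) {
                Thread.currentThread().interrupt();   // restore the flag for callers
            }
        }
    }

    public static void main(String[] args) throws Exception {
        final int[] calls = {0};
        // Fails once with a transient error, then succeeds on the retry.
        unlockWithRetries(() -> {
            if (calls[0]++ == 0) throw new IllegalStateException("transient ZK error");
        }, 3, 10);
        System.out.println(calls[0]);   // 2
    }
}
```

The key design point is that the interrupt is neither swallowed (the flag is restored in `finally`) nor allowed to abort the deletion, so the lock cannot be leaked by an ill-timed interrupt.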
[jira] [Updated] (HIVE-21469) Review of ZooKeeperHiveLockManager
[ https://issues.apache.org/jira/browse/HIVE-21469?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] David Mollitor updated HIVE-21469: -- Attachment: HIVE-21469.2.patch -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-21468) Case sensitivity in identifier names for JDBC storage handler
[ https://issues.apache.org/jira/browse/HIVE-21468?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez updated HIVE-21468: --- Attachment: HIVE-21468.02.patch > Case sensitivity in identifier names for JDBC storage handler > - > > Key: HIVE-21468 > URL: https://issues.apache.org/jira/browse/HIVE-21468 > Project: Hive > Issue Type: Bug > Components: CBO >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez >Priority: Major > Attachments: HIVE-21468.01.patch, HIVE-21468.02.patch, > HIVE-21468.patch > > > Currently, when Calcite generates the SQL query for the JDBC storage handler, > it ignores capitalization of identifier names, which can lead to > errors at execution time (even though the query itself is properly generated). -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-21304) Show Bucketing version for ReduceSinkOp in explain extended plan
[ https://issues.apache.org/jira/browse/HIVE-21304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16796230#comment-16796230 ] Hive QA commented on HIVE-21304: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 44s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 14s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 50s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 4m 29s{color} | {color:blue} ql in master has 2255 extant Findbugs warnings. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 6s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 40s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 12s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 12s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 49s{color} | {color:red} ql: The patch generated 2 new + 989 unchanged - 3 fixed = 991 total (was 992) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 27s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 4s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 18s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 26m 27s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-16574/dev-support/hive-personality.sh | | git revision | master / 36bd89d | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-16574/yetus/diff-checkstyle-ql.txt | | modules | C: ql U: ql | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-16574/yetus.txt | | Powered by | Apache Yetus http://yetus.apache.org | This message was automatically generated. > Show Bucketing version for ReduceSinkOp in explain extended plan > > > Key: HIVE-21304 > URL: https://issues.apache.org/jira/browse/HIVE-21304 > Project: Hive > Issue Type: Bug >Reporter: Deepak Jaiswal >Assignee: Zoltan Haindrich >Priority: Major > Attachments: HIVE-21304.01.patch > > > Show Bucketing version for ReduceSinkOp in explain extended plan. > This helps identify what hashing algorithm is being used by ReduceSinkOp. > > cc [~vgarg] -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-16924) Support distinct in presence of Group By
[ https://issues.apache.org/jira/browse/HIVE-16924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16796214#comment-16796214 ] Ashutosh Chauhan commented on HIVE-16924: - please remove ql/src/test/queries/clientnegative/udaf_invalid_place.q too, it's not needed anymore. > Support distinct in presence of Group By > - > > Key: HIVE-16924 > URL: https://issues.apache.org/jira/browse/HIVE-16924 > Project: Hive > Issue Type: New Feature > Components: Query Planning >Reporter: Carter Shanklin >Assignee: Miklos Gergely >Priority: Major > Labels: pull-request-available > Fix For: 4.0.0 > > Attachments: HIVE-16924.01.patch, HIVE-16924.02.patch, > HIVE-16924.03.patch, HIVE-16924.04.patch, HIVE-16924.05.patch, > HIVE-16924.06.patch, HIVE-16924.07.patch, HIVE-16924.08.patch, > HIVE-16924.09.patch, HIVE-16924.10.patch, HIVE-16924.11.patch, > HIVE-16924.12.patch, HIVE-16924.13.patch, HIVE-16924.14.patch, > HIVE-16924.15.patch, HIVE-16924.16.patch, HIVE-16924.17.patch, > HIVE-16924.18.patch, HIVE-16924.19.patch, HIVE-16924.20.patch, > HIVE-16924.21.patch, HIVE-16924.22.patch, HIVE-16924.23.patch, > HIVE-16924.24.patch, HIVE-16924.25.patch, HIVE-16924.26.patch, > HIVE-16924.27.patch > > Time Spent: 4h 20m > Remaining Estimate: 0h > > {code:sql} > create table e011_01 (c1 int, c2 smallint); > insert into e011_01 values (1, 1), (2, 2); > {code} > These queries should work: > {code:sql} > select distinct c1, count(*) from e011_01 group by c1; > select distinct c1, avg(c2) from e011_01 group by c1; > {code} > Currently, you get: > FAILED: SemanticException 1:52 SELECT DISTINCT and GROUP BY can not be in the > same query. Error encountered near token 'c1' -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-21386) Extend the fetch task enhancement done in HIVE-21279 to make it work with query result cache
[ https://issues.apache.org/jira/browse/HIVE-21386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vineet Garg updated HIVE-21386: --- Status: Patch Available (was: Open) > Extend the fetch task enhancement done in HIVE-21279 to make it work with > query result cache > > > Key: HIVE-21386 > URL: https://issues.apache.org/jira/browse/HIVE-21386 > Project: Hive > Issue Type: Improvement >Reporter: Vineet Garg >Assignee: Vineet Garg >Priority: Major > Attachments: HIVE-21386.1.patch > > > The improvement done in HIVE-21279 is disabled for query cache. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-21304) Show Bucketing version for ReduceSinkOp in explain extended plan
[ https://issues.apache.org/jira/browse/HIVE-21304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16796213#comment-16796213 ] Vineet Garg commented on HIVE-21304: Curious why {{bucketingVersion}} is part of {{Operator/OperatorDesc}}. Since it is RS-specific, shouldn't it be part of only {{ReduceSinkDesc}}? -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-21455) Too verbose logging in AvroGenericRecordReader
[ https://issues.apache.org/jira/browse/HIVE-21455?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16796200#comment-16796200 ] Hive QA commented on HIVE-21455: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12962836/HIVE-21455.patch {color:red}ERROR:{color} -1 due to build exiting with an error Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/16573/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/16573/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-16573/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Tests exited with: NonZeroExitCodeException Command 'bash /data/hiveptest/working/scratch/source-prep.sh' failed with exit status 1 and output '+ date '+%Y-%m-%d %T.%3N' 2019-03-19 15:28:47.035 + [[ -n /usr/lib/jvm/java-8-openjdk-amd64 ]] + export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64 + JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64 + export PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games + PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games + export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m ' + ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m ' + export 'MAVEN_OPTS=-Xmx1g ' + MAVEN_OPTS='-Xmx1g ' + cd /data/hiveptest/working/ + tee /data/hiveptest/logs/PreCommit-HIVE-Build-16573/source-prep.txt + [[ false == \t\r\u\e ]] + mkdir -p maven ivy + [[ git = \s\v\n ]] + [[ git = \g\i\t ]] + [[ -z master ]] + [[ -d apache-github-source-source ]] + [[ ! -d apache-github-source-source/.git ]] + [[ ! 
-d apache-github-source-source ]] + date '+%Y-%m-%d %T.%3N' 2019-03-19 15:28:47.038 + cd apache-github-source-source + git fetch origin + git reset --hard HEAD HEAD is now at 36bd89d HIVE-16924 : Support distinct in presence of Group By (Miklos Gergely via Zoltan Haindrich) + git clean -f -d Removing standalone-metastore/metastore-server/src/gen/ + git checkout master Already on 'master' Your branch is up-to-date with 'origin/master'. + git reset --hard origin/master HEAD is now at 36bd89d HIVE-16924 : Support distinct in presence of Group By (Miklos Gergely via Zoltan Haindrich) + git merge --ff-only origin/master Already up-to-date. + date '+%Y-%m-%d %T.%3N' 2019-03-19 15:28:47.700 + rm -rf ../yetus_PreCommit-HIVE-Build-16573 + mkdir ../yetus_PreCommit-HIVE-Build-16573 + git gc + cp -R . ../yetus_PreCommit-HIVE-Build-16573 + mkdir /data/hiveptest/logs/PreCommit-HIVE-Build-16573/yetus + patchCommandPath=/data/hiveptest/working/scratch/smart-apply-patch.sh + patchFilePath=/data/hiveptest/working/scratch/build.patch + [[ -f /data/hiveptest/working/scratch/build.patch ]] + chmod +x /data/hiveptest/working/scratch/smart-apply-patch.sh + /data/hiveptest/working/scratch/smart-apply-patch.sh /data/hiveptest/working/scratch/build.patch error: a/ql/src/java/org/apache/hadoop/hive/ql/io/avro/AvroGenericRecordReader.java: does not exist in index error: patch failed: ql/src/java/org/apache/hadoop/hive/ql/io/avro/AvroGenericRecordReader.java:134 Falling back to three-way merge... Applied patch to 'ql/src/java/org/apache/hadoop/hive/ql/io/avro/AvroGenericRecordReader.java' with conflicts. Going to apply patch with: git apply -p1 error: patch failed: ql/src/java/org/apache/hadoop/hive/ql/io/avro/AvroGenericRecordReader.java:134 Falling back to three-way merge... Applied patch to 'ql/src/java/org/apache/hadoop/hive/ql/io/avro/AvroGenericRecordReader.java' with conflicts. 
U ql/src/java/org/apache/hadoop/hive/ql/io/avro/AvroGenericRecordReader.java + result=1 + '[' 1 -ne 0 ']' + rm -rf yetus_PreCommit-HIVE-Build-16573 + exit 1 ' {noformat} This message is automatically generated. ATTACHMENT ID: 12962836 - PreCommit-HIVE-Build > Too verbose logging in AvroGenericRecordReader > -- > > Key: HIVE-21455 > URL: https://issues.apache.org/jira/browse/HIVE-21455 > Project: Hive > Issue Type: Improvement > Components: HiveServer2 >Affects Versions: 1.2.0, 1.1.0, 2.0.0, 2.1.0, 3.0.0, 3.1.0 >Reporter: Miklos Szurap >Assignee: Miklos Szurap >Priority: Minor > Attachments: HIVE-21455.patch > > > {{AvroGenericRecordReader}} logs the Avro schema for each datafile. It is too > verbose, likely we don't need to log that on INFO level. > For example a table: > {noformat} > create table avro_tbl (c1 string, c2 int, c3 float) stored as avro; > {noformat} > and querying it with a select star - with 3 datafiles HiveServer2 logs the > following: > {noformat} > 2019-03-15 09:18:35,999 INFO org.apache.hadoop.mapred.FileInputFormat: > [HiveServer2-Handler-Pool: Thread-64]: Total input paths to process : 3 > 2019-03-15 09:18:35,999
[jira] [Commented] (HIVE-16924) Support distinct in presence of Group By
[ https://issues.apache.org/jira/browse/HIVE-16924?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16796194#comment-16796194 ] Jesus Camacho Rodriguez commented on HIVE-16924: [~ashutoshc], [~mgergely], this patch deleted {{ql/src/test/results/clientnegative/udaf_invalid_place.q.out}}, which causes tests to fail. If this was intentional, we need to delete {{ql/src/test/queries/clientnegative/udaf_invalid_place.q}} too. If it was not, we need to regenerate the {{q.out}} file. Could you push an addendum or let me know? Thanks -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-21446) Hive Server going OOM during hive external table replications
[ https://issues.apache.org/jira/browse/HIVE-21446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16796189#comment-16796189 ] Hive QA commented on HIVE-21446: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12962948/HIVE-21446.03.patch {color:green}SUCCESS:{color} +1 due to 2 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 5 failed/errored test(s), 15814 tests executed *Failed tests:* {noformat} TestBeeLineDriver - did not produce a TEST-*.xml file (likely timed out) (batchId=275) TestDummy - did not produce a TEST-*.xml file (likely timed out) (batchId=275) TestMinimrCliDriver - did not produce a TEST-*.xml file (likely timed out) (batchId=275) org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[udaf_invalid_place] (batchId=99) org.apache.hive.service.cli.session.TestSessionManagerMetrics.testAbandonedSessionMetrics (batchId=234) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/16572/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/16572/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-16572/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 5 tests failed {noformat} This message is automatically generated. 
ATTACHMENT ID: 12962948 - PreCommit-HIVE-Build > Hive Server going OOM during hive external table replications > - > > Key: HIVE-21446 > URL: https://issues.apache.org/jira/browse/HIVE-21446 > Project: Hive > Issue Type: Bug > Components: repl >Affects Versions: 4.0.0 >Reporter: mahesh kumar behera >Assignee: mahesh kumar behera >Priority: Major > Labels: pull-request-available > Fix For: 4.0.0 > > Attachments: HIVE-21446.01.patch, HIVE-21446.02.patch, > HIVE-21446.03.patch > > Time Spent: 2h 40m > Remaining Estimate: 0h > > The file system objects opened using proxy users are not closed. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
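The leak described in HIVE-21446 (file system objects opened as proxy users and never closed) can be sketched with a toy per-user handle registry. This is only an illustration of the failure mode and its fix; all class and method names below are invented (in Hadoop itself, the analogous cleanup call is FileSystem.closeAllForUGI).

```java
import java.io.Closeable;
import java.util.HashMap;
import java.util.Map;

// Toy model of the leak: file-system handles are cached per proxy user,
// so each replication task adds an entry that is never released unless
// someone explicitly closes that user's handles when the task finishes.
public class PerUserFsRegistry {
    static class FsHandle implements Closeable {
        boolean open = true;
        @Override public void close() { open = false; }
    }

    private final Map<String, FsHandle> cache = new HashMap<>();

    FsHandle getForUser(String user) {
        return cache.computeIfAbsent(user, u -> new FsHandle());
    }

    /** Close and drop every handle owned by the given proxy user. */
    void closeAllForUser(String user) {
        FsHandle fs = cache.remove(user);
        if (fs != null) {
            fs.close();
        }
    }

    public static void main(String[] args) {
        PerUserFsRegistry registry = new PerUserFsRegistry();
        FsHandle fs = registry.getForUser("repl-proxy-user");
        try {
            // ... copy external table data as the proxy user ...
        } finally {
            registry.closeAllForUser("repl-proxy-user");  // release the cached handle
        }
        System.out.println(fs.open);   // false
    }
}
```

Without the `finally` cleanup, each replicated table would leave one live handle in the cache, which is the kind of unbounded growth that leads to the OOM reported above.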
[jira] [Commented] (HIVE-21304) Show Bucketing version for ReduceSinkOp in explain extended plan
[ https://issues.apache.org/jira/browse/HIVE-21304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16796181#comment-16796181 ] Deepak Jaiswal commented on HIVE-21304: --- [~kgyrtkirk] thanks for taking this up. I don't mind. I looked at the patch. Hope it runs clean. For some reason, I don't remember why I put this in the Operator instead of the conf. I do remember putting it in the conf and facing problems due to it. Please let me know if you get a clean run for a code review. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Resolved] (HIVE-21475) SparkClientUtilities::urlFromPathString should handle viewfs to avoid UDF ClassNotFoundExcpetion
[ https://issues.apache.org/jira/browse/HIVE-21475?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rajesh Balamohan resolved HIVE-21475. - Resolution: Duplicate Already fixed in master. Closing this as duplicate of HIVE-16292. > SparkClientUtilities::urlFromPathString should handle viewfs to avoid UDF > ClassNotFoundExcpetion > > > Key: HIVE-21475 > URL: https://issues.apache.org/jira/browse/HIVE-21475 > Project: Hive > Issue Type: Bug >Reporter: Rajesh Balamohan >Priority: Trivial > > e.g error > {noformat} > childOperators (org.apache.hadoop.hive.ql.exec.TableScanOperator) > aliasToWork (org.apache.hadoop.hive.ql.plan.MapWork) > invertedWorkGraph (org.apache.hadoop.hive.ql.plan.SparkWork) > at > org.apache.hive.com.esotericsoftware.kryo.util.DefaultClassResolver.readName(DefaultClassResolver.java:156) > at > org.apache.hive.com.esotericsoftware.kryo.util.DefaultClassResolver.readClass(DefaultClassResolver.java:133) > at > org.apache.hive.com.esotericsoftware.kryo.Kryo.readClass(Kryo.java:670) > at > org.apache.hadoop.hive.ql.exec.SerializationUtilities$KryoWithHooks.readClass(SerializationUtilities.java:181) > at > org.apache.hive.com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:118) > at > org.apache.hive.com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:551) > at > org.apache.hive.com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:790) > at > org.apache.hadoop.hive.ql.exec.SerializationUtilities$KryoWithHooks.readClassAndObject(SerializationUtilities.java:176) > at > org.apache.hive.com.esotericsoftware.kryo.serializers.MapSerializer.read(MapSerializer.java:161) > at > org.apache.hive.com.esotericsoftware.kryo.serializers.MapSerializer.read(MapSerializer.java:39) > at > org.apache.hive.com.esotericsoftware.kryo.Kryo.readObject(Kryo.java:708) > at > org.apache.hadoop.hive.ql.exec.SerializationUtilities$KryoWithHooks.readObject(SerializationUtilities.java:214) > at > 
org.apache.hive.com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:125) > at > org.apache.hive.com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:551) > at > org.apache.hive.com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:790) > at > org.apache.hadoop.hive.ql.exec.SerializationUtilities$KryoWithHooks.readClassAndObject(SerializationUtilities.java:176) > at > org.apache.hive.com.esotericsoftware.kryo.serializers.CollectionSerializer.read(CollectionSerializer.java:134) > at > org.apache.hive.com.esotericsoftware.kryo.serializers.CollectionSerializer.read(CollectionSerializer.java:40) > at > org.apache.hive.com.esotericsoftware.kryo.Kryo.readObject(Kryo.java:708) > at > org.apache.hadoop.hive.ql.exec.SerializationUtilities$KryoWithHooks.readObject(SerializationUtilities.java:214) > at > org.apache.hive.com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:125) > at > org.apache.hive.com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:551) > at > org.apache.hive.com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:790) > at > org.apache.hadoop.hive.ql.exec.SerializationUtilities$KryoWithHooks.readClassAndObject(SerializationUtilities.java:176) > at > org.apache.hive.com.esotericsoftware.kryo.serializers.MapSerializer.read(MapSerializer.java:161) > at > org.apache.hive.com.esotericsoftware.kryo.serializers.MapSerializer.read(MapSerializer.java:39) > at > org.apache.hive.com.esotericsoftware.kryo.Kryo.readObject(Kryo.java:708) > at > org.apache.hadoop.hive.ql.exec.SerializationUtilities$KryoWithHooks.readObject(SerializationUtilities.java:214) > at > org.apache.hive.com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:125) > at > org.apache.hive.com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:551) > at > org.apache.hive.com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:790) > at > 
org.apache.hadoop.hive.ql.exec.SerializationUtilities$KryoWithHooks.readClassAndObject(SerializationUtilities.java:176) > at > org.apache.hive.com.esotericsoftware.kryo.serializers.MapSerializer.read(MapSerializer.java:153) > at > org.apache.hive.com.esotericsoftware.kryo.serializers.MapSerializer.read(MapSerializer.java:39) > at > org.apache.hive.com.esotericsoftware.kryo.Kryo.readObject(Kryo.java:708) > at > org.apache.hadoop.hive.ql.exec.SerializationUtilities$KryoWithHooks.readObject(SerializationUtilities.java:214) > at >
[jira] [Updated] (HIVE-21475) SparkClientUtilities::urlFromPathString should handle viewfs to avoid UDF ClassNotFoundExcpetion
[ https://issues.apache.org/jira/browse/HIVE-21475?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rajesh Balamohan updated HIVE-21475: Description: e.g error {noformat} childOperators (org.apache.hadoop.hive.ql.exec.TableScanOperator) aliasToWork (org.apache.hadoop.hive.ql.plan.MapWork) invertedWorkGraph (org.apache.hadoop.hive.ql.plan.SparkWork) at org.apache.hive.com.esotericsoftware.kryo.util.DefaultClassResolver.readName(DefaultClassResolver.java:156) at org.apache.hive.com.esotericsoftware.kryo.util.DefaultClassResolver.readClass(DefaultClassResolver.java:133) at org.apache.hive.com.esotericsoftware.kryo.Kryo.readClass(Kryo.java:670) at org.apache.hadoop.hive.ql.exec.SerializationUtilities$KryoWithHooks.readClass(SerializationUtilities.java:181) at org.apache.hive.com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:118) at org.apache.hive.com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:551) at org.apache.hive.com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:790) at org.apache.hadoop.hive.ql.exec.SerializationUtilities$KryoWithHooks.readClassAndObject(SerializationUtilities.java:176) at org.apache.hive.com.esotericsoftware.kryo.serializers.MapSerializer.read(MapSerializer.java:161) at org.apache.hive.com.esotericsoftware.kryo.serializers.MapSerializer.read(MapSerializer.java:39) at org.apache.hive.com.esotericsoftware.kryo.Kryo.readObject(Kryo.java:708) at org.apache.hadoop.hive.ql.exec.SerializationUtilities$KryoWithHooks.readObject(SerializationUtilities.java:214) at org.apache.hive.com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:125) at org.apache.hive.com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:551) at org.apache.hive.com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:790) at org.apache.hadoop.hive.ql.exec.SerializationUtilities$KryoWithHooks.readClassAndObject(SerializationUtilities.java:176) at 
org.apache.hive.com.esotericsoftware.kryo.serializers.CollectionSerializer.read(CollectionSerializer.java:134) at org.apache.hive.com.esotericsoftware.kryo.serializers.CollectionSerializer.read(CollectionSerializer.java:40) at org.apache.hive.com.esotericsoftware.kryo.Kryo.readObject(Kryo.java:708) at org.apache.hadoop.hive.ql.exec.SerializationUtilities$KryoWithHooks.readObject(SerializationUtilities.java:214) at org.apache.hive.com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:125) at org.apache.hive.com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:551) at org.apache.hive.com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:790) at org.apache.hadoop.hive.ql.exec.SerializationUtilities$KryoWithHooks.readClassAndObject(SerializationUtilities.java:176) at org.apache.hive.com.esotericsoftware.kryo.serializers.MapSerializer.read(MapSerializer.java:161) at org.apache.hive.com.esotericsoftware.kryo.serializers.MapSerializer.read(MapSerializer.java:39) at org.apache.hive.com.esotericsoftware.kryo.Kryo.readObject(Kryo.java:708) at org.apache.hadoop.hive.ql.exec.SerializationUtilities$KryoWithHooks.readObject(SerializationUtilities.java:214) at org.apache.hive.com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:125) at org.apache.hive.com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:551) at org.apache.hive.com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:790) at org.apache.hadoop.hive.ql.exec.SerializationUtilities$KryoWithHooks.readClassAndObject(SerializationUtilities.java:176) at org.apache.hive.com.esotericsoftware.kryo.serializers.MapSerializer.read(MapSerializer.java:153) at org.apache.hive.com.esotericsoftware.kryo.serializers.MapSerializer.read(MapSerializer.java:39) at org.apache.hive.com.esotericsoftware.kryo.Kryo.readObject(Kryo.java:708) at 
org.apache.hadoop.hive.ql.exec.SerializationUtilities$KryoWithHooks.readObject(SerializationUtilities.java:214) at org.apache.hive.com.esotericsoftware.kryo.serializers.ObjectField.read(ObjectField.java:125) at org.apache.hive.com.esotericsoftware.kryo.serializers.FieldSerializer.read(FieldSerializer.java:551) at org.apache.hive.com.esotericsoftware.kryo.Kryo.readObject(Kryo.java:686) at org.apache.hadoop.hive.ql.exec.SerializationUtilities$KryoWithHooks.readObject(SerializationUtilities.java:206) at org.apache.hadoop.hive.ql.exec.spark.KryoSerializer.deserialize(KryoSerializer.java:60) {noformat} > SparkClientUtilities::urlFromPathString should handle viewfs to avoid UDF >
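The issue above is that a `viewfs://` path string cannot be handed straight to the UDF classloader as a URL. A minimal pure-Java sketch of the detection step follows; the class and method names are illustrative, not the actual `SparkClientUtilities` code, and the "localize first" handling is an assumption about the shape of the fix.

```java
import java.net.URI;

// Simplified model: turning an added-jar path string into something a
// classloader can use. Schemes like viewfs have no registered URL stream
// handler, so such paths must be detected and resolved (e.g. copied to a
// local path) before a URL is built. Names here are hypothetical.
public class ViewFsUrlSketch {
    // Schemes a plain URLClassLoader cannot read directly (assumption).
    static boolean needsLocalization(String path) {
        String scheme = URI.create(path).getScheme();
        return "viewfs".equals(scheme);
    }

    public static String describe(String path) {
        return needsLocalization(path)
                ? "localize-then-load: " + path
                : "load-directly: " + path;
    }

    public static void main(String[] args) {
        System.out.println(describe("viewfs://ns1/udfs/my-udf.jar"));
        System.out.println(describe("file:///tmp/my-udf.jar"));
    }
}
```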
[jira] [Commented] (HIVE-21446) Hive Server going OOM during hive external table replications
[ https://issues.apache.org/jira/browse/HIVE-21446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16796145#comment-16796145 ] Hive QA commented on HIVE-21446: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 38s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 19s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 57s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 19s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 18s{color} | {color:blue} shims/common in master has 6 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 25s{color} | {color:blue} shims/0.23 in master has 7 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 37s{color} | {color:blue} common in master has 63 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 4m 10s{color} | {color:blue} ql in master has 2255 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 38s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 27s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 17s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 50s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 50s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 9s{color} | {color:green} shims/common: The patch generated 0 new + 94 unchanged - 1 fixed = 94 total (was 95) {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 10s{color} | {color:green} The patch 0.23 passed checkstyle {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 12s{color} | {color:green} The patch common passed checkstyle {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 39s{color} | {color:red} ql: The patch generated 2 new + 19 unchanged - 2 fixed = 21 total (was 21) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 6m 5s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 35s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 14s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 34m 10s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-16572/dev-support/hive-personality.sh | | git revision | master / 36bd89d | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-16572/yetus/diff-checkstyle-ql.txt | | modules | C: shims/common shims/0.23 common ql U: . | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-16572/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. > Hive Server going OOM during hive external table replications > - > > Key: HIVE-21446 > URL: https://issues.apache.org/jira/browse/HIVE-21446 >
[jira] [Commented] (HIVE-21474) Bumping guava version
[ https://issues.apache.org/jira/browse/HIVE-21474?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16796128#comment-16796128 ] Zoltan Haindrich commented on HIVE-21474: - +1 pending tests > Bumping guava version > - > > Key: HIVE-21474 > URL: https://issues.apache.org/jira/browse/HIVE-21474 > Project: Hive > Issue Type: Task >Reporter: Peter Vary >Assignee: Peter Vary >Priority: Major > Attachments: HIVE-21474.patch > > > Bump guava to 24.1.1 -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-21469) Review of ZooKeeperHiveLockManager
[ https://issues.apache.org/jira/browse/HIVE-21469?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] David Mollitor updated HIVE-21469: -- Status: Open (was: Patch Available) > Review of ZooKeeperHiveLockManager > -- > > Key: HIVE-21469 > URL: https://issues.apache.org/jira/browse/HIVE-21469 > Project: Hive > Issue Type: Improvement > Components: Locking >Affects Versions: 4.0.0, 3.2.0 >Reporter: David Mollitor >Assignee: David Mollitor >Priority: Major > Attachments: HIVE-21469.1.patch > > > A lot of sins in this class to resolve: > {code:java} > @Override > public void setContext(HiveLockManagerCtx ctx) throws LockException { > try { > curatorFramework = CuratorFrameworkSingleton.getInstance(conf); > parent = conf.getVar(HiveConf.ConfVars.HIVE_ZOOKEEPER_NAMESPACE); > try{ > curatorFramework.create().withMode(CreateMode.PERSISTENT).forPath("/" > + parent, new byte[0]); > } catch (Exception e) { > // ignore if the parent already exists > if (!(e instanceof KeeperException) || ((KeeperException)e).code() != > KeeperException.Code.NODEEXISTS) { > LOG.warn("Unexpected ZK exception when creating parent node /" + > parent, e); > } > } > {code} > Every time a new session is created and this {{setContext}} method is called, > it attempts to create the root node. I have seen that, even though the root > node exists, a create node action is written to the ZK logs. Check first if > the node exists before trying to create it. > {code:java} > try { > curatorFramework.delete().forPath(zLock.getPath()); > } catch (InterruptedException ie) { > curatorFramework.delete().forPath(zLock.getPath()); > } > {code} > There have historically been quite a few bugs regarding leaked locks. The > Driver will signal the session {{Thread}} by performing an interrupt. That > interrupt can happen any time and it can kill a create/delete action within > the ZK framework. We can see one example of a workaround for this. If the ZK > action is interrupted, simply do it again. 
Well, what if it's interrupted > yet again? The lock will be leaked. Also, when the {{InterruptedException}} > is caught in the try block, the thread's interrupted flag is cleared. The > flag is not reset in this code and therefore we lose the fact that this > thread has been interrupted. This flag should be preserved so that other > code paths will know that it's time to exit. > {code:java} > if (tryNum > 1) { > Thread.sleep(sleepTime); > } > unlockPrimitive(hiveLock, parent, curatorFramework); > break; > } catch (Exception e) { > if (tryNum >= numRetriesForUnLock) { > String name = ((ZooKeeperHiveLock)hiveLock).getPath(); > throw new LockException("Node " + name + " can not be deleted after > " + numRetriesForUnLock + " attempts.", > e); > } > } > {code} > ... related... the sleep here may be interrupted, but we still need to delete > the lock (again, for fear of leaking it). This sleep should be > uninterruptible. If we need to get the lock deleted, and there's a problem, > interrupting the sleep will cause the code to eventually exit and locks will > be leaked. > It also requires a bunch more TLC. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-21034) Add option to schematool to drop Hive databases
[ https://issues.apache.org/jira/browse/HIVE-21034?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16796118#comment-16796118 ] Daniel Voros commented on HIVE-21034: - Attached patch #5, that removes the getter+setter methods and adds {{@VisibleForTesting}} to the {{yes}} field. > Add option to schematool to drop Hive databases > --- > > Key: HIVE-21034 > URL: https://issues.apache.org/jira/browse/HIVE-21034 > Project: Hive > Issue Type: Improvement >Reporter: Daniel Voros >Assignee: Daniel Voros >Priority: Major > Attachments: HIVE-21034.1.patch, HIVE-21034.2.patch, > HIVE-21034.2.patch, HIVE-21034.3.patch, HIVE-21034.4.patch, HIVE-21034.5.patch > > > An option to remove all Hive managed data could be a useful addition to > {{schematool}}. > I propose to introduce a new flag {{-dropAllDatabases}} that would *drop all > databases with CASCADE* to remove all data of managed tables. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-21034) Add option to schematool to drop Hive databases
[ https://issues.apache.org/jira/browse/HIVE-21034?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Daniel Voros updated HIVE-21034: Attachment: HIVE-21034.5.patch > Add option to schematool to drop Hive databases > --- > > Key: HIVE-21034 > URL: https://issues.apache.org/jira/browse/HIVE-21034 > Project: Hive > Issue Type: Improvement >Reporter: Daniel Voros >Assignee: Daniel Voros >Priority: Major > Attachments: HIVE-21034.1.patch, HIVE-21034.2.patch, > HIVE-21034.2.patch, HIVE-21034.3.patch, HIVE-21034.4.patch, HIVE-21034.5.patch > > > An option to remove all Hive managed data could be a useful addition to > {{schematool}}. > I propose to introduce a new flag {{-dropAllDatabases}} that would *drop all > databases with CASCADE* to remove all data of managed tables. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
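A `-dropAllDatabases` option as proposed above would boil down to issuing one `DROP DATABASE ... CASCADE` per database. A small sketch of the statement generation follows; it only builds the SQL strings (a real implementation would execute them over a Hive connection), and the skip-`default` detail is an assumption, not taken from the patch.

```java
import java.util.List;
import java.util.stream.Collectors;

// Generates the DROP statements a -dropAllDatabases run might issue.
// The "default" database cannot be dropped, only emptied, so it is
// skipped here (assumption about how the option would behave).
public class DropAllDatabasesSketch {
    public static List<String> dropStatements(List<String> databases) {
        return databases.stream()
                .filter(db -> !"default".equals(db))
                .map(db -> "DROP DATABASE `" + db + "` CASCADE")
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        dropStatements(List.of("default", "sales", "staging"))
                .forEach(System.out::println);
    }
}
```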
[jira] [Commented] (HIVE-21460) ACID: Load data followed by a select * query results in incorrect results
[ https://issues.apache.org/jira/browse/HIVE-21460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16796113#comment-16796113 ] Victor Corral commented on HIVE-21460: -- http://mail-archives.apache.org/mod_mbox/infra-issues/201903.mbox/raw/.com] > ACID: Load data followed by a select * query results in incorrect results > - > > Key: HIVE-21460 > URL: https://issues.apache.org/jira/browse/HIVE-21460 > Project: Hive > Issue Type: Bug > Components: Transactions >Affects Versions: 4.0.0, 3.1.1 >Reporter: Vaibhav Gumashta >Assignee: Vaibhav Gumashta >Priority: Blocker > Attachments: HIVE-21460.1.patch > > > This affects current master as well. Created an orc file such that it spans > multiple stripes and ran a simple select *, and got incorrect row counts > (when comparing with select count(*). The problem seems to be that after > split generation and creating min/max rowId for each row (note that since the > loaded file is not written by Hive ACID, it does not have ROW__ID in the > file; but the ROW__ID is applied on read by discovering min/max bounds which > are used for calculating ROW__ID.rowId for each row of a split), Hive is only > reading the last split. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-21460) ACID: Load data followed by a select * query results in incorrect results
[ https://issues.apache.org/jira/browse/HIVE-21460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16796112#comment-16796112 ] Hive QA commented on HIVE-21460: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12962943/HIVE-21460.1.patch {color:red}ERROR:{color} -1 due to no test(s) being added or modified. {color:red}ERROR:{color} -1 due to 2 failed/errored test(s), 15833 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[vector_groupby_reduce] (batchId=61) org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[udaf_invalid_place] (batchId=99) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/16571/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/16571/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-16571/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 2 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12962943 - PreCommit-HIVE-Build > ACID: Load data followed by a select * query results in incorrect results > - > > Key: HIVE-21460 > URL: https://issues.apache.org/jira/browse/HIVE-21460 > Project: Hive > Issue Type: Bug > Components: Transactions >Affects Versions: 4.0.0, 3.1.1 >Reporter: Vaibhav Gumashta >Assignee: Vaibhav Gumashta >Priority: Blocker > Attachments: HIVE-21460.1.patch > > > This affects current master as well. Created an orc file such that it spans > multiple stripes and ran a simple select *, and got incorrect row counts > (when comparing with select count(*). 
The problem seems to be that after > split generation and creating min/max rowId for each row (note that since the > loaded file is not written by Hive ACID, it does not have ROW__ID in the > file; but the ROW__ID is applied on read by discovering min/max bounds which > are used for calculating ROW__ID.rowId for each row of a split), Hive is only > reading the last split. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
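The read-side ROW__ID synthesis described above can be modeled with a toy sketch: a file loaded via LOAD DATA has no ROW__ID column, so each split is assigned a rowId range on read, and the counts only come out right if every split is consumed. The split sizes and class name below are illustrative only, not taken from the ORC file in the report.

```java
import java.util.List;

// Toy model of per-split rowId synthesis for a non-ACID file read by an
// ACID table. Reading all splits gives the true row count; reading only
// the last split reproduces the mismatch between select * and count(*).
public class RowIdSynthesisSketch {
    // Row counts per split of the loaded file (illustrative numbers).
    static final List<Integer> SPLIT_ROW_COUNTS = List.of(1000, 1000, 500);

    // Total rows when every split synthesizes its [min, min + rows) range.
    static long readAllSplits() {
        long nextMinRowId = 0;
        for (int rows : SPLIT_ROW_COUNTS) {
            nextMinRowId += rows; // this split covers [min, min + rows)
        }
        return nextMinRowId;
    }

    // What the bug produces: only the last split's rows are returned.
    static long readOnlyLastSplit() {
        return SPLIT_ROW_COUNTS.get(SPLIT_ROW_COUNTS.size() - 1);
    }

    public static void main(String[] args) {
        System.out.println("select count(*): " + readAllSplits());        // 2500
        System.out.println("select * row count: " + readOnlyLastSplit()); // 500
    }
}
```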
[jira] [Updated] (HIVE-21474) Bumping guava version
[ https://issues.apache.org/jira/browse/HIVE-21474?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Vary updated HIVE-21474: -- Status: Patch Available (was: Open) > Bumping guava version > - > > Key: HIVE-21474 > URL: https://issues.apache.org/jira/browse/HIVE-21474 > Project: Hive > Issue Type: Task >Reporter: Peter Vary >Assignee: Peter Vary >Priority: Major > Attachments: HIVE-21474.patch > > > Bump guava to 24.1.1 -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-21474) Bumping guava version
[ https://issues.apache.org/jira/browse/HIVE-21474?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16796076#comment-16796076 ] Peter Vary commented on HIVE-21474: --- [~kgyrtkirk]: The only somewhat meaningful change is in BasicStats class. Could you please review? Thanks, Peter > Bumping guava version > - > > Key: HIVE-21474 > URL: https://issues.apache.org/jira/browse/HIVE-21474 > Project: Hive > Issue Type: Task >Reporter: Peter Vary >Assignee: Peter Vary >Priority: Major > Attachments: HIVE-21474.patch > > > Bump guava to 24.1.1 -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-21474) Bumping guava version
[ https://issues.apache.org/jira/browse/HIVE-21474?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Vary updated HIVE-21474: -- Attachment: HIVE-21474.patch > Bumping guava version > - > > Key: HIVE-21474 > URL: https://issues.apache.org/jira/browse/HIVE-21474 > Project: Hive > Issue Type: Task >Reporter: Peter Vary >Assignee: Peter Vary >Priority: Major > Attachments: HIVE-21474.patch > > > Bump guava to 24.1.1 -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-21473) Bumping jackson version
[ https://issues.apache.org/jira/browse/HIVE-21473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16796065#comment-16796065 ] Marta Kuczora commented on HIVE-21473: -- +1 pending tests > Bumping jackson version > --- > > Key: HIVE-21473 > URL: https://issues.apache.org/jira/browse/HIVE-21473 > Project: Hive > Issue Type: Task >Reporter: Peter Vary >Assignee: Peter Vary >Priority: Major > Attachments: HIVE-21473.patch > > > Bump jackson version to 2.9.8 -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-21460) ACID: Load data followed by a select * query results in incorrect results
[ https://issues.apache.org/jira/browse/HIVE-21460?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16796069#comment-16796069 ] Hive QA commented on HIVE-21460: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 36s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 11s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 42s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 4m 15s{color} | {color:blue} ql in master has 2255 extant Findbugs warnings. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 3s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 33s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 11s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 11s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 42s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 18s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 3s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 14s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 25m 22s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-16571/dev-support/hive-personality.sh | | git revision | master / 36bd89d | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | modules | C: ql U: ql | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-16571/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. > ACID: Load data followed by a select * query results in incorrect results > - > > Key: HIVE-21460 > URL: https://issues.apache.org/jira/browse/HIVE-21460 > Project: Hive > Issue Type: Bug > Components: Transactions >Affects Versions: 4.0.0, 3.1.1 >Reporter: Vaibhav Gumashta >Assignee: Vaibhav Gumashta >Priority: Blocker > Attachments: HIVE-21460.1.patch > > > This affects current master as well. Created an orc file such that it spans > multiple stripes and ran a simple select *, and got incorrect row counts > (when comparing with select count(*). 
The problem seems to be that after > split generation and creating min/max rowId for each row (note that since the > loaded file is not written by Hive ACID, it does not have ROW__ID in the > file; but the ROW__ID is applied on read by discovering min/max bounds which > are used for calculating ROW__ID.rowId for each row of a split), Hive is only > reading the last split. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-21473) Bumping jackson version
[ https://issues.apache.org/jira/browse/HIVE-21473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16796053#comment-16796053 ] Peter Vary commented on HIVE-21473: --- [~kuczoram]: Could you please review? Thanks, Peter > Bumping jackson version > --- > > Key: HIVE-21473 > URL: https://issues.apache.org/jira/browse/HIVE-21473 > Project: Hive > Issue Type: Task >Reporter: Peter Vary >Assignee: Peter Vary >Priority: Major > Attachments: HIVE-21473.patch > > > Bump jackson version to 2.9.8 -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (HIVE-21474) Bumping guava version
[ https://issues.apache.org/jira/browse/HIVE-21474?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Vary reassigned HIVE-21474: - > Bumping guava version > - > > Key: HIVE-21474 > URL: https://issues.apache.org/jira/browse/HIVE-21474 > Project: Hive > Issue Type: Task >Reporter: Peter Vary >Assignee: Peter Vary >Priority: Major > > Bump guava to 24.1.1 -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-21473) Bumping jackson version
[ https://issues.apache.org/jira/browse/HIVE-21473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Vary updated HIVE-21473: -- Status: Patch Available (was: Open) > Bumping jackson version > --- > > Key: HIVE-21473 > URL: https://issues.apache.org/jira/browse/HIVE-21473 > Project: Hive > Issue Type: Task >Reporter: Peter Vary >Assignee: Peter Vary >Priority: Major > Attachments: HIVE-21473.patch > > > Bump jackson version to 2.9.8 -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-21473) Bumping jackson version
[ https://issues.apache.org/jira/browse/HIVE-21473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Vary updated HIVE-21473: -- Attachment: HIVE-21473.patch > Bumping jackson version > --- > > Key: HIVE-21473 > URL: https://issues.apache.org/jira/browse/HIVE-21473 > Project: Hive > Issue Type: Task >Reporter: Peter Vary >Assignee: Peter Vary >Priority: Major > Attachments: HIVE-21473.patch > > > Bump jackson version to 2.9.8 -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (HIVE-21473) Bumping jackson version
[ https://issues.apache.org/jira/browse/HIVE-21473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Vary reassigned HIVE-21473: - > Bumping jackson version > --- > > Key: HIVE-21473 > URL: https://issues.apache.org/jira/browse/HIVE-21473 > Project: Hive > Issue Type: Task >Reporter: Peter Vary >Assignee: Peter Vary >Priority: Major > > Bump jackson version to 2.9.8 -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-21430) INSERT into a dynamically partitioned table with hive.stats.autogather = false throws a MetaException
[ https://issues.apache.org/jira/browse/HIVE-21430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16796040#comment-16796040 ] Hive QA commented on HIVE-21430: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12962926/HIVE-21430.03.patch {color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 15834 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[udaf_invalid_place] (batchId=99) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/16570/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/16570/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-16570/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 1 tests failed {noformat} This message is automatically generated. 
ATTACHMENT ID: 12962926 - PreCommit-HIVE-Build > INSERT into a dynamically partitioned table with hive.stats.autogather = > false throws a MetaException > - > > Key: HIVE-21430 > URL: https://issues.apache.org/jira/browse/HIVE-21430 > Project: Hive > Issue Type: Bug >Reporter: Ashutosh Bapat >Assignee: Ashutosh Bapat >Priority: Major > Labels: pull-request-available > Fix For: 4.0.0 > > Attachments: HIVE-21430.01.patch, HIVE-21430.02.patch, > HIVE-21430.03.patch, metaexception_repro.patch, > org.apache.hadoop.hive.ql.stats.TestStatsUpdaterThread-output.txt > > Original Estimate: 48h > Time Spent: 50m > Remaining Estimate: 47h 10m > > When the test TestStatsUpdaterThread#testTxnDynamicPartitions added in the > attached patch is run it throws exception (full logs attached.) > org.apache.hadoop.hive.metastore.api.MetaException: Cannot change stats state > for a transactional table default.simple_stats without providing the > transactional write state for verification (new write ID 5, valid write IDs > null; current state \{"BASIC_STATS":"true","COLUMN_STATS":{"s":"true"}}; new > state null > at > org.apache.hadoop.hive.metastore.ObjectStore.alterPartitionNoTxn(ObjectStore.java:4328) > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-21430) INSERT into a dynamically partitioned table with hive.stats.autogather = false throws a MetaException
[ https://issues.apache.org/jira/browse/HIVE-21430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16796008#comment-16796008 ] Hive QA commented on HIVE-21430: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 27s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 10s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 40s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 4m 8s{color} | {color:blue} ql in master has 2255 extant Findbugs warnings. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 7s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 37s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 17s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 17s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 42s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 20s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 1s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 14s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 25m 17s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-16570/dev-support/hive-personality.sh | | git revision | master / 36bd89d | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | modules | C: ql U: ql | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-16570/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. > INSERT into a dynamically partitioned table with hive.stats.autogather = > false throws a MetaException > - > > Key: HIVE-21430 > URL: https://issues.apache.org/jira/browse/HIVE-21430 > Project: Hive > Issue Type: Bug >Reporter: Ashutosh Bapat >Assignee: Ashutosh Bapat >Priority: Major > Labels: pull-request-available > Fix For: 4.0.0 > > Attachments: HIVE-21430.01.patch, HIVE-21430.02.patch, > HIVE-21430.03.patch, metaexception_repro.patch, > org.apache.hadoop.hive.ql.stats.TestStatsUpdaterThread-output.txt > > Original Estimate: 48h > Time Spent: 50m > Remaining Estimate: 47h 10m > > When the test TestStatsUpdaterThread#testTxnDynamicPartitions added in the > attached patch is run it throws exception (full logs attached.) 
> org.apache.hadoop.hive.metastore.api.MetaException: Cannot change stats state > for a transactional table default.simple_stats without providing the > transactional write state for verification (new write ID 5, valid write IDs > null; current state \{"BASIC_STATS":"true","COLUMN_STATS":{"s":"true"}}; new > state null > at > org.apache.hadoop.hive.metastore.ObjectStore.alterPartitionNoTxn(ObjectStore.java:4328) > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-21446) Hive Server going OOM during hive external table replications
[ https://issues.apache.org/jira/browse/HIVE-21446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16795990#comment-16795990 ] Sankar Hariappan commented on HIVE-21446: - +1 for 03.patch, pending tests > Hive Server going OOM during hive external table replications > - > > Key: HIVE-21446 > URL: https://issues.apache.org/jira/browse/HIVE-21446 > Project: Hive > Issue Type: Bug > Components: repl >Affects Versions: 4.0.0 >Reporter: mahesh kumar behera >Assignee: mahesh kumar behera >Priority: Major > Labels: pull-request-available > Fix For: 4.0.0 > > Attachments: HIVE-21446.01.patch, HIVE-21446.02.patch, > HIVE-21446.03.patch > > Time Spent: 2h 40m > Remaining Estimate: 0h > > The file system objects opened using proxy users are not closed. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-21456) Hive Metastore HTTP Thrift
[ https://issues.apache.org/jira/browse/HIVE-21456?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16795987#comment-16795987 ] Hive QA commented on HIVE-21456: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12962912/HIVE-21456.3.patch {color:red}ERROR:{color} -1 due to no test(s) being added or modified. {color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 15833 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[udaf_invalid_place] (batchId=99) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/16569/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/16569/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-16569/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 1 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12962912 - PreCommit-HIVE-Build > Hive Metastore HTTP Thrift > -- > > Key: HIVE-21456 > URL: https://issues.apache.org/jira/browse/HIVE-21456 > Project: Hive > Issue Type: New Feature > Components: Metastore, Standalone Metastore >Reporter: Amit Khanna >Assignee: Amit Khanna >Priority: Major > Attachments: HIVE-21456.2.patch, HIVE-21456.3.patch, HIVE-21456.patch > > > Hive Metastore currently doesn't have support for HTTP transport because of > which it is not possible to access it via Knox. Adding support for Thrift > over HTTP transport will allow the clients to access via Knox -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-21456) Hive Metastore HTTP Thrift
[ https://issues.apache.org/jira/browse/HIVE-21456?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16795962#comment-16795962 ] Hive QA commented on HIVE-21456: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 36s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 6s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 2s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 30s{color} | {color:green} master passed {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 2m 30s{color} | {color:blue} standalone-metastore/metastore-common in master has 29 extant Findbugs warnings. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 1m 17s{color} | {color:blue} standalone-metastore/metastore-server in master has 179 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 17s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 8s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 3s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 3s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 29s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch has 2 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 4s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 20s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 13s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 24m 24s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-16569/dev-support/hive-personality.sh | | git revision | master / 36bd89d | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | whitespace | http://104.198.109.242/logs//PreCommit-HIVE-Build-16569/yetus/whitespace-eol.txt | | modules | C: standalone-metastore/metastore-common standalone-metastore/metastore-server U: standalone-metastore | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-16569/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. > Hive Metastore HTTP Thrift > -- > > Key: HIVE-21456 > URL: https://issues.apache.org/jira/browse/HIVE-21456 > Project: Hive > Issue Type: New Feature > Components: Metastore, Standalone Metastore >Reporter: Amit Khanna >Assignee: Amit Khanna >Priority: Major > Attachments: HIVE-21456.2.patch, HIVE-21456.3.patch, HIVE-21456.patch > > > Hive Metastore currently doesn't have support for HTTP transport because of > which it is not possible to access it via Knox. Adding support for Thrift > over HTTP transport will allow the clients to access via Knox -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-21283) Create Synonym mid for substr, position for locate
[ https://issues.apache.org/jira/browse/HIVE-21283?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mani M updated HIVE-21283: -- Status: Patch Available (was: In Progress) Resubmitting the same patch to avoid failures caused by flaky test cases > Create Synonym mid for substr, position for locate > > > Key: HIVE-21283 > URL: https://issues.apache.org/jira/browse/HIVE-21283 > Project: Hive > Issue Type: New Feature >Reporter: Mani M >Assignee: Mani M >Priority: Minor > Labels: UDF, pull-request-available, todoc4.0 > Fix For: 4.0.0 > > Attachments: HIVE.21283.03.PATCH, HIVE.21283.04.PATCH, > HIVE.21283.05.PATCH, HIVE.21283.06.PATCH, HIVE.21283.07.PATCH, > HIVE.21283.08.PATCH, HIVE.21283.09.PATCH, HIVE.21283.2.PATCH, > HIVE.21283.PATCH, image-2019-03-16-21-31-15-541.png, > image-2019-03-16-21-33-18-898.png > > Time Spent: 2.5h > Remaining Estimate: 0h > > Create new synonyms for the existing functions > > Mid for substr > position for locate -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (HIVE-21290) Restore historical way of handling timestamps in Parquet while keeping the new semantics at the same time
[ https://issues.apache.org/jira/browse/HIVE-21290?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Karen Coppage reassigned HIVE-21290: Assignee: Karen Coppage > Restore historical way of handling timestamps in Parquet while keeping the > new semantics at the same time > - > > Key: HIVE-21290 > URL: https://issues.apache.org/jira/browse/HIVE-21290 > Project: Hive > Issue Type: Sub-task >Reporter: Zoltan Ivanfi >Assignee: Karen Coppage >Priority: Major > > This sub-task is for implementing the Parquet-specific parts of the following > plan: > h1. Problem > Historically, the semantics of the TIMESTAMP type in Hive depended on the > file format. Timestamps in Avro, Parquet and RCFiles with a binary SerDe had > _Instant_ semantics, while timestamps in ORC, textfiles and RCFiles with a > text SerDe had _LocalDateTime_ semantics. > The Hive community wanted to get rid of this inconsistency and have > _LocalDateTime_ semantics in Avro, Parquet and RCFiles with a binary SerDe as > well. *Hive 3.1 turned off normalization to UTC* to achieve this. While this > leads to the desired new semantics, it also leads to incorrect results when > new Hive versions read timestamps written by old Hive versions or when old > Hive versions or any other component not aware of this change (including > legacy Impala and Spark versions) read timestamps written by new Hive > versions. > h1. Solution > To work around this issue, Hive *should restore the practice of normalizing > to UTC* when writing timestamps to Avro, Parquet and RCFiles with a binary > SerDe. In itself, this would restore the historical _Instant_ semantics, > which is undesirable. In order to achieve the desired _LocalDateTime_ > semantics in spite of normalizing to UTC, newer Hive versions should record > the session-local local time zone in the file metadata fields serving > arbitrary key-value storage purposes. 
> When reading back files with this time zone metadata, newer Hive versions (or > any other new component aware of this extra metadata) can achieve > _LocalDateTime_ semantics by *converting from UTC to the saved time zone > (instead of to the local time zone)*. Legacy components that are unaware of > the new metadata can read the files without any problem and the timestamps > will show the historical Instant behaviour to them. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
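The write/read scheme described above can be modeled in a short sketch. This is an illustrative round-trip only, not Hive's actual Parquet implementation, and the metadata key name `writer.time.zone` is a hypothetical stand-in for whatever key-value field the file format provides:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

def write_timestamp(local_ts: datetime, session_zone: str):
    """Writer side: normalize the session-local wall-clock time to UTC and
    record the session time zone in the file's key-value metadata."""
    utc_ts = local_ts.replace(tzinfo=ZoneInfo(session_zone)).astimezone(timezone.utc)
    metadata = {"writer.time.zone": session_zone}  # hypothetical metadata key
    return utc_ts, metadata

def read_timestamp(utc_ts: datetime, metadata: dict) -> datetime:
    """New-reader side: convert from UTC back to the *saved* zone (instead of
    the reader's local zone), recovering LocalDateTime semantics. A legacy
    reader that ignores the metadata sees the historical Instant behaviour."""
    zone = metadata.get("writer.time.zone", "UTC")
    return utc_ts.astimezone(ZoneInfo(zone)).replace(tzinfo=None)

local = datetime(2019, 3, 19, 12, 0)          # wall-clock time in the session zone
stored, meta = write_timestamp(local, "Europe/Budapest")
assert read_timestamp(stored, meta) == local   # round-trips regardless of reader's zone
```

The point of the sketch is that the stored value is a plain UTC instant, so older components read it correctly, while the extra metadata lets newer readers reconstruct the original wall-clock time.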
[jira] [Updated] (HIVE-21304) Show Bucketing version for ReduceSinkOp in explain extended plan
[ https://issues.apache.org/jira/browse/HIVE-21304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zoltan Haindrich updated HIVE-21304: Attachment: HIVE-21304.01.patch > Show Bucketing version for ReduceSinkOp in explain extended plan > > > Key: HIVE-21304 > URL: https://issues.apache.org/jira/browse/HIVE-21304 > Project: Hive > Issue Type: Bug >Reporter: Deepak Jaiswal >Assignee: Zoltan Haindrich >Priority: Major > Attachments: HIVE-21304.01.patch > > > Show Bucketing version for ReduceSinkOp in explain extended plan. > This helps identify what hashing algorithm is being used by ReduceSinkOp. > > cc [~vgarg] -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (HIVE-20934) ACID: Query based compactor for minor compaction
[ https://issues.apache.org/jira/browse/HIVE-20934?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aron Hamvas reassigned HIVE-20934: -- Assignee: Aron Hamvas (was: Vaibhav Gumashta) > ACID: Query based compactor for minor compaction > > > Key: HIVE-20934 > URL: https://issues.apache.org/jira/browse/HIVE-20934 > Project: Hive > Issue Type: Sub-task > Components: Transactions >Affects Versions: 3.1.1 >Reporter: Vaibhav Gumashta >Assignee: Aron Hamvas >Priority: Major > > Follow up of HIVE-20699. This is to enable running minor compactions as a > HiveQL query -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-13479) Relax sorting requirement in ACID tables
[ https://issues.apache.org/jira/browse/HIVE-13479?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16795933#comment-16795933 ] Abhishek Somani commented on HIVE-13479: What I meant is: now that ACID v2 has been implemented, do we plan to work on relaxing the sorting requirement? As far as I know, we still enforce that rows be sorted on the acid columns (row id), so that the reader can sort-merge the delete events with the insert events while reading. Isn't that right? If so, the only way to have the data sorted on another, user-specified column seems to be to initially insert the data ordered on that column, so that it is sorted BOTH on the acid columns and on the user-specified column. If, however, we were able to relax the requirement that data HAS to be sorted on the acid columns, we could use something like compaction to sort the data on the user's desired columns in the background. Theoretically one could do such sorting in compaction even today, but as long as the sorting requirement is not relaxed, we would need to sort both on row ids and on the user column; that would require compaction to behave like an insert overwrite and generate new row ids so that the data is sorted on both the (new) row id columns and the user-specified column, which would be good to avoid. Have I understood this correctly? > Relax sorting requirement in ACID tables > > > Key: HIVE-13479 > URL: https://issues.apache.org/jira/browse/HIVE-13479 > Project: Hive > Issue Type: New Feature > Components: Transactions >Affects Versions: 1.2.0 >Reporter: Eugene Koifman >Assignee: Eugene Koifman >Priority: Major > Original Estimate: 160h > Remaining Estimate: 160h > > Currently ACID tables require data to be sorted according to internal primary > key. This is so that base + delta files can be efficiently sort/merged to > produce the snapshot for the current transaction. 
> This prevents the user from sorting the table on any other criteria, > which can be useful. One example is using a dynamic partition insert (which > also occurs for update/delete SQL). This may create lots of writers > (buckets*partitions) and tax cluster resources. > The usual solution is hive.optimize.sort.dynamic.partition=true, which won't > be honored for ACID tables. > We could rely on a hash-table-based algorithm to merge delta files and then not > require any particular sort on Acid tables. One way to do that is to treat > each update event as an Insert (new internal PK) + Delete (old PK). Delete > events are very small since they just need to contain PKs. So the hash table > would just need to contain Delete events and be reasonably memory efficient. > This is a significant amount of work but worth doing. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
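The hash-table merge idea in the description above can be sketched in a few lines. This is an illustrative model only, not Hive's reader: rows are keyed by an internal primary key, delete events (which are small, holding just PKs) are loaded into a hash set, and insert events can then be streamed in any order with no sort-merge required:

```python
def merge_snapshot(insert_events, delete_events):
    """Yield the rows visible in the snapshot: every insert whose PK has no
    matching delete event. An update is modeled as Insert(new PK) + Delete(old PK)."""
    deleted = set(delete_events)        # hash table of deleted PKs
    for pk, row in insert_events:       # inserts may arrive unsorted
        if pk not in deleted:
            yield row

inserts = [(3, "c"), (1, "a"), (2, "b")]   # deliberately NOT sorted by PK
deletes = [2]                              # delete event for PK 2
assert list(merge_snapshot(inserts, deletes)) == ["c", "a"]
```

Because only the delete events need to fit in the hash table, the data files themselves could then be sorted on whatever user-specified column is useful, which is exactly the relaxation being discussed.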
[jira] [Commented] (HIVE-20580) OrcInputFormat.isOriginal() should not rely on hive.acid.key.index
[ https://issues.apache.org/jira/browse/HIVE-20580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16795934#comment-16795934 ] Peter Vary commented on HIVE-20580: --- [~ekoifman]: The {{isOriginal(Footer footer)}} is called from the {{org.apache.hadoop.hive.llap.io.metadata.OrcFileMetadata}} constructor, and the value set by the constructor is only used to implement the {{org.apache.orc.FileMetadata.isOriginalFormat()}} interface method, which in turn is not called anywhere in the Hive code base. Since this is an external interface, I'd rather not leave the method unimplemented, even if it is not called anywhere at the moment. On the other hand, you confirmed that I correctly understood the meaning of isOriginal, so I think it is ok to fix the output of the other method to match as well. Thanks for the help [~ekoifman]!!! Really appreciate it! Peter > OrcInputFormat.isOriginal() should not rely on hive.acid.key.index > -- > > Key: HIVE-20580 > URL: https://issues.apache.org/jira/browse/HIVE-20580 > Project: Hive > Issue Type: Improvement > Components: Transactions >Affects Versions: 3.1.0 >Reporter: Eugene Koifman >Assignee: Peter Vary >Priority: Major > Attachments: HIVE-20580.2.patch, HIVE-20580.3.patch, > HIVE-20580.4.patch, HIVE-20580.5.patch, HIVE-20580.6.patch, HIVE-20580.patch > > > {{org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.isOriginal()}} is checking > for the presence of {{hive.acid.key.index}} in the footer. This is only created > when the file is written by {{OrcRecordUpdater}}. It should instead check > for the presence of Acid metadata columns so that a file can be produced by > something other than {{OrcRecordUpdater}}. > Also, {{hive.acid.key.index}} counts the number of each type of event, which > is not really useful for Acid V2 (as of Hive 3) since each file only has 1 > type of event. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-21283) Create Synonym mid for substr, position for locate
[ https://issues.apache.org/jira/browse/HIVE-21283?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mani M updated HIVE-21283: -- Attachment: HIVE.21283.09.PATCH > Create Synonym mid for substr, position for locate > > > Key: HIVE-21283 > URL: https://issues.apache.org/jira/browse/HIVE-21283 > Project: Hive > Issue Type: New Feature >Reporter: Mani M >Assignee: Mani M >Priority: Minor > Labels: UDF, pull-request-available, todoc4.0 > Fix For: 4.0.0 > > Attachments: HIVE.21283.03.PATCH, HIVE.21283.04.PATCH, > HIVE.21283.05.PATCH, HIVE.21283.06.PATCH, HIVE.21283.07.PATCH, > HIVE.21283.08.PATCH, HIVE.21283.09.PATCH, HIVE.21283.2.PATCH, > HIVE.21283.PATCH, image-2019-03-16-21-31-15-541.png, > image-2019-03-16-21-33-18-898.png > > Time Spent: 2.5h > Remaining Estimate: 0h > > Create new synonyms for the existing functions > > Mid for substr > position for locate -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-21283) Create Synonym mid for substr, position for locate
[ https://issues.apache.org/jira/browse/HIVE-21283?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mani M updated HIVE-21283: -- Status: In Progress (was: Patch Available) > Create Synonym mid for substr, position for locate > > > Key: HIVE-21283 > URL: https://issues.apache.org/jira/browse/HIVE-21283 > Project: Hive > Issue Type: New Feature >Reporter: Mani M >Assignee: Mani M >Priority: Minor > Labels: UDF, pull-request-available, todoc4.0 > Fix For: 4.0.0 > > Attachments: HIVE.21283.03.PATCH, HIVE.21283.04.PATCH, > HIVE.21283.05.PATCH, HIVE.21283.06.PATCH, HIVE.21283.07.PATCH, > HIVE.21283.08.PATCH, HIVE.21283.09.PATCH, HIVE.21283.2.PATCH, > HIVE.21283.PATCH, image-2019-03-16-21-31-15-541.png, > image-2019-03-16-21-33-18-898.png > > Time Spent: 2.5h > Remaining Estimate: 0h > > Create new synonyms for the existing functions > > Mid for substr > position for locate -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-21468) Case sensitivity in identifier names for JDBC storage handler
[ https://issues.apache.org/jira/browse/HIVE-21468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16795930#comment-16795930 ] Hive QA commented on HIVE-21468: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12962902/HIVE-21468.01.patch {color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 2 failed/errored test(s), 15834 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[external_jdbc_negative] (batchId=100) org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[udaf_invalid_place] (batchId=99) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/16568/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/16568/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-16568/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 2 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12962902 - PreCommit-HIVE-Build > Case sensitivity in identifier names for JDBC storage handler > - > > Key: HIVE-21468 > URL: https://issues.apache.org/jira/browse/HIVE-21468 > Project: Hive > Issue Type: Bug > Components: CBO >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez >Priority: Major > Attachments: HIVE-21468.01.patch, HIVE-21468.patch > > > Currently, when Calcite generates the SQL query for the JDBC storage handler, > it ignores the capitalization of identifier names, which can lead to > errors at execution time (though the query is otherwise properly generated). 
-- This message was sent by Atlassian JIRA (v7.6.3#76005)