[jira] [Commented] (HIVE-17481) LLAP workload management
[ https://issues.apache.org/jira/browse/HIVE-17481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16445395#comment-16445395 ] Prasanth Jayachandran commented on HIVE-17481: -- [~thai.bui] with these settings, are you still seeing issues with tasks not getting moved to a different pool, or with small queries not completing sooner (starved of resources by a big query)? > LLAP workload management > > > Key: HIVE-17481 > URL: https://issues.apache.org/jira/browse/HIVE-17481 > Project: Hive > Issue Type: New Feature >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin >Priority: Major > Fix For: 3.0.0 > > Attachments: Workload management design doc.pdf > > > This effort is intended to improve various aspects of cluster sharing for > LLAP. Some of these are applicable to non-LLAP queries and may later be > extended to all queries. Administrators will be able to specify and apply > policies for workload management ("resource plans") that apply to the entire > cluster, with only one resource plan being active at a time. The policies > will be created and modified using new Hive DDL statements. > The policies will cover: > * Dividing the cluster into a set of (optionally, nested) query pools that > are each allocated a fraction of the cluster, a set query parallelism, > resource sharing policy between queries, and potentially others like > priority, etc. > * Mapping the incoming queries into pools based on the query user, groups, > explicit configuration, etc. > * Specifying rules that perform actions on queries based on counter values > (e.g. killing or moving queries). > One would also be able to switch policies on a live cluster without (usually) > affecting running queries, including e.g. to change policies for daytime and > nighttime usage patterns, and other similar scenarios. The switches would be > safe and atomic; versioning may eventually be supported. 
> Some implementation details: > * WM will only be supported in HS2 (for obvious reasons). > * All LLAP query AMs will run in "interactive" YARN queue and will be > fungible between Hive pools. > * We will use the concept of "guaranteed tasks" (also known as ducks) to > enforce cluster allocation without a central scheduler and without > compromising throughput. Guaranteed tasks preempt other (speculative) tasks > and are distributed from HS2 to AMs, and from AMs to tasks, in accordance > with percentage allocations in the policy. Each "duck" corresponds to a CPU > resource on the cluster. The implementation will be isolated so as to allow > different ones later. > * In future, we may consider improved task placement and late binding, > similar to the ones described in Sparrow paper, to work around potential > hotspots/etc. that are not avoided with the decentralized scheme. > * Only one HS2 will initially be supported to avoid split-brain workload > management. We will also implement (in a tangential set of work items) > active-passive HS2 recovery. Eventually, we intend to switch to full > active-active HS2 configuration with shared WM and Tez session pool (unlike > the current case with 2 separate session pools). -- This message was sent by Atlassian JIRA (v7.6.3#76005)
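The resource-plan policies described in the issue are managed through DDL. As a rough illustration only (the keyword spellings and clause names below are assumptions inferred from the design's direction, not verbatim from the shipped implementation), a plan with two pools, a user mapping, and a counter-based rule might look like:

```sql
-- Illustrative sketch only: exact keywords may differ from the actual DDL.
CREATE RESOURCE PLAN daytime;

-- Pools with cluster fractions and per-pool query parallelism.
CREATE POOL daytime.bi  WITH ALLOC_FRACTION = 0.80, QUERY_PARALLELISM = 5;
CREATE POOL daytime.etl WITH ALLOC_FRACTION = 0.20, QUERY_PARALLELISM = 20;

-- Map incoming queries into pools by user (groups/config also supported).
CREATE USER MAPPING 'analyst' IN daytime TO bi;

-- Act on queries based on counter values, e.g. move long-running queries.
CREATE RULE downgrade IN daytime WHEN total_runtime > 3000 THEN MOVE etl;

-- Atomically switch the live cluster to this plan.
ALTER RESOURCE PLAN daytime ACTIVATE;
```

Switching between, say, a daytime and a nighttime plan would then be a single ACTIVATE statement, consistent with the "safe and atomic" switching described above.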
[jira] [Commented] (HIVE-19209) Streaming ingest record writers should accept input stream
[ https://issues.apache.org/jira/browse/HIVE-19209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16445392#comment-16445392 ] Prasanth Jayachandran commented on HIVE-19209: -- This patch depends on HIVE-19211. Not marking it Patch Available until HIVE-19211 goes in. The patch is ready for review, though. [~ashutoshc] can you please take a look? > Streaming ingest record writers should accept input stream > -- > > Key: HIVE-19209 > URL: https://issues.apache.org/jira/browse/HIVE-19209 > Project: Hive > Issue Type: Sub-task > Components: Streaming >Affects Versions: 3.0.0, 3.1.0 >Reporter: Prasanth Jayachandran >Assignee: Prasanth Jayachandran >Priority: Major > Attachments: HIVE-19209.1.patch > > > Record writers in streaming ingest currently accept byte[]. Provide an > option for clients to pass in an input stream directly, from which the byte[] > for a record can be constructed. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19209) Streaming ingest record writers should accept input stream
[ https://issues.apache.org/jira/browse/HIVE-19209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Prasanth Jayachandran updated HIVE-19209: - Attachment: HIVE-19209.1.patch > Streaming ingest record writers should accept input stream > -- > > Key: HIVE-19209 > URL: https://issues.apache.org/jira/browse/HIVE-19209 > Project: Hive > Issue Type: Sub-task > Components: Streaming >Affects Versions: 3.0.0, 3.1.0 >Reporter: Prasanth Jayachandran >Assignee: Prasanth Jayachandran >Priority: Major > Attachments: HIVE-19209.1.patch > > > Record writers in streaming ingest currently accept byte[]. Provide an > option for clients to pass in an input stream directly, from which the byte[] > for a record can be constructed. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
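The byte[]-from-stream construction the description mentions is straightforward. A minimal sketch, with a hypothetical class and method name (this is not the actual streaming ingest writer API):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;

public class StreamRecord {
    // Drain an InputStream into a byte[] record -- the conversion the issue
    // proposes to move inside the record writer, so clients can hand over a
    // stream instead of materializing the byte[] themselves.
    static byte[] readRecord(InputStream in) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        byte[] chunk = new byte[4096];
        int n;
        while ((n = in.read(chunk)) != -1) {
            buf.write(chunk, 0, n);
        }
        return buf.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        byte[] record = readRecord(
                new ByteArrayInputStream("a,b,c".getBytes(StandardCharsets.UTF_8)));
        System.out.println(new String(record, StandardCharsets.UTF_8)); // prints "a,b,c"
    }
}
```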
[jira] [Commented] (HIVE-18423) Hive should support usage of external tables using jdbc
[ https://issues.apache.org/jira/browse/HIVE-18423?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16445376#comment-16445376 ] Hive QA commented on HIVE-18423: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 1s{color} | {color:blue} Findbugs executables are not available. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 47s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 23s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 47s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 9s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 36s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 14s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 49s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 49s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle 
{color} | {color:red} 0m 10s{color} | {color:red} jdbc-handler: The patch generated 8 new + 47 unchanged - 1 fixed = 55 total (was 48) {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 46s{color} | {color:red} ql: The patch generated 132 new + 377 unchanged - 0 fixed = 509 total (was 377) {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch has 89 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 27s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 15s{color} | {color:red} The patch generated 1 ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 21m 19s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-10361/dev-support/hive-personality.sh | | git revision | master / 6c4adc9 | | Default Java | 1.8.0_111 | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-10361/yetus/diff-checkstyle-jdbc-handler.txt | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-10361/yetus/diff-checkstyle-ql.txt | | whitespace | http://104.198.109.242/logs//PreCommit-HIVE-Build-10361/yetus/whitespace-eol.txt | | asflicense | http://104.198.109.242/logs//PreCommit-HIVE-Build-10361/yetus/patch-asflicense-problems.txt | | modules | C: common jdbc-handler ql U: . 
| | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-10361/yetus.txt | | Powered by | Apache Yetus http://yetus.apache.org | This message was automatically generated. > Hive should support usage of external tables using jdbc > --- > > Key: HIVE-18423 > URL: https://issues.apache.org/jira/browse/HIVE-18423 > Project: Hive > Issue Type: Improvement >Reporter: Jonathan Doron >Assignee: Jonathan Doron >Priority: Major > Labels: pull-request-available > Fix For: 3.1.0 > > Attachments: HIVE-18423.1.patch, HIVE-18423.2.patch > > > Hive should support the usage of external JDBC tables (and not only external > tables that hold queries), so a Hive user would be able to use the external > table as a Hive internal table. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
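The jdbc-handler module touched by this patch suggests table creation along the following lines. This is an illustrative sketch only: the storage-handler class and the hive.sql.* property names are assumptions based on Hive's JDBC storage handler, not verbatim from this patch:

```sql
-- Illustrative only; handler class and property names are assumptions.
CREATE EXTERNAL TABLE remote_orders (
  order_id INT,
  amount   DOUBLE
)
STORED BY 'org.apache.hive.storage.JdbcStorageHandler'
TBLPROPERTIES (
  "hive.sql.database.type" = "MYSQL",
  "hive.sql.jdbc.driver"   = "com.mysql.jdbc.Driver",
  "hive.sql.jdbc.url"      = "jdbc:mysql://dbhost/sales",
  "hive.sql.table"         = "orders"
);

-- The point of the issue: the external table then behaves like an internal one.
SELECT SUM(amount) FROM remote_orders;
```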
[jira] [Updated] (HIVE-19248) REPL LOAD doesn't throw error if file copy fails.
[ https://issues.apache.org/jira/browse/HIVE-19248?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sankar Hariappan updated HIVE-19248: Description: Hive replication uses Hadoop distcp to copy files from primary to replica warehouse. If the HDFS block size is different across clusters, it causes file copy failures. {code:java} 2018-04-09 14:32:06,690 ERROR [main] org.apache.hadoop.tools.mapred.CopyMapper: Failure in copying hdfs://chelsea/apps/hive/warehouse/tpch_flat_orc_1000.db/customer/000259_0 to hdfs://marilyn/apps/hive/warehouse/tpch_flat_orc_1000.db/customer/.hive-staging_hive_2018-04-09_14-30-45_723_7153496419225102220-2/-ext-10001/000259_0 java.io.IOException: File copy failed: hdfs://chelsea/apps/hive/warehouse/tpch_flat_orc_1000.db/customer/000259_0 --> hdfs://marilyn/apps/hive/warehouse/tpch_flat_orc_1000.db/customer/.hive-staging_hive_2018-04-09_14-30-45_723_7153496419225102220-2/-ext-10001/000259_0 at org.apache.hadoop.tools.mapred.CopyMapper.copyFileWithRetry(CopyMapper.java:299) at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:266) at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:52) at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:146) at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:787) at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341) at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:170) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1869) at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:164) Caused by: java.io.IOException: Couldn't run retriable-command: Copying hdfs://chelsea/apps/hive/warehouse/tpch_flat_orc_1000.db/customer/000259_0 to hdfs://marilyn/apps/hive/warehouse/tpch_flat_orc_1000.db/customer/.hive-staging_hive_2018-04-09_14-30-45_723_7153496419225102220-2/-ext-10001/000259_0 at 
org.apache.hadoop.tools.util.RetriableCommand.execute(RetriableCommand.java:101) at org.apache.hadoop.tools.mapred.CopyMapper.copyFileWithRetry(CopyMapper.java:296) ... 10 more Caused by: java.io.IOException: Check-sum mismatch between hdfs://chelsea/apps/hive/warehouse/tpch_flat_orc_1000.db/customer/000259_0 and hdfs://marilyn/apps/hive/warehouse/tpch_flat_orc_1000.db/customer/.hive-staging_hive_2018-04-09_14-30-45_723_7153496419225102220-2/-ext-10001/.distcp.tmp.attempt_1522833620762_4416_m_00_0. Source and target differ in block-size. Use -pb to preserve block-sizes during copy. Alternatively, skip checksum-checks altogether, using -skipCrc. (NOTE: By skipping checksums, one runs the risk of masking data-corruption during file-transfer.) at org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.compareCheckSums(RetriableFileCopyCommand.java:212) at org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.doCopy(RetriableFileCopyCommand.java:130) at org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.doExecute(RetriableFileCopyCommand.java:99) at org.apache.hadoop.tools.util.RetriableCommand.execute(RetriableCommand.java:87) ... 11 more {code} But REPL LOAD returns success even if the distcp jobs failed, and CopyUtils.doCopyRetry doesn't throw an error if the copy failed even after the maximum number of attempts. So, three things need to be done. # If the copy of multiple files fails for some reason, check whether any files were completely copied by verifying the checksum and file size, and skip those from retry. # If the source path was moved to the CM path, then delete the incorrectly copied files. # If the copy still fails after the maximum number of attempts, then throw an error. was: Hive replication uses Hadoop distcp to copy files from primary to replica warehouse. If the HDFS block size is different across clusters, it causes file copy failures. 
{code:java} 2018-04-09 14:32:06,690 ERROR [main] org.apache.hadoop.tools.mapred.CopyMapper: Failure in copying hdfs://chelsea/apps/hive/warehouse/tpch_flat_orc_1000.db/customer/000259_0 to hdfs://marilyn/apps/hive/warehouse/tpch_flat_orc_1000.db/customer/.hive-staging_hive_2018-04-09_14-30-45_723_7153496419225102220-2/-ext-10001/000259_0 java.io.IOException: File copy failed: hdfs://chelsea/apps/hive/warehouse/tpch_flat_orc_1000.db/customer/000259_0 --> hdfs://marilyn/apps/hive/warehouse/tpch_flat_orc_1000.db/customer/.hive-staging_hive_2018-04-09_14-30-45_723_7153496419225102220-2/-ext-10001/000259_0 at org.apache.hadoop.tools.mapred.CopyMapper.copyFileWithRetry(CopyMapper.java:299) at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:266) at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:52) at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:146) at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:787) at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341) at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java
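The fix list in the description (skip files already copied after verifying checksum and file size, clean up partial copies before retrying, and fail after the maximum number of attempts) can be sketched as a retry loop. This is a hedged illustration with hypothetical interfaces, not the actual CopyUtils code:

```java
import java.util.*;

public class CopyRetrySketch {
    // Hypothetical stand-ins for the filesystem operations; the names here
    // are illustrative, not the real CopyUtils/FileSystem API.
    interface Fs {
        boolean copy(String src, String dst);             // false on failure
        boolean sameChecksumAndSize(String src, String dst);
        void delete(String dst);
    }

    static void copyWithRetry(Fs fs, List<String> files, String dstDir, int maxAttempts) {
        List<String> pending = new ArrayList<>(files);
        for (int attempt = 1; !pending.isEmpty(); attempt++) {
            if (attempt > maxAttempts) {
                // Point 3: surface the failure instead of silently succeeding.
                throw new RuntimeException(
                        "copy failed after " + maxAttempts + " attempts: " + pending);
            }
            Iterator<String> it = pending.iterator();
            while (it.hasNext()) {
                String src = it.next();
                String dst = dstDir + "/" + src;
                // Point 1: skip files that were already copied completely.
                if (fs.sameChecksumAndSize(src, dst)) { it.remove(); continue; }
                // Point 2: remove a partial/incorrect copy before retrying.
                fs.delete(dst);
                if (fs.copy(src, dst)) { it.remove(); }
            }
        }
    }

    public static void main(String[] args) {
        final Map<String, String> store = new HashMap<>();
        Fs mem = new Fs() {
            int transientFailures = 1;  // first copy attempt fails, then succeeds
            public boolean copy(String src, String dst) {
                if (transientFailures-- > 0) return false;
                store.put(dst, src); return true;
            }
            public boolean sameChecksumAndSize(String src, String dst) {
                return src.equals(store.get(dst));
            }
            public void delete(String dst) { store.remove(dst); }
        };
        copyWithRetry(mem, Arrays.asList("f1", "f2"), "/repl", 3);
        System.out.println("copied: " + store.size()); // prints "copied: 2"
    }
}
```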
[jira] [Updated] (HIVE-19248) REPL LOAD doesn't throw error if file copy fails.
[ https://issues.apache.org/jira/browse/HIVE-19248?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sankar Hariappan updated HIVE-19248: Summary: REPL LOAD doesn't throw error if file copy fails. (was: Hive replication cause file copy failures if HDFS block size differs across clusters) > REPL LOAD doesn't throw error if file copy fails. > - > > Key: HIVE-19248 > URL: https://issues.apache.org/jira/browse/HIVE-19248 > Project: Hive > Issue Type: Bug > Components: HiveServer2, repl >Affects Versions: 3.0.0 >Reporter: Sankar Hariappan >Assignee: Sankar Hariappan >Priority: Major > Labels: DR, pull-request-available, replication > Fix For: 3.1.0 > > > Hive replication uses Hadoop distcp to copy files from primary to replica > warehouse. If the HDFS block size is different across clusters, it cause file > copy failures. > {code:java} > 2018-04-09 14:32:06,690 ERROR [main] > org.apache.hadoop.tools.mapred.CopyMapper: Failure in copying > hdfs://chelsea/apps/hive/warehouse/tpch_flat_orc_1000.db/customer/000259_0 to > hdfs://marilyn/apps/hive/warehouse/tpch_flat_orc_1000.db/customer/.hive-staging_hive_2018-04-09_14-30-45_723_7153496419225102220-2/-ext-10001/000259_0 > java.io.IOException: File copy failed: > hdfs://chelsea/apps/hive/warehouse/tpch_flat_orc_1000.db/customer/000259_0 > --> > hdfs://marilyn/apps/hive/warehouse/tpch_flat_orc_1000.db/customer/.hive-staging_hive_2018-04-09_14-30-45_723_7153496419225102220-2/-ext-10001/000259_0 > at > org.apache.hadoop.tools.mapred.CopyMapper.copyFileWithRetry(CopyMapper.java:299) > at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:266) > at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:52) > at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:146) > at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:787) > at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341) > at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:170) > at 
java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:422) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1869) > at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:164) > Caused by: java.io.IOException: Couldn't run retriable-command: Copying > hdfs://chelsea/apps/hive/warehouse/tpch_flat_orc_1000.db/customer/000259_0 to > hdfs://marilyn/apps/hive/warehouse/tpch_flat_orc_1000.db/customer/.hive-staging_hive_2018-04-09_14-30-45_723_7153496419225102220-2/-ext-10001/000259_0 > at > org.apache.hadoop.tools.util.RetriableCommand.execute(RetriableCommand.java:101) > at > org.apache.hadoop.tools.mapred.CopyMapper.copyFileWithRetry(CopyMapper.java:296) > ... 10 more > Caused by: java.io.IOException: Check-sum mismatch between > hdfs://chelsea/apps/hive/warehouse/tpch_flat_orc_1000.db/customer/000259_0 > and > hdfs://marilyn/apps/hive/warehouse/tpch_flat_orc_1000.db/customer/.hive-staging_hive_2018-04-09_14-30-45_723_7153496419225102220-2/-ext-10001/.distcp.tmp.attempt_1522833620762_4416_m_00_0. > Source and target differ in block-size. Use -pb to preserve block-sizes > during copy. Alternatively, skip checksum-checks altogether, using -skipCrc. > (NOTE: By skipping checksums, one runs the risk of masking data-corruption > during file-transfer.) > at > org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.compareCheckSums(RetriableFileCopyCommand.java:212) > at > org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.doCopy(RetriableFileCopyCommand.java:130) > at > org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.doExecute(RetriableFileCopyCommand.java:99) > at > org.apache.hadoop.tools.util.RetriableCommand.execute(RetriableCommand.java:87) > ... 11 more > {code} > Also, REPL LOAD returns success even if distcp jobs failed. > So, need to perform 2 things. > # Set proper options for distcp to preserve the block size and skip CRC > check. 
Use options such as *-pugpbx, -update.* > # If the copy of multiple files fails for some reason, check whether any files > were completely copied by verifying the checksum and file size, and skip those from > retry. > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
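The distcp options mentioned in the description would translate into an invocation roughly like the following. The paths are placeholders; -pb preserves the source block size (avoiding the checksum mismatch above) and -update skips files already copied:

```sh
# Illustrative sketch only: preserve user/group/permission/block-size/xattrs
# and skip already-copied files. Requires a Hadoop client on the path.
hadoop distcp -pugpbx -update \
  hdfs://primary/apps/hive/warehouse/db.db/t \
  hdfs://replica/apps/hive/warehouse/db.db/t
```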
[jira] [Commented] (HIVE-19137) orcfiledump doesn't print hive.acid.version value
[ https://issues.apache.org/jira/browse/HIVE-19137?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16445361#comment-16445361 ] Hive QA commented on HIVE-19137: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12919853/HIVE-19137.04.patch {color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 35 failed/errored test(s), 14279 tests executed *Failed tests:* {noformat} TestMinimrCliDriver - did not produce a TEST-*.xml file (likely timed out) (batchId=93) [infer_bucket_sort_num_buckets.q,infer_bucket_sort_reducers_power_two.q,parallel_orderby.q,bucket_num_reducers_acid.q,infer_bucket_sort_map_operators.q,infer_bucket_sort_merge.q,root_dir_external_table.q,infer_bucket_sort_dyn_part.q,udf_using.q,bucket_num_reducers_acid2.q] TestNonCatCallsWithCatalog - did not produce a TEST-*.xml file (likely timed out) (batchId=217) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[acid_nullscan] (batchId=68) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[acid_table_stats] (batchId=54) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[autoColumnStats_4] (batchId=13) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[llap_smb] (batchId=92) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[parquet_vectorization_0] (batchId=17) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[results_cache_invalidation2] (batchId=39) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[results_cache_invalidation] (batchId=37) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[results_cache_transactional] (batchId=16) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[row__id] (batchId=80) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[tez_join_hash] (batchId=54) org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[acid_bucket_pruning] (batchId=150) 
org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[acid_vectorization_original] (batchId=173) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[dynpart_sort_optimization_acid] (batchId=165) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[enforce_constraint_notnull] (batchId=158) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata] (batchId=169) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[materialized_view_create_rewrite_4] (batchId=157) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[materialized_view_create_rewrite_5] (batchId=154) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[results_cache_invalidation2] (batchId=163) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] (batchId=163) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[tez_smb_1] (batchId=171) org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver[acid_vectorization_original_tez] (batchId=106) org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver[explainanalyze_5] (batchId=105) org.apache.hadoop.hive.cli.TestNegativeMinimrCliDriver.testCliDriver[cluster_tasklog_retrieval] (batchId=98) org.apache.hadoop.hive.cli.TestNegativeMinimrCliDriver.testCliDriver[mapreduce_stack_trace] (batchId=98) org.apache.hadoop.hive.cli.TestNegativeMinimrCliDriver.testCliDriver[mapreduce_stack_trace_turnoff] (batchId=98) org.apache.hadoop.hive.cli.TestNegativeMinimrCliDriver.testCliDriver[minimr_broken_pipe] (batchId=98) org.apache.hadoop.hive.cli.control.TestDanglingQOuts.checkDanglingQOut (batchId=225) org.apache.hadoop.hive.ql.TestAcidOnTez.testAcidInsertWithRemoveUnion (batchId=228) org.apache.hadoop.hive.ql.TestAcidOnTez.testCtasTezUnion (batchId=228) org.apache.hadoop.hive.ql.TestAcidOnTez.testNonStandardConversion01 (batchId=228) org.apache.hadoop.hive.ql.TestMTQueries.testMTQueries1 
(batchId=232) org.apache.hive.beeline.TestBeeLineWithArgs.testQueryProgress (batchId=235) org.apache.hive.minikdc.TestJdbcWithMiniKdcCookie.testCookieNegative (batchId=254) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/10360/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/10360/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-10360/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 35 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12919853 - PreCommit-HIVE-Build > orcfiledump doesn't print hive.acid.version value > --
[jira] [Updated] (HIVE-19222) TestNegativeCliDriver tests are failing due to "java.lang.OutOfMemoryError: GC overhead limit exceeded"
[ https://issues.apache.org/jira/browse/HIVE-19222?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vineet Garg updated HIVE-19222: --- Fix Version/s: 3.0.0 > TestNegativeCliDriver tests are failing due to "java.lang.OutOfMemoryError: > GC overhead limit exceeded" > --- > > Key: HIVE-19222 > URL: https://issues.apache.org/jira/browse/HIVE-19222 > Project: Hive > Issue Type: Sub-task >Reporter: Aihua Xu >Assignee: Aihua Xu >Priority: Major > Fix For: 3.0.0, 3.1.0 > > Attachments: HIVE-19222.1.patch > > > TestNegativeCliDriver tests are failing with OOM recently. Not sure why. I > will try to increase the memory to test out. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19222) TestNegativeCliDriver tests are failing due to "java.lang.OutOfMemoryError: GC overhead limit exceeded"
[ https://issues.apache.org/jira/browse/HIVE-19222?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16445357#comment-16445357 ] Vineet Garg commented on HIVE-19222: Pushed to branch-3 as well > TestNegativeCliDriver tests are failing due to "java.lang.OutOfMemoryError: > GC overhead limit exceeded" > --- > > Key: HIVE-19222 > URL: https://issues.apache.org/jira/browse/HIVE-19222 > Project: Hive > Issue Type: Sub-task >Reporter: Aihua Xu >Assignee: Aihua Xu >Priority: Major > Fix For: 3.0.0, 3.1.0 > > Attachments: HIVE-19222.1.patch > > > TestNegativeCliDriver tests are failing with OOM recently. Not sure why. I > will try to increase the memory to test out. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Comment Edited] (HIVE-19214) High throughput ingest ORC format
[ https://issues.apache.org/jira/browse/HIVE-19214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16445354#comment-16445354 ] Prasanth Jayachandran edited comment on HIVE-19214 at 4/20/18 5:54 AM: --- Added the changes under config. PPD changes when no stripe stats are available for delta files. LLAP IO changes to deal with no row indexes and hence no positional information (treat this as entire column read) [~gopalv] can you please take another look? For streaming, will enable it by default in the new endpoint API. was (Author: prasanth_j): Added the changes under config. [~gopalv] can you please take another look? For streaming, will enable it by default in the new endpoint API. > High throughput ingest ORC format > - > > Key: HIVE-19214 > URL: https://issues.apache.org/jira/browse/HIVE-19214 > Project: Hive > Issue Type: Sub-task > Components: Streaming >Affects Versions: 3.0.0, 3.1.0 >Reporter: Gopal V >Assignee: Prasanth Jayachandran >Priority: Major > Attachments: HIVE-19214.1.patch, HIVE-19214.2.patch, > HIVE-19214.3.patch > > > Create delta files with all ORC overhead disabled (no index, no compression, > no dictionary). Compactor will recreate the orc files with index, compression > and dictionary encoding. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19214) High throughput ingest ORC format
[ https://issues.apache.org/jira/browse/HIVE-19214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16445354#comment-16445354 ] Prasanth Jayachandran commented on HIVE-19214: -- Added the changes under config. [~gopalv] can you please take another look? For streaming, will enable it by default in the new endpoint API. > High throughput ingest ORC format > - > > Key: HIVE-19214 > URL: https://issues.apache.org/jira/browse/HIVE-19214 > Project: Hive > Issue Type: Sub-task > Components: Streaming >Affects Versions: 3.0.0, 3.1.0 >Reporter: Gopal V >Assignee: Prasanth Jayachandran >Priority: Major > Attachments: HIVE-19214.1.patch, HIVE-19214.2.patch, > HIVE-19214.3.patch > > > Create delta files with all ORC overhead disabled (no index, no compression, > no dictionary). Compactor will recreate the orc files with index, compression > and dictionary encoding. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
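The "ORC overhead disabled" delta layout described in the issue maps naturally onto ORC table properties. A hedged sketch (the property names below come from standard ORC configuration, assumed here rather than taken from this patch, which gates the behavior behind a config instead):

```sql
-- Illustrative only: high-throughput ingest layout via ORC table properties.
CREATE TABLE ingest_target (id BIGINT, msg STRING)
STORED AS ORC
TBLPROPERTIES (
  "transactional" = "true",
  "orc.compress" = "NONE",               -- no compression
  "orc.create.index" = "false",          -- no row index
  "orc.dictionary.key.threshold" = "0"   -- no dictionary encoding
);
```

Compaction would then rewrite these cheap delta files into fully indexed, compressed, dictionary-encoded ORC, as the description says.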
[jira] [Updated] (HIVE-19248) Hive replication cause file copy failures if HDFS block size differs across clusters
[ https://issues.apache.org/jira/browse/HIVE-19248?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sankar Hariappan updated HIVE-19248: Description: Hive replication uses Hadoop distcp to copy files from primary to replica warehouse. If the HDFS block size is different across clusters, it cause file copy failures. {code:java} 2018-04-09 14:32:06,690 ERROR [main] org.apache.hadoop.tools.mapred.CopyMapper: Failure in copying hdfs://chelsea/apps/hive/warehouse/tpch_flat_orc_1000.db/customer/000259_0 to hdfs://marilyn/apps/hive/warehouse/tpch_flat_orc_1000.db/customer/.hive-staging_hive_2018-04-09_14-30-45_723_7153496419225102220-2/-ext-10001/000259_0 java.io.IOException: File copy failed: hdfs://chelsea/apps/hive/warehouse/tpch_flat_orc_1000.db/customer/000259_0 --> hdfs://marilyn/apps/hive/warehouse/tpch_flat_orc_1000.db/customer/.hive-staging_hive_2018-04-09_14-30-45_723_7153496419225102220-2/-ext-10001/000259_0 at org.apache.hadoop.tools.mapred.CopyMapper.copyFileWithRetry(CopyMapper.java:299) at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:266) at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:52) at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:146) at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:787) at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341) at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:170) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1869) at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:164) Caused by: java.io.IOException: Couldn't run retriable-command: Copying hdfs://chelsea/apps/hive/warehouse/tpch_flat_orc_1000.db/customer/000259_0 to hdfs://marilyn/apps/hive/warehouse/tpch_flat_orc_1000.db/customer/.hive-staging_hive_2018-04-09_14-30-45_723_7153496419225102220-2/-ext-10001/000259_0 at 
org.apache.hadoop.tools.util.RetriableCommand.execute(RetriableCommand.java:101) at org.apache.hadoop.tools.mapred.CopyMapper.copyFileWithRetry(CopyMapper.java:296) ... 10 more Caused by: java.io.IOException: Check-sum mismatch between hdfs://chelsea/apps/hive/warehouse/tpch_flat_orc_1000.db/customer/000259_0 and hdfs://marilyn/apps/hive/warehouse/tpch_flat_orc_1000.db/customer/.hive-staging_hive_2018-04-09_14-30-45_723_7153496419225102220-2/-ext-10001/.distcp.tmp.attempt_1522833620762_4416_m_00_0. Source and target differ in block-size. Use -pb to preserve block-sizes during copy. Alternatively, skip checksum-checks altogether, using -skipCrc. (NOTE: By skipping checksums, one runs the risk of masking data-corruption during file-transfer.) at org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.compareCheckSums(RetriableFileCopyCommand.java:212) at org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.doCopy(RetriableFileCopyCommand.java:130) at org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.doExecute(RetriableFileCopyCommand.java:99) at org.apache.hadoop.tools.util.RetriableCommand.execute(RetriableCommand.java:87) ... 11 more {code} Also, REPL LOAD returns success even if the distcp jobs failed. So, two things need to be done. # Set proper options for distcp to preserve the block size and skip the CRC check. Use options such as *-pugpbx, -update.* # If the copy of multiple files fails for some reason, check whether any files were completely copied by verifying the checksum and file size, and skip those from retry. was: Hive replication uses Hadoop distcp to copy files from primary to replica warehouse. If the HDFS block size is different across clusters, it causes file copy failures. 
{code} 2018-04-09 14:32:06,690 ERROR [main] org.apache.hadoop.tools.mapred.CopyMapper: Failure in copying hdfs://chelsea/apps/hive/warehouse/tpch_flat_orc_1000.db/customer/000259_0 to hdfs://marilyn/apps/hive/warehouse/tpch_flat_orc_1000.db/customer/.hive-staging_hive_2018-04-09_14-30-45_723_7153496419225102220-2/-ext-10001/000259_0 java.io.IOException: File copy failed: hdfs://chelsea/apps/hive/warehouse/tpch_flat_orc_1000.db/customer/000259_0 --> hdfs://marilyn/apps/hive/warehouse/tpch_flat_orc_1000.db/customer/.hive-staging_hive_2018-04-09_14-30-45_723_7153496419225102220-2/-ext-10001/000259_0 at org.apache.hadoop.tools.mapred.CopyMapper.copyFileWithRetry(CopyMapper.java:299) at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:266) at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:52) at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:146) at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:787) at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341) at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:170) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(S
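The second point above — verifying file size and checksum so that already-completed copies are skipped on retry — can be sketched as follows. This is an illustrative, self-contained Java sketch using local files and CRC32; Hive/distcp actually compare HDFS-level file checksums, and the method name `alreadyCopied` is hypothetical:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.zip.CRC32;

public class CopyVerifier {
    // Returns true when the target already holds a complete copy of the
    // source, so a retried copy job can skip it. Size is compared first
    // because it is cheap; the checksum catches same-size partial writes.
    static boolean alreadyCopied(Path source, Path target) throws IOException {
        if (!Files.exists(target)) {
            return false;
        }
        if (Files.size(source) != Files.size(target)) {
            return false;
        }
        return crc32(source) == crc32(target);
    }

    static long crc32(Path file) throws IOException {
        CRC32 crc = new CRC32();
        crc.update(Files.readAllBytes(file));
        return crc.getValue();
    }

    public static void main(String[] args) throws IOException {
        Path src = Files.createTempFile("src", ".dat");
        Path full = Files.createTempFile("full", ".dat");
        Path partial = Files.createTempFile("partial", ".dat");
        Files.write(src, "hello world".getBytes());
        Files.write(full, "hello world".getBytes());
        Files.write(partial, "hello".getBytes());
        System.out.println(alreadyCopied(src, full));     // true
        System.out.println(alreadyCopied(src, partial));  // false
    }
}
```

Note that the `-pb` flag mentioned in the stack trace makes the checksum comparison meaningful in the first place: without preserved block sizes, HDFS file checksums of identical bytes can differ across clusters.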
[jira] [Updated] (HIVE-19214) High throughput ingest ORC format
[ https://issues.apache.org/jira/browse/HIVE-19214?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Prasanth Jayachandran updated HIVE-19214: - Attachment: HIVE-19214.3.patch > High throughput ingest ORC format > - > > Key: HIVE-19214 > URL: https://issues.apache.org/jira/browse/HIVE-19214 > Project: Hive > Issue Type: Sub-task > Components: Streaming >Affects Versions: 3.0.0, 3.1.0 >Reporter: Gopal V >Assignee: Prasanth Jayachandran >Priority: Major > Attachments: HIVE-19214.1.patch, HIVE-19214.2.patch, > HIVE-19214.3.patch > > > Create delta files with all ORC overhead disabled (no index, no compression, > no dictionary). Compactor will recreate the orc files with index, compression > and dictionary encoding. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19130) NPE is thrown when REPL LOAD applied drop partition event.
[ https://issues.apache.org/jira/browse/HIVE-19130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sankar Hariappan updated HIVE-19130: Fix Version/s: (was: 3.1.0) > NPE is thrown when REPL LOAD applied drop partition event. > -- > > Key: HIVE-19130 > URL: https://issues.apache.org/jira/browse/HIVE-19130 > Project: Hive > Issue Type: Bug > Components: HiveServer2, repl >Affects Versions: 3.0.0 >Reporter: Sankar Hariappan >Assignee: Sankar Hariappan >Priority: Major > Labels: DR, Replication, pull-request-available > Fix For: 3.0.0 > > Attachments: HIVE-19130.01.patch > > > During incremental replication, if we split the events batch as follows, then > the REPL LOAD on second batch throws NPE. > Batch-1: CREATE_TABLE(t1) -> ADD_PARTITION(t1.p1) -> DROP_PARTITION (t1.p1) > Batch-2: DROP_TABLE(t1) -> CREATE_TABLE(t1) -> ADD_PARTITION(t1.p1) -> > DROP_PARTITION (t1.p1) > {code} > 2018-04-05 16:20:36,531 ERROR [HiveServer2-Background-Pool: Thread-107044]: > metadata.Hive (Hive.java:getTable(1219)) - Table catalog_sales_new not found: > new5_tpcds_real_bin_partitioned_orc_1000.catalog_sales_new table not found > 2018-04-05 16:20:36,538 ERROR [HiveServer2-Background-Pool: Thread-107044]: > exec.DDLTask (DDLTask.java:failed(540)) - > org.apache.hadoop.hive.ql.metadata.HiveException > at > org.apache.hadoop.hive.ql.exec.DDLTask.dropPartitions(DDLTask.java:4016) > at > org.apache.hadoop.hive.ql.exec.DDLTask.dropTableOrPartitions(DDLTask.java:3983) > at org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:341) > at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:162) > at > org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:89) > at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1765) > at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1506) > at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1303) > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1170) > at 
org.apache.hadoop.hive.ql.Driver.run(Driver.java:1165) > at > org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:197) > at > org.apache.hive.service.cli.operation.SQLOperation.access$300(SQLOperation.java:76) > at > org.apache.hive.service.cli.operation.SQLOperation$2$1.run(SQLOperation.java:255) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:422) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1869) > at > org.apache.hive.service.cli.operation.SQLOperation$2.run(SQLOperation.java:266) > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > at java.util.concurrent.FutureTask.run(FutureTask.java:266) > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > at java.util.concurrent.FutureTask.run(FutureTask.java:266) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) > at java.lang.Thread.run(Thread.java:748) > Caused by: java.lang.NullPointerException > at > org.apache.hadoop.hive.ql.metadata.Hive.getPartitionsByExpr(Hive.java:2613) > at > org.apache.hadoop.hive.ql.exec.DDLTask.dropPartitions(DDLTask.java:4008) > ... 23 more > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
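The NPE occurs because {{Hive.getPartitionsByExpr}} is handed a null table object when the DROP_PARTITION event is replayed after the table was dropped at the start of the second batch. A minimal sketch of the defensive fix — treating a drop-partition event on a missing table as a no-op — using hypothetical stand-in types rather than Hive's actual classes:

```java
import java.util.Collections;
import java.util.List;

public class DropPartitionReplay {
    // Minimal stand-ins for Hive's metadata objects; names are illustrative.
    static class Table {
        final String name;
        Table(String n) { name = n; }
    }

    interface Metastore {
        Table getTable(String name); // returns null when the table is absent
    }

    // A DROP_PARTITION event replayed after the table itself was dropped
    // (and before it is re-created) should be skipped instead of
    // dereferencing a null Table and throwing NPE.
    static List<String> partitionsToDrop(Metastore ms, String table,
                                         List<String> requested) {
        Table t = ms.getTable(table);
        if (t == null) {
            // Table no longer exists: nothing to drop, skip the event.
            return Collections.emptyList();
        }
        return requested;
    }

    public static void main(String[] args) {
        Metastore ms = name -> name.equals("t1") ? new Table("t1") : null;
        System.out.println(partitionsToDrop(ms, "t1", List.of("p1")));      // [p1]
        System.out.println(partitionsToDrop(ms, "t_gone", List.of("p1"))); // []
    }
}
```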
[jira] [Commented] (HIVE-19164) TestMetastoreVersion failures
[ https://issues.apache.org/jira/browse/HIVE-19164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16445352#comment-16445352 ] Vineet Garg commented on HIVE-19164: Pushed to branch-3 as well since this fixes a couple of metastore tests > TestMetastoreVersion failures > - > > Key: HIVE-19164 > URL: https://issues.apache.org/jira/browse/HIVE-19164 > Project: Hive > Issue Type: Sub-task > Components: Test >Reporter: Vineet Garg >Assignee: Vihang Karajgaonkar >Priority: Major > Fix For: 3.0.0, 3.1.0 > > Attachments: HIVE-19164.02.patch, HIVE-19164.patch > > > The following tests are failing consistently and are reproducible on master: > * testVersionMatching > * testMetastoreVersion > I tried debugging it and found that ObjectStore.getMSSchemaVersion() throws > an exception {{No matching version found}}. > To fetch the schema version this method executes {code:sql} SELECT FROM > org.apache.hadoop.hive.metastore.model.MVersionTable {code} but for whatever > reason execution returns an empty result set, resulting in the exception. Both > test failures are due to this exception. I tried debugging the query > execution but didn't really get anywhere with it. I suspect this could be due > to recent metastore changes. I tried reproducing this outside the test but with > no success. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19164) TestMetastoreVersion failures
[ https://issues.apache.org/jira/browse/HIVE-19164?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vineet Garg updated HIVE-19164: --- Fix Version/s: 3.0.0 > TestMetastoreVersion failures > - > > Key: HIVE-19164 > URL: https://issues.apache.org/jira/browse/HIVE-19164 > Project: Hive > Issue Type: Sub-task > Components: Test >Reporter: Vineet Garg >Assignee: Vihang Karajgaonkar >Priority: Major > Fix For: 3.0.0, 3.1.0 > > Attachments: HIVE-19164.02.patch, HIVE-19164.patch > > > The following tests are failing consistently and are reproducible on master: > * testVersionMatching > * testMetastoreVersion > I tried debugging it and found that ObjectStore.getMSSchemaVersion() throws > an exception {{No matching version found}}. > To fetch the schema version this method executes {code:sql} SELECT FROM > org.apache.hadoop.hive.metastore.model.MVersionTable {code} but for whatever > reason execution returns an empty result set, resulting in the exception. Both > test failures are due to this exception. I tried debugging the query > execution but didn't really get anywhere with it. I suspect this could be due > to recent metastore changes. I tried reproducing this outside the test but with > no success. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19255) Hive doesn't support column list specification in INSERT into statements with distribute by/Cluster by
[ https://issues.apache.org/jira/browse/HIVE-19255?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16445351#comment-16445351 ] Riju Trivedi commented on HIVE-19255: - The query works using a WITH clause too: with t1 as (SELECT col1, col2,col3 FROM source_table DISTRIBUTE BY col1 SORT BY col1,col2) INSERT into TABLE target_table_2 partition (col3) (col1, col2,col3) Select * from t1 ; > Hive doesn't support column list specification in INSERT into statements with > distribute by/Cluster by > --- > > Key: HIVE-19255 > URL: https://issues.apache.org/jira/browse/HIVE-19255 > Project: Hive > Issue Type: Bug > Components: Parser, Query Processor, SQL >Affects Versions: 1.2.0 >Reporter: Riju Trivedi >Priority: Major > > INSERT into TABLE target_table_2 partition (col3) (col1, col2,col3) > SELECT col1,col2,col3 > FROM source_table > DISTRIBUTE BY col1 > SORT BY col1,col2; > This INSERT statement throws > Error: Error while compiling statement: FAILED: SemanticException [Error > 10004]: Line 4:14 Invalid table alias or column reference 'col1' > The query executes successfully with the workaround below: > INSERT into TABLE target_table_2 partition (col3) (col1, col2,col3) > select * From (SELECT col1, col2,col3 > FROM source_table > DISTRIBUTE BY col1 > SORT BY col1,col2) a; -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19240) backport HIVE-17645 to 3.0
[ https://issues.apache.org/jira/browse/HIVE-19240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vineet Garg updated HIVE-19240: --- Fix Version/s: 3.0.0 > backport HIVE-17645 to 3.0 > -- > > Key: HIVE-19240 > URL: https://issues.apache.org/jira/browse/HIVE-19240 > Project: Hive > Issue Type: Bug > Components: Transactions >Affects Versions: 3.0.0 >Reporter: Eugene Koifman >Assignee: Eugene Koifman >Priority: Major > Fix For: 3.0.0 > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19240) backport HIVE-17645 to 3.0
[ https://issues.apache.org/jira/browse/HIVE-19240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16445348#comment-16445348 ] Vineet Garg commented on HIVE-19240: [~ekoifman] Should I go ahead and cherry-pick HIVE-17645 or are you waiting on something? > backport HIVE-17645 to 3.0 > -- > > Key: HIVE-19240 > URL: https://issues.apache.org/jira/browse/HIVE-19240 > Project: Hive > Issue Type: Bug > Components: Transactions >Affects Versions: 3.0.0 >Reporter: Eugene Koifman >Assignee: Eugene Koifman >Priority: Major > Fix For: 3.0.0 > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19157) Assert that Insert into Druid Table fails if the publishing of metadata by HS2 fails
[ https://issues.apache.org/jira/browse/HIVE-19157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16445345#comment-16445345 ] Vineet Garg commented on HIVE-19157: [~ashutoshc] I am unable to find the corresponding commit for this JIRA. Can you point me to it? > Assert that Insert into Druid Table fails if the publishing of metadata by > HS2 fails > > > Key: HIVE-19157 > URL: https://issues.apache.org/jira/browse/HIVE-19157 > Project: Hive > Issue Type: Bug >Reporter: slim bouguerra >Assignee: slim bouguerra >Priority: Major > Fix For: 3.1.0 > > Attachments: HIVE-19157.2.patch, HIVE-19157.3.patch, > HIVE-19157.4.patch, HIVE-19157.patch > > > The usual workflow of loading data into Druid relies on the fact that HS2 is > able to load segment metadata from HDFS that is produced by LLAP/Tez workers. > In some cases where HS2 is not able to perform `ls` on the HDFS path, the > insert into query will return success and will not insert any data. > This bug was introduced in function {code} > org.apache.hadoop.hive.druid.DruidStorageHandlerUtils#getCreatedSegments{code} > when we added the feature to allow creating empty tables. > {code} > try { > fss = fs.listStatus(taskDir); > } catch (FileNotFoundException e) { > // This is a CREATE TABLE statement or query executed for CTAS/INSERT > // did not produce any result. We do not need to do anything, this is > // expected behavior. > return publishedSegmentsBuilder.build(); > } > {code} > I am still looking for a way to fix this, [~jcamachorodriguez]/[~ashutoshc] > any idea what is the best way to detect that it is an empty create table > statement? > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
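One possible way to tell a genuinely empty result apart from a failed listing is to swallow the missing task directory only when the surrounding staging area is itself reachable, and surface every other listing failure. The sketch below uses local-filesystem stand-ins; the method name and the exact directories checked are illustrative assumptions, not Hive's actual API:

```java
import java.io.IOException;
import java.nio.file.*;
import java.util.ArrayList;
import java.util.List;

public class SegmentScan {
    // Distinguish "the query produced no segments" from "listing failed":
    // only the missing-taskDir case is an expected empty result; an
    // unreachable staging area must fail the INSERT instead of silently
    // committing nothing.
    static List<Path> listSegments(Path stagingDir, Path taskDir) throws IOException {
        List<Path> segments = new ArrayList<>();
        if (!Files.isDirectory(stagingDir)) {
            // We cannot even see the staging area: surface the error
            // rather than treating it as an empty result.
            throw new IOException("cannot access staging dir " + stagingDir);
        }
        if (!Files.exists(taskDir)) {
            // Genuine empty CREATE TABLE / query with no output rows.
            return segments;
        }
        try (DirectoryStream<Path> ds = Files.newDirectoryStream(taskDir)) {
            for (Path p : ds) {
                segments.add(p);
            }
        }
        return segments;
    }

    public static void main(String[] args) throws IOException {
        Path staging = Files.createTempDirectory("staging");
        // Staging dir exists but no task output: an empty, successful result.
        System.out.println(listSegments(staging, staging.resolve("task")).size()); // 0
    }
}
```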
[jira] [Issue Comment Deleted] (HIVE-19009) Retain and use runtime statistics during hs2 lifetime
[ https://issues.apache.org/jira/browse/HIVE-19009?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vineet Garg updated HIVE-19009: --- Comment: was deleted (was: [~ashutoshc] I am unable to find the commit for this jira in master. Can you point me to the commit?) > Retain and use runtime statistics during hs2 lifetime > - > > Key: HIVE-19009 > URL: https://issues.apache.org/jira/browse/HIVE-19009 > Project: Hive > Issue Type: Sub-task >Reporter: Zoltan Haindrich >Assignee: Zoltan Haindrich >Priority: Major > Fix For: 3.1.0 > > Attachments: HIVE-19009.01.patch, HIVE-19009.02.patch, > HIVE-19009.03.patch, HIVE-19009.04.patch, HIVE-19009.05.patch, > HIVE-19009.06.patch, HIVE-19009.06.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19009) Retain and use runtime statistics during hs2 lifetime
[ https://issues.apache.org/jira/browse/HIVE-19009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16445344#comment-16445344 ] Vineet Garg commented on HIVE-19009: [~ashutoshc] I am unable to find the commit for this jira in master. Can you point me to the commit? > Retain and use runtime statistics during hs2 lifetime > - > > Key: HIVE-19009 > URL: https://issues.apache.org/jira/browse/HIVE-19009 > Project: Hive > Issue Type: Sub-task >Reporter: Zoltan Haindrich >Assignee: Zoltan Haindrich >Priority: Major > Fix For: 3.1.0 > > Attachments: HIVE-19009.01.patch, HIVE-19009.02.patch, > HIVE-19009.03.patch, HIVE-19009.04.patch, HIVE-19009.05.patch, > HIVE-19009.06.patch, HIVE-19009.06.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19155) Day time saving cause Druid inserts to fail with org.apache.hive.druid.io.druid.java.util.common.UOE: Cannot add overlapping segments
[ https://issues.apache.org/jira/browse/HIVE-19155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16445342#comment-16445342 ] Vineet Garg commented on HIVE-19155: Pushed to branch-3 > Day time saving cause Druid inserts to fail with > org.apache.hive.druid.io.druid.java.util.common.UOE: Cannot add overlapping > segments > - > > Key: HIVE-19155 > URL: https://issues.apache.org/jira/browse/HIVE-19155 > Project: Hive > Issue Type: Bug > Components: Druid integration >Reporter: slim bouguerra >Assignee: slim bouguerra >Priority: Major > Fix For: 3.0.0, 3.1.0 > > Attachments: HIVE-19155.patch > > > If you try to insert data around the daylight saving time hour the query > fails with following exception > {code} > 2018-04-10T11:24:58,836 ERROR [065fdaa2-85f9-4e49-adaf-3dc14d51be90 main] > exec.DDLTask: Failed > org.apache.hadoop.hive.ql.metadata.HiveException: > org.apache.hive.druid.io.druid.java.util.common.UOE: Cannot add overlapping > segments [2015-03-08T05:00:00.000Z/2015-03-09T05:00:00.000Z and > 2015-03-09T04:00:00.000Z/2015-03-10T04:00:00.000Z] with the same version > [2018-04-10T11:24:48.388-07:00] > at org.apache.hadoop.hive.ql.metadata.Hive.createTable(Hive.java:914) > ~[hive-exec-3.1.0-SNAPSHOT.jar:3.1.0-SNAPSHOT] > at org.apache.hadoop.hive.ql.metadata.Hive.createTable(Hive.java:919) > ~[hive-exec-3.1.0-SNAPSHOT.jar:3.1.0-SNAPSHOT] > at > org.apache.hadoop.hive.ql.exec.DDLTask.createTable(DDLTask.java:4831) > [hive-exec-3.1.0-SNAPSHOT.jar:3.1.0-SNAPSHOT] > at org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:394) > [hive-exec-3.1.0-SNAPSHOT.jar:3.1.0-SNAPSHOT] > at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:205) > [hive-exec-3.1.0-SNAPSHOT.jar:3.1.0-SNAPSHOT] > at > org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:97) > [hive-exec-3.1.0-SNAPSHOT.jar:3.1.0-SNAPSHOT] > at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2443) > 
[hive-exec-3.1.0-SNAPSHOT.jar:3.1.0-SNAPSHOT] > at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:2114) > [hive-exec-3.1.0-SNAPSHOT.jar:3.1.0-SNAPSHOT] > at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1797) > [hive-exec-3.1.0-SNAPSHOT.jar:3.1.0-SNAPSHOT] > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1538) > [hive-exec-3.1.0-SNAPSHOT.jar:3.1.0-SNAPSHOT] > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1532) > [hive-exec-3.1.0-SNAPSHOT.jar:3.1.0-SNAPSHOT] > at > org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:157) > [hive-exec-3.1.0-SNAPSHOT.jar:3.1.0-SNAPSHOT] > at > org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:204) > [hive-exec-3.1.0-SNAPSHOT.jar:3.1.0-SNAPSHOT] > at > org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:239) > [hive-cli-3.1.0-SNAPSHOT.jar:?] > at > org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:188) > [hive-cli-3.1.0-SNAPSHOT.jar:?] > at > org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:402) > [hive-cli-3.1.0-SNAPSHOT.jar:?] > at > org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:335) > [hive-cli-3.1.0-SNAPSHOT.jar:?] > at > org.apache.hadoop.hive.ql.QTestUtil.executeClientInternal(QTestUtil.java:1455) > [hive-it-util-3.1.0-SNAPSHOT.jar:3.1.0-SNAPSHOT] > at > org.apache.hadoop.hive.ql.QTestUtil.executeClient(QTestUtil.java:1429) > [hive-it-util-3.1.0-SNAPSHOT.jar:3.1.0-SNAPSHOT] > at > org.apache.hadoop.hive.cli.control.CoreCliDriver.runTest(CoreCliDriver.java:177) > [hive-it-util-3.1.0-SNAPSHOT.jar:3.1.0-SNAPSHOT] > at > org.apache.hadoop.hive.cli.control.CliAdapter.runTest(CliAdapter.java:104) > [hive-it-util-3.1.0-SNAPSHOT.jar:3.1.0-SNAPSHOT] > at > org.apache.hadoop.hive.cli.TestMiniDruidCliDriver.testCliDriver(TestMiniDruidCliDriver.java:59) > [test-classes/:?] 
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > ~[?:1.8.0_92] > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > ~[?:1.8.0_92] > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > ~[?:1.8.0_92] > {code} > You can reproduce this using the following DDL > {code} > create database druid_test; > use druid_test; > create table test_table(`timecolumn` timestamp, `userid` string, `num_l` > float); > insert into test_table values ('2015-03-08 00:00:00', 'i1-start', 4); > insert into test_table values ('2015-03-08 23:59:59', 'i1-end', 1); > inser
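The overlapping segments in the error above come from the DST transition: local midnights on 2015-03-08 and 2015-03-09 map to UTC instants only 23 hours apart, so fixed 24-hour windows derived from them overlap. A small stand-alone java.time sketch, assuming an Eastern-US zone (which matches the 05:00Z/04:00Z boundaries in the log):

```java
import java.time.Duration;
import java.time.LocalDate;
import java.time.ZoneId;
import java.time.ZonedDateTime;

public class DstSegments {
    public static void main(String[] args) {
        ZoneId tz = ZoneId.of("America/New_York");
        // Local midnight on the two days around the 2015-03-08 DST
        // transition, converted to the UTC instants Druid segments use.
        ZonedDateTime d1 = LocalDate.of(2015, 3, 8).atStartOfDay(tz); // EST, UTC-5
        ZonedDateTime d2 = LocalDate.of(2015, 3, 9).atStartOfDay(tz); // EDT, UTC-4
        System.out.println(d1.toInstant()); // 2015-03-08T05:00:00Z
        System.out.println(d2.toInstant()); // 2015-03-09T04:00:00Z
        // The two "days" start only 23 hours apart in UTC, so 24-hour UTC
        // segments [05:00Z, +24h) and [04:00Z, +24h) overlap — exactly the
        // pair reported in the UOE.
        System.out.println(Duration.between(d1.toInstant(), d2.toInstant()).toHours()); // 23
    }
}
```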
[jira] [Updated] (HIVE-19155) Day time saving cause Druid inserts to fail with org.apache.hive.druid.io.druid.java.util.common.UOE: Cannot add overlapping segments
[ https://issues.apache.org/jira/browse/HIVE-19155?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vineet Garg updated HIVE-19155: --- Fix Version/s: 3.0.0 > Day time saving cause Druid inserts to fail with > org.apache.hive.druid.io.druid.java.util.common.UOE: Cannot add overlapping > segments > - > > Key: HIVE-19155 > URL: https://issues.apache.org/jira/browse/HIVE-19155 > Project: Hive > Issue Type: Bug > Components: Druid integration >Reporter: slim bouguerra >Assignee: slim bouguerra >Priority: Major > Fix For: 3.0.0, 3.1.0 > > Attachments: HIVE-19155.patch > > > If you try to insert data around the daylight saving time hour the query > fails with following exception > {code} > 2018-04-10T11:24:58,836 ERROR [065fdaa2-85f9-4e49-adaf-3dc14d51be90 main] > exec.DDLTask: Failed > org.apache.hadoop.hive.ql.metadata.HiveException: > org.apache.hive.druid.io.druid.java.util.common.UOE: Cannot add overlapping > segments [2015-03-08T05:00:00.000Z/2015-03-09T05:00:00.000Z and > 2015-03-09T04:00:00.000Z/2015-03-10T04:00:00.000Z] with the same version > [2018-04-10T11:24:48.388-07:00] > at org.apache.hadoop.hive.ql.metadata.Hive.createTable(Hive.java:914) > ~[hive-exec-3.1.0-SNAPSHOT.jar:3.1.0-SNAPSHOT] > at org.apache.hadoop.hive.ql.metadata.Hive.createTable(Hive.java:919) > ~[hive-exec-3.1.0-SNAPSHOT.jar:3.1.0-SNAPSHOT] > at > org.apache.hadoop.hive.ql.exec.DDLTask.createTable(DDLTask.java:4831) > [hive-exec-3.1.0-SNAPSHOT.jar:3.1.0-SNAPSHOT] > at org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:394) > [hive-exec-3.1.0-SNAPSHOT.jar:3.1.0-SNAPSHOT] > at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:205) > [hive-exec-3.1.0-SNAPSHOT.jar:3.1.0-SNAPSHOT] > at > org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:97) > [hive-exec-3.1.0-SNAPSHOT.jar:3.1.0-SNAPSHOT] > at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2443) > [hive-exec-3.1.0-SNAPSHOT.jar:3.1.0-SNAPSHOT] > at 
org.apache.hadoop.hive.ql.Driver.execute(Driver.java:2114) > [hive-exec-3.1.0-SNAPSHOT.jar:3.1.0-SNAPSHOT] > at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1797) > [hive-exec-3.1.0-SNAPSHOT.jar:3.1.0-SNAPSHOT] > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1538) > [hive-exec-3.1.0-SNAPSHOT.jar:3.1.0-SNAPSHOT] > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1532) > [hive-exec-3.1.0-SNAPSHOT.jar:3.1.0-SNAPSHOT] > at > org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:157) > [hive-exec-3.1.0-SNAPSHOT.jar:3.1.0-SNAPSHOT] > at > org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:204) > [hive-exec-3.1.0-SNAPSHOT.jar:3.1.0-SNAPSHOT] > at > org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:239) > [hive-cli-3.1.0-SNAPSHOT.jar:?] > at > org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:188) > [hive-cli-3.1.0-SNAPSHOT.jar:?] > at > org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:402) > [hive-cli-3.1.0-SNAPSHOT.jar:?] > at > org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:335) > [hive-cli-3.1.0-SNAPSHOT.jar:?] > at > org.apache.hadoop.hive.ql.QTestUtil.executeClientInternal(QTestUtil.java:1455) > [hive-it-util-3.1.0-SNAPSHOT.jar:3.1.0-SNAPSHOT] > at > org.apache.hadoop.hive.ql.QTestUtil.executeClient(QTestUtil.java:1429) > [hive-it-util-3.1.0-SNAPSHOT.jar:3.1.0-SNAPSHOT] > at > org.apache.hadoop.hive.cli.control.CoreCliDriver.runTest(CoreCliDriver.java:177) > [hive-it-util-3.1.0-SNAPSHOT.jar:3.1.0-SNAPSHOT] > at > org.apache.hadoop.hive.cli.control.CliAdapter.runTest(CliAdapter.java:104) > [hive-it-util-3.1.0-SNAPSHOT.jar:3.1.0-SNAPSHOT] > at > org.apache.hadoop.hive.cli.TestMiniDruidCliDriver.testCliDriver(TestMiniDruidCliDriver.java:59) > [test-classes/:?] 
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > ~[?:1.8.0_92] > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > ~[?:1.8.0_92] > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > ~[?:1.8.0_92] > {code} > You can reproduce this using the following DDL > {code} > create database druid_test; > use druid_test; > create table test_table(`timecolumn` timestamp, `userid` string, `num_l` > float); > insert into test_table values ('2015-03-08 00:00:00', 'i1-start', 4); > insert into test_table values ('2015-03-08 23:59:59', 'i1-end', 1); > insert into test_table values ('2015-03-09 00:00:00', 'i2
[jira] [Commented] (HIVE-17824) msck repair table should drop the missing partitions from metastore
[ https://issues.apache.org/jira/browse/HIVE-17824?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16445340#comment-16445340 ] Vihang Karajgaonkar commented on HIVE-17824: The patch is merged to 3.0.0 and 3.1.0 as of now. I think [~janulatha] wanted to merge this into branch-2 as well, which is why we haven't resolved this JIRA yet. > msck repair table should drop the missing partitions from metastore > --- > > Key: HIVE-17824 > URL: https://issues.apache.org/jira/browse/HIVE-17824 > Project: Hive > Issue Type: Improvement >Reporter: Vihang Karajgaonkar >Assignee: Janaki Lahorani >Priority: Major > Fix For: 3.0.0, 3.1.0 > > Attachments: HIVE-17824.1.patch, HIVE-17824.2.patch, > HIVE-17824.3.patch, HIVE-17824.4.patch > > > {{msck repair table }} is often used in environments where the new > partitions are loaded as directories on HDFS or S3 and users want to create > the missing partitions in bulk. However, currently it only supports addition > of missing partitions. If there are any partitions which are present in the > metastore but not on the FileSystem, it should also delete them so that it > truly repairs the table metadata. > We should be careful not to break backwards compatibility, so we should either > introduce a new config or keyword to add support for deleting unnecessary > partitions from the metastore. This way users who want the old behavior can > easily turn it off. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
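For reference, the keyword-based approach the description asks for is what landed in Hive 3 as an optional clause on MSCK, keeping the default behavior unchanged for backwards compatibility. A sketch of the resulting syntax (clause names per the Hive 3 DDL; verify against your Hive version):

```sql
-- Default behavior (equivalent to ADD PARTITIONS) is unchanged.
MSCK REPAIR TABLE my_table ADD PARTITIONS;   -- create partitions found on the filesystem
MSCK REPAIR TABLE my_table DROP PARTITIONS;  -- drop partitions missing from the filesystem
MSCK REPAIR TABLE my_table SYNC PARTITIONS;  -- do both
```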
[jira] [Updated] (HIVE-17824) msck repair table should drop the missing partitions from metastore
[ https://issues.apache.org/jira/browse/HIVE-17824?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vihang Karajgaonkar updated HIVE-17824: --- Fix Version/s: 3.1.0 3.0.0 > msck repair table should drop the missing partitions from metastore > --- > > Key: HIVE-17824 > URL: https://issues.apache.org/jira/browse/HIVE-17824 > Project: Hive > Issue Type: Improvement >Reporter: Vihang Karajgaonkar >Assignee: Janaki Lahorani >Priority: Major > Fix For: 3.0.0, 3.1.0 > > Attachments: HIVE-17824.1.patch, HIVE-17824.2.patch, > HIVE-17824.3.patch, HIVE-17824.4.patch > > > {{msck repair table }} is often used in environments where the new > partitions are loaded as directories on HDFS or S3 and users want to create > the missing partitions in bulk. However, currently it only supports addition > of missing partitions. If there are any partitions which are present in > metastore but not on the FileSystem, it should also delete them so that it > truly repairs the table metadata. > We should be careful not to break backwards compatibility so we should either > introduce a new config or keyword to add support to delete unnecessary > partitions from the metastore. This way users who want the old behavior can > easily turn it off. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18410) [Performance][Avro] Reading flat Avro tables is very expensive in Hive
[ https://issues.apache.org/jira/browse/HIVE-18410?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vineet Garg updated HIVE-18410: --- Fix Version/s: 3.0.0 > [Performance][Avro] Reading flat Avro tables is very expensive in Hive > -- > > Key: HIVE-18410 > URL: https://issues.apache.org/jira/browse/HIVE-18410 > Project: Hive > Issue Type: Improvement >Affects Versions: 1.2.1, 2.1.0, 3.0.0, 2.3.2 >Reporter: Ratandeep Ratti >Assignee: Ratandeep Ratti >Priority: Major > Fix For: 3.0.0, 3.1.0 > > Attachments: HIVE-18410.patch, HIVE-18410_1.patch, > HIVE-18410_2.patch, HIVE-18410_3.patch, profiling_with_patch.nps, > profiling_with_patch.png, profiling_without_patch.nps, > profiling_without_patch.png > > > There's a performance penalty when reading flat [no nested fields] Avro > tables. When reading the same flat dataset in Pig, it takes half the time. > On profiling, a lot of time is spent in > {{AvroDeserializer.deserializeSingleItemNullableUnion()}}. The bulk of the > time is spent in GenericData.get().resolveUnion(), which calls > GenericData.getSchemaName(Object datum), which does a lot of instanceof > checks. This could be simplified with performance benefits. An approach is > described in this patch which almost halves the runtime. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
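The instanceof-heavy path exists because {{resolveUnion()}} must discover the matching branch for an arbitrary union; for the common two-branch [null, T] union handled by {{deserializeSingleItemNullableUnion()}}, the branch is fully determined by a null test. An illustrative stand-alone sketch of that shortcut (not the actual committed patch):

```java
public class NullableUnion {
    // For a two-branch [null, T] (or [T, null]) union, the branch index can
    // be decided with a single null check instead of walking per-record
    // instanceof checks against the schema, which is what makes the hot
    // path cheap. `nullFirst` records which branch of the union is "null".
    static int resolveNullableUnionIndex(Object datum, boolean nullFirst) {
        if (datum == null) {
            return nullFirst ? 0 : 1;
        }
        return nullFirst ? 1 : 0;
    }

    public static void main(String[] args) {
        // Schema ["null","int"]: null datum -> branch 0, non-null -> branch 1.
        System.out.println(resolveNullableUnionIndex(null, true)); // 0
        System.out.println(resolveNullableUnionIndex(42, true));   // 1
    }
}
```

The `nullFirst` flag can be computed once per column when the deserializer is set up, so the per-record cost drops to a single branch.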
[jira] [Commented] (HIVE-18816) CREATE TABLE (ACID) doesn't work with TIMESTAMPLOCALTZ column type
[ https://issues.apache.org/jira/browse/HIVE-18816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16445338#comment-16445338 ] Vineet Garg commented on HIVE-18816: Pushed to branch-3 > CREATE TABLE (ACID) doesn't work with TIMESTAMPLOCALTZ column type > -- > > Key: HIVE-18816 > URL: https://issues.apache.org/jira/browse/HIVE-18816 > Project: Hive > Issue Type: Bug > Components: Transactions >Affects Versions: 3.0.0 >Reporter: Vineet Garg >Assignee: Igor Kryvenko >Priority: Major > Fix For: 3.0.0, 3.1.0 > > Attachments: HIVE-18816.01.patch, HIVE-18816.02.patch > > > *Reproducer* > {code:sql} > set hive.support.concurrency=true; > set hive.txn.manager=org.apache.hadoop.hive.ql.lockmgr.DbTxnManager; > CREATE TABLE table_acid(d int, tz timestamp with local time zone) > clustered by (d) into 2 buckets stored as orc TBLPROPERTIES > ('transactional'='true'); > {code} > *Error* > {code:sql} > FAILED: Execution Error, return code 1 from > org.apache.hadoop.hive.ql.exec.DDLTask. 
java.lang.IllegalArgumentException: > Unknown primitive type TIMESTAMPLOCALTZ > {code} > *Error stack* > {noformat} > org.apache.hadoop.hive.ql.metadata.HiveException: > java.lang.IllegalArgumentException: Unknown primitive type TIMESTAMPLOCALTZ > at org.apache.hadoop.hive.ql.metadata.Hive.createTable(Hive.java:906) > ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at > org.apache.hadoop.hive.ql.exec.DDLTask.createTable(DDLTask.java:4788) > [hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:389) > [hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:205) > [hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at > org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:97) > [hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2314) > [hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1985) > [hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1687) > [hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1438) > [hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1427) > [hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at > org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:239) > [hive-cli-3.0.0-SNAPSHOT.jar:?] > at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:188) > [hive-cli-3.0.0-SNAPSHOT.jar:?] > at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:402) > [hive-cli-3.0.0-SNAPSHOT.jar:?] > at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:335) > [hive-cli-3.0.0-SNAPSHOT.jar:?] 
> at > org.apache.hadoop.hive.ql.QTestUtil.executeClientInternal(QTestUtil.java:1345) > [hive-it-util-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at > org.apache.hadoop.hive.ql.QTestUtil.executeClient(QTestUtil.java:1319) > [hive-it-util-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at > org.apache.hadoop.hive.cli.control.CoreCliDriver.runTest(CoreCliDriver.java:173) > [hive-it-util-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at > org.apache.hadoop.hive.cli.control.CliAdapter.runTest(CliAdapter.java:104) > [hive-it-util-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at > org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver(TestMiniLlapLocalCliDriver.java:59) > [test-classes/:?] > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > ~[?:1.8.0_101] > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > ~[?:1.8.0_101] > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > ~[?:1.8.0_101] > at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_101] > at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) > [junit-4.11.jar:?] > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > [junit-4.11.jar:?] > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) > [junit-4.11.jar:?] > at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > [junit-4.11.jar:?] > at > org.apache.hadoop.hive.cli.control.CliAdapter$2$1.evaluate(CliAdapter.java:92) > [hive-it-util-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at org.junit.rules.RunRules.evaluate(RunRules.ja
[jira] [Updated] (HIVE-18816) CREATE TABLE (ACID) doesn't work with TIMESTAMPLOCALTZ column type
[ https://issues.apache.org/jira/browse/HIVE-18816?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vineet Garg updated HIVE-18816: --- Fix Version/s: 3.0.0 > CREATE TABLE (ACID) doesn't work with TIMESTAMPLOCALTZ column type > -- > > Key: HIVE-18816 > URL: https://issues.apache.org/jira/browse/HIVE-18816 > Project: Hive > Issue Type: Bug > Components: Transactions >Affects Versions: 3.0.0 >Reporter: Vineet Garg >Assignee: Igor Kryvenko >Priority: Major > Fix For: 3.0.0, 3.1.0 > > Attachments: HIVE-18816.01.patch, HIVE-18816.02.patch > > > *Reproducer* > {code:sql} > set hive.support.concurrency=true; > set hive.txn.manager=org.apache.hadoop.hive.ql.lockmgr.DbTxnManager; > CREATE TABLE table_acid(d int, tz timestamp with local time zone) > clustered by (d) into 2 buckets stored as orc TBLPROPERTIES > ('transactional'='true'); > {code} > *Error* > {code:sql} > FAILED: Execution Error, return code 1 from > org.apache.hadoop.hive.ql.exec.DDLTask. java.lang.IllegalArgumentException: > Unknown primitive type TIMESTAMPLOCALTZ > {code} > *Error stack* > {noformat} > org.apache.hadoop.hive.ql.metadata.HiveException: > java.lang.IllegalArgumentException: Unknown primitive type TIMESTAMPLOCALTZ > at org.apache.hadoop.hive.ql.metadata.Hive.createTable(Hive.java:906) > ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at > org.apache.hadoop.hive.ql.exec.DDLTask.createTable(DDLTask.java:4788) > [hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:389) > [hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:205) > [hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at > org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:97) > [hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:2314) > [hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at 
org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1985) > [hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1687) > [hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1438) > [hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1427) > [hive-exec-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at > org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:239) > [hive-cli-3.0.0-SNAPSHOT.jar:?] > at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:188) > [hive-cli-3.0.0-SNAPSHOT.jar:?] > at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:402) > [hive-cli-3.0.0-SNAPSHOT.jar:?] > at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:335) > [hive-cli-3.0.0-SNAPSHOT.jar:?] > at > org.apache.hadoop.hive.ql.QTestUtil.executeClientInternal(QTestUtil.java:1345) > [hive-it-util-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at > org.apache.hadoop.hive.ql.QTestUtil.executeClient(QTestUtil.java:1319) > [hive-it-util-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at > org.apache.hadoop.hive.cli.control.CoreCliDriver.runTest(CoreCliDriver.java:173) > [hive-it-util-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at > org.apache.hadoop.hive.cli.control.CliAdapter.runTest(CliAdapter.java:104) > [hive-it-util-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at > org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver(TestMiniLlapLocalCliDriver.java:59) > [test-classes/:?] 
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > ~[?:1.8.0_101] > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > ~[?:1.8.0_101] > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > ~[?:1.8.0_101] > at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_101] > at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) > [junit-4.11.jar:?] > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > [junit-4.11.jar:?] > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) > [junit-4.11.jar:?] > at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > [junit-4.11.jar:?] > at > org.apache.hadoop.hive.cli.control.CliAdapter$2$1.evaluate(CliAdapter.java:92) > [hive-it-util-3.0.0-SNAPSHOT.jar:3.0.0-SNAPSHOT] > at org.junit.rules.RunRules.evaluate(RunRules.java:20) > [junit-4.11.jar:?] > at org.junit.ru
[jira] [Commented] (HIVE-18410) [Performance][Avro] Reading flat Avro tables is very expensive in Hive
[ https://issues.apache.org/jira/browse/HIVE-18410?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16445339#comment-16445339 ] Vineet Garg commented on HIVE-18410: Pushed to branch-3 > [Performance][Avro] Reading flat Avro tables is very expensive in Hive > -- > > Key: HIVE-18410 > URL: https://issues.apache.org/jira/browse/HIVE-18410 > Project: Hive > Issue Type: Improvement >Affects Versions: 1.2.1, 2.1.0, 3.0.0, 2.3.2 >Reporter: Ratandeep Ratti >Assignee: Ratandeep Ratti >Priority: Major > Fix For: 3.1.0 > > Attachments: HIVE-18410.patch, HIVE-18410_1.patch, > HIVE-18410_2.patch, HIVE-18410_3.patch, profiling_with_patch.nps, > profiling_with_patch.png, profiling_without_patch.nps, > profiling_without_patch.png > > > There's a performance penalty when reading flat [no nested fields] Avro > tables. When reading the same flat dataset in Pig, it takes half the time. > On profiling, a lot of time is spent in > {{AvroDeserializer.deserializeSingleItemNullableUnion()}}. The bulk of the > time is spent in GenericData.get().resolveUnion(), which calls > GenericData.getSchemaName(Object datum), which does a lot of instanceof > checks. This could be simplified with performance benefits. An approach is > described in this patch which almost halves the runtime. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
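The profiling above points at per-record instanceof dispatch in GenericData.getSchemaName(). A minimal sketch of the general caching idea follows — illustrative only, with hypothetical names, not the actual HIVE-18410 patch: once a datum class has been resolved to a union branch, the answer can be reused instead of re-walking the instanceof chain for every record.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch (not the HIVE-18410 patch): cache the union branch a
// datum's class resolved to, so repeated records skip the instanceof chain.
class UnionBranchCache {
    private final Map<Class<?>, Integer> branchCache = new HashMap<>();

    // Simulates the per-record instanceof dispatch that profiling showed
    // to be hot in GenericData.getSchemaName(Object datum).
    private int resolveSlowly(Object datum) {
        if (datum instanceof Integer) return 1;
        if (datum instanceof Long) return 2;
        if (datum instanceof CharSequence) return 3;
        throw new IllegalArgumentException("unknown datum type: " + datum);
    }

    // Fast path: after the first record of a given class, resolution is a
    // single map lookup instead of a chain of instanceof checks.
    int resolve(Object datum) {
        if (datum == null) {
            return 0; // null always maps to the union's null branch
        }
        return branchCache.computeIfAbsent(datum.getClass(), c -> resolveSlowly(datum));
    }
}
```

For flat tables, where the set of datum classes is small and fixed, nearly every record hits the cached path.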
[jira] [Updated] (HIVE-19130) NPE is thrown when REPL LOAD applied drop partition event.
[ https://issues.apache.org/jira/browse/HIVE-19130?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vineet Garg updated HIVE-19130: --- Fix Version/s: 3.0.0 > NPE is thrown when REPL LOAD applied drop partition event. > -- > > Key: HIVE-19130 > URL: https://issues.apache.org/jira/browse/HIVE-19130 > Project: Hive > Issue Type: Bug > Components: HiveServer2, repl >Affects Versions: 3.0.0 >Reporter: Sankar Hariappan >Assignee: Sankar Hariappan >Priority: Major > Labels: DR, Replication, pull-request-available > Fix For: 3.0.0, 3.1.0 > > Attachments: HIVE-19130.01.patch > > > During incremental replication, if we split the events batch as follows, then > the REPL LOAD on second batch throws NPE. > Batch-1: CREATE_TABLE(t1) -> ADD_PARTITION(t1.p1) -> DROP_PARTITION (t1.p1) > Batch-2: DROP_TABLE(t1) -> CREATE_TABLE(t1) -> ADD_PARTITION(t1.p1) -> > DROP_PARTITION (t1.p1) > {code} > 2018-04-05 16:20:36,531 ERROR [HiveServer2-Background-Pool: Thread-107044]: > metadata.Hive (Hive.java:getTable(1219)) - Table catalog_sales_new not found: > new5_tpcds_real_bin_partitioned_orc_1000.catalog_sales_new table not found > 2018-04-05 16:20:36,538 ERROR [HiveServer2-Background-Pool: Thread-107044]: > exec.DDLTask (DDLTask.java:failed(540)) - > org.apache.hadoop.hive.ql.metadata.HiveException > at > org.apache.hadoop.hive.ql.exec.DDLTask.dropPartitions(DDLTask.java:4016) > at > org.apache.hadoop.hive.ql.exec.DDLTask.dropTableOrPartitions(DDLTask.java:3983) > at org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:341) > at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:162) > at > org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:89) > at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1765) > at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1506) > at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1303) > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1170) > at 
org.apache.hadoop.hive.ql.Driver.run(Driver.java:1165) > at > org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:197) > at > org.apache.hive.service.cli.operation.SQLOperation.access$300(SQLOperation.java:76) > at > org.apache.hive.service.cli.operation.SQLOperation$2$1.run(SQLOperation.java:255) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:422) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1869) > at > org.apache.hive.service.cli.operation.SQLOperation$2.run(SQLOperation.java:266) > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > at java.util.concurrent.FutureTask.run(FutureTask.java:266) > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > at java.util.concurrent.FutureTask.run(FutureTask.java:266) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) > at java.lang.Thread.run(Thread.java:748) > Caused by: java.lang.NullPointerException > at > org.apache.hadoop.hive.ql.metadata.Hive.getPartitionsByExpr(Hive.java:2613) > at > org.apache.hadoop.hive.ql.exec.DDLTask.dropPartitions(DDLTask.java:4008) > ... 23 more > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
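The trace above shows the NPE originating in Hive.getPartitionsByExpr after a table lookup failed ("Table catalog_sales_new not found"). A hedged sketch of the fix direction — illustrative names only, not Hive's actual API: when the table was already dropped by an earlier event in the batch, a replicated DROP_PARTITION should become a no-op rather than dereference a null table.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of idempotent drop-partition handling during REPL LOAD.
// If the table lookup returns null (e.g. the table was dropped by an earlier
// event in the same batch), the event is skipped instead of throwing an NPE.
class ReplDropPartitionSketch {
    // stand-in for the metastore: table name -> its partition names
    static final Map<String, List<String>> metastore = new HashMap<>();

    static boolean dropPartition(String table, String partition) {
        List<String> parts = metastore.get(table);
        if (parts == null) {
            // Table not found: no-op keeps replaying the event batch safe.
            return false;
        }
        return parts.remove(partition);
    }
}
```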
[jira] [Commented] (HIVE-17824) msck repair table should drop the missing partitions from metastore
[ https://issues.apache.org/jira/browse/HIVE-17824?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16445333#comment-16445333 ] Vineet Garg commented on HIVE-17824: [~vihangk1], [~janulatha] Can you add the appropriate fix version to this JIRA as well? > msck repair table should drop the missing partitions from metastore > --- > > Key: HIVE-17824 > URL: https://issues.apache.org/jira/browse/HIVE-17824 > Project: Hive > Issue Type: Improvement >Reporter: Vihang Karajgaonkar >Assignee: Janaki Lahorani >Priority: Major > Attachments: HIVE-17824.1.patch, HIVE-17824.2.patch, > HIVE-17824.3.patch, HIVE-17824.4.patch > > > {{msck repair table }} is often used in environments where the new > partitions are loaded as directories on HDFS or S3 and users want to create > the missing partitions in bulk. However, currently it only supports addition > of missing partitions. If there are any partitions which are present in > metastore but not on the FileSystem, it should also delete them so that it > truly repairs the table metadata. > We should be careful not to break backwards compatibility so we should either > introduce a new config or keyword to add support to delete unnecessary > partitions from the metastore. This way users who want the old behavior can > easily turn it off. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19089) Create/Replicate Allocate write-id event
[ https://issues.apache.org/jira/browse/HIVE-19089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16445331#comment-16445331 ] Vineet Garg commented on HIVE-19089: Please update the fix version to 3.0.0 if you are committing to branch-3 > Create/Replicate Allocate write-id event > > > Key: HIVE-19089 > URL: https://issues.apache.org/jira/browse/HIVE-19089 > Project: Hive > Issue Type: Sub-task > Components: repl, Transactions >Affects Versions: 3.0.0 >Reporter: mahesh kumar behera >Assignee: mahesh kumar behera >Priority: Major > Labels: ACID, DR, replication > Fix For: 3.0.0 > > Attachments: HIVE-19089.01.patch, HIVE-19089.02.patch, > HIVE-19089.03.patch, HIVE-19089.04.patch, HIVE-19089.05.patch, > HIVE-19089.06.patch, HIVE-19089.07.patch, HIVE-19089.08.patch > > > *EVENT_ALLOCATE_WRITE_ID* > *Source Warehouse:* > * Create new event type EVENT_ALLOCATE_WRITE_ID with related message format > etc. > * Capture this event when allocating a table write ID from the sequence table > by an ACID operation. > * Repl dump should read this event from EventNotificationTable and dump the > message. > *Target Warehouse:* > * Repl load should read the event from the dump and get the message. > * Validate if the source txn ID from the event is there in the source-target txn > ID map. If not there, just no-op the event. > * If valid, then allocate the table write ID from the sequence table > *Extend the listener notify event API to add two new parameters, dbconn and > sqlgenerator, to add the events to the notification_log table within the same > transaction* -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19089) Create/Replicate Allocate write-id event
[ https://issues.apache.org/jira/browse/HIVE-19089?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vineet Garg updated HIVE-19089: --- Fix Version/s: (was: 3.1.0) 3.0.0 > Create/Replicate Allocate write-id event > > > Key: HIVE-19089 > URL: https://issues.apache.org/jira/browse/HIVE-19089 > Project: Hive > Issue Type: Sub-task > Components: repl, Transactions >Affects Versions: 3.0.0 >Reporter: mahesh kumar behera >Assignee: mahesh kumar behera >Priority: Major > Labels: ACID, DR, replication > Fix For: 3.0.0 > > Attachments: HIVE-19089.01.patch, HIVE-19089.02.patch, > HIVE-19089.03.patch, HIVE-19089.04.patch, HIVE-19089.05.patch, > HIVE-19089.06.patch, HIVE-19089.07.patch, HIVE-19089.08.patch > > > *EVENT_ALLOCATE_WRITE_ID* > *Source Warehouse:* > * Create new event type EVENT_ALLOCATE_WRITE_ID with related message format > etc. > * Capture this event when allocating a table write ID from the sequence table > by an ACID operation. > * Repl dump should read this event from EventNotificationTable and dump the > message. > *Target Warehouse:* > * Repl load should read the event from the dump and get the message. > * Validate if the source txn ID from the event is there in the source-target txn > ID map. If not there, just no-op the event. > * If valid, then allocate the table write ID from the sequence table > *Extend the listener notify event API to add two new parameters, dbconn and > sqlgenerator, to add the events to the notification_log table within the same > transaction* -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (HIVE-19254) NumberFormatException in MetaStoreUtils.isFastStatsSame
[ https://issues.apache.org/jira/browse/HIVE-19254?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vihang Karajgaonkar reassigned HIVE-19254: -- > NumberFormatException in MetaStoreUtils.isFastStatsSame > --- > > Key: HIVE-19254 > URL: https://issues.apache.org/jira/browse/HIVE-19254 > Project: Hive > Issue Type: Bug >Reporter: Vihang Karajgaonkar >Assignee: Vihang Karajgaonkar >Priority: Major > > I see the following exception under some cases in the logs. This possibly > happens when you try to add empty partitions. > {noformat} > 2018-04-19T19:32:19,260 ERROR [pool-7-thread-7] metastore.RetryingHMSHandler: > MetaException(message:java.lang.NumberFormatException: null) > at > org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.newMetaException(HiveMetaStore.java:6824) > at > org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.alter_partitions_with_environment_context(HiveMetaStore.java:4864) > at > org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.alter_partitions(HiveMetaStore.java:4801) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:147) > at > org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:108) > at com.sun.proxy.$Proxy24.alter_partitions(Unknown Source) > at > org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$alter_partitions.getResult(ThriftHiveMetastore.java:16046) > at > org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$alter_partitions.getResult(ThriftHiveMetastore.java:16030) > at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39) > at > 
org.apache.hadoop.hive.metastore.TUGIBasedProcessor$1.run(TUGIBasedProcessor.java:111) > at > org.apache.hadoop.hive.metastore.TUGIBasedProcessor$1.run(TUGIBasedProcessor.java:107) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:422) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1965) > at > org.apache.hadoop.hive.metastore.TUGIBasedProcessor.process(TUGIBasedProcessor.java:119) > at > org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:286) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) > at java.lang.Thread.run(Thread.java:748) > Caused by: java.lang.NumberFormatException: null > at java.lang.Long.parseLong(Long.java:552) > at java.lang.Long.parseLong(Long.java:631) > at > org.apache.hadoop.hive.metastore.utils.MetaStoreUtils.isFastStatsSame(MetaStoreUtils.java:632) > at > org.apache.hadoop.hive.metastore.HiveAlterHandler.alterPartitions(HiveAlterHandler.java:743) > at > org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.alter_partitions_with_environment_context(HiveMetaStore.java:4827) > ... 21 more > {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
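The root cause in the trace above is {{Long.parseLong}} being handed a null string in MetaStoreUtils.isFastStatsSame, which surfaces as "NumberFormatException: null". A minimal sketch of the failure and a defensive parse — the helper name and fallback convention are illustrative, not Hive's actual fix:

```java
// Demonstrates why a missing stats parameter (e.g. for an empty partition)
// surfaces as "NumberFormatException: null", and a defensive parse that
// substitutes a sentinel instead of throwing. Names are hypothetical.
class FastStatsSketch {
    static long parseStat(String value, long fallback) {
        if (value == null) {
            return fallback; // stats never recorded: treat as "not comparable"
        }
        return Long.parseLong(value);
    }
}
```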
[jira] [Updated] (HIVE-19197) TestReplicationScenarios is flaky
[ https://issues.apache.org/jira/browse/HIVE-19197?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vineet Garg updated HIVE-19197: --- Fix Version/s: (was: 3.1.0) 3.0.0 > TestReplicationScenarios is flaky > - > > Key: HIVE-19197 > URL: https://issues.apache.org/jira/browse/HIVE-19197 > Project: Hive > Issue Type: Sub-task > Components: repl, Test >Affects Versions: 3.0.0 >Reporter: Ashutosh Chauhan >Assignee: Sankar Hariappan >Priority: Major > Labels: pull-request-available > Fix For: 3.0.0 > > Attachments: HIVE-19197.01.patch > > > Fails once in a while. > {code} > java.lang.AssertionError: expected:<1> but was:<0> > at org.junit.Assert.fail(Assert.java:88) > at org.junit.Assert.failNotEquals(Assert.java:743) > at org.junit.Assert.assertEquals(Assert.java:118) > at org.junit.Assert.assertEquals(Assert.java:555) > at org.junit.Assert.assertEquals(Assert.java:542) > at > org.apache.hadoop.hive.ql.parse.TestReplicationScenarios.verifyResults(TestReplicationScenarios.java:3629) > at > org.apache.hadoop.hive.ql.parse.TestReplicationScenarios.verifyRun(TestReplicationScenarios.java:3711) > at > org.apache.hadoop.hive.ql.parse.TestReplicationScenarios.verifyRun(TestReplicationScenarios.java:3706) > at > org.apache.hadoop.hive.ql.parse.TestReplicationScenarios.verifyAndReturnTblReplStatus(TestReplicationScenarios.java:3600) > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19196) TestTriggersMoveWorkloadManager is flaky
[ https://issues.apache.org/jira/browse/HIVE-19196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vineet Garg updated HIVE-19196: --- Fix Version/s: 3.0.0 > TestTriggersMoveWorkloadManager is flaky > > > Key: HIVE-19196 > URL: https://issues.apache.org/jira/browse/HIVE-19196 > Project: Hive > Issue Type: Sub-task > Components: Test >Reporter: Ashutosh Chauhan >Assignee: Prasanth Jayachandran >Priority: Major > Fix For: 3.0.0 > > Attachments: HIVE-19196.1.patch > > > This is a flaky test which randomly fails. Consider improving its stability. > {code} > org.apache.hive.jdbc.TestTriggersMoveWorkloadManager.testTriggerMoveConflictKill > Failing for the past 1 build (Since Failed#10161 ) > Took 2.4 sec. > Error Message > '"eventType" : "GET"' expected in STDERR capture, but not found. > Stacktrace > java.lang.AssertionError: '"eventType" : "GET"' expected in STDERR capture, > but not found. > at org.junit.Assert.fail(Assert.java:88) > at org.junit.Assert.assertTrue(Assert.java:41) > at > org.apache.hive.jdbc.AbstractJdbcTriggersTest.runQueryWithTrigger(AbstractJdbcTriggersTest.java:169) > at > org.apache.hive.jdbc.TestTriggersMoveWorkloadManager.testTriggerMoveConflictKill(TestTriggersMoveWorkloadManager.java:261) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) > at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > at > 
org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74) > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19196) TestTriggersMoveWorkloadManager is flaky
[ https://issues.apache.org/jira/browse/HIVE-19196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16445313#comment-16445313 ] Vineet Garg commented on HIVE-19196: Ptest queue was erased so manually started a job to get ptest run. > TestTriggersMoveWorkloadManager is flaky > > > Key: HIVE-19196 > URL: https://issues.apache.org/jira/browse/HIVE-19196 > Project: Hive > Issue Type: Sub-task > Components: Test >Reporter: Ashutosh Chauhan >Assignee: Prasanth Jayachandran >Priority: Major > Attachments: HIVE-19196.1.patch > > > This is a flaky test which randomly fails. Consider improving its stability. > {code} > org.apache.hive.jdbc.TestTriggersMoveWorkloadManager.testTriggerMoveConflictKill > Failing for the past 1 build (Since Failed#10161 ) > Took 2.4 sec. > Error Message > '"eventType" : "GET"' expected in STDERR capture, but not found. > Stacktrace > java.lang.AssertionError: '"eventType" : "GET"' expected in STDERR capture, > but not found. > at org.junit.Assert.fail(Assert.java:88) > at org.junit.Assert.assertTrue(Assert.java:41) > at > org.apache.hive.jdbc.AbstractJdbcTriggersTest.runQueryWithTrigger(AbstractJdbcTriggersTest.java:169) > at > org.apache.hive.jdbc.TestTriggersMoveWorkloadManager.testTriggerMoveConflictKill(TestTriggersMoveWorkloadManager.java:261) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) > at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > at > 
org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74) > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19137) orcfiledump doesn't print hive.acid.version value
[ https://issues.apache.org/jira/browse/HIVE-19137?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16445312#comment-16445312 ] Hive QA commented on HIVE-19137: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Findbugs executables are not available. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 37s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 14s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 43s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 6s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 40s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 13s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 13s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 46s{color} | {color:red} ql: The patch generated 1 new + 278 unchanged - 5 fixed = 279 total (was 283) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 2s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 16s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 15m 53s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-10360/dev-support/hive-personality.sh | | git revision | master / 6c4adc9 | | Default Java | 1.8.0_111 | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-10360/yetus/diff-checkstyle-ql.txt | | modules | C: ql U: ql | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-10360/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. > orcfiledump doesn't print hive.acid.version value > - > > Key: HIVE-19137 > URL: https://issues.apache.org/jira/browse/HIVE-19137 > Project: Hive > Issue Type: Bug > Components: Transactions >Affects Versions: 3.0.0 >Reporter: Eugene Koifman >Assignee: Igor Kryvenko >Priority: Major > Attachments: HIVE-19137.01.patch, HIVE-19137.02.patch, > HIVE-19137.03.patch, HIVE-19137.04.patch > > > HIVE-18659 added hive.acid.version in the file footer. 
> orcfiledump prints something like > {noformat} > User Metadata: > hive.acid.key.index=1,536870912,1; > hive.acid.stats=2,0,0 > hive.acid.version= > {noformat} > probably because > {noformat} > public static void setAcidVersionInDataFile(Writer writer) { > //so that we know which version wrote the file > ByteBuffer bf = ByteBuffer.allocate(4).putInt(ORC_ACID_VERSION); > bf.rewind(); //don't ask - some ByteBuffer weridness. w/o this, empty > buffer is written > writer.addUserMetadata(ACID_VERSION_KEY, bf); > } > {noformat} > use > {{UTF8.encode()}} instead -- This message was sent by Atlassian JIRA (v7.6.3#76005)
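The suggestion above can be seen in isolation: writing the version int via {{ByteBuffer.allocate(4).putInt(...)}} produces raw big-endian bytes (0x00 0x00 0x00 0x02) that render as non-printable characters — hence the blank-looking {{hive.acid.version=}} — while UTF-8-encoding the digit string yields readable text. A standalone sketch of the two encodings (not Hive code, just java.nio):

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

// Contrast the two ways of encoding the ACID version discussed in the issue:
// raw int bytes (non-printable) vs. UTF-8 text (readable in orcfiledump).
class AcidVersionEncoding {
    static final int ORC_ACID_VERSION = 2;

    static byte[] rawIntBytes() {
        ByteBuffer bf = ByteBuffer.allocate(4).putInt(ORC_ACID_VERSION);
        bf.rewind(); // reset position so the written bytes are readable
        byte[] out = new byte[bf.remaining()];
        bf.get(out);
        return out; // {0, 0, 0, 2} -- renders as control characters
    }

    static byte[] utf8Bytes() {
        ByteBuffer bf = StandardCharsets.UTF_8.encode(String.valueOf(ORC_ACID_VERSION));
        byte[] out = new byte[bf.remaining()];
        bf.get(out);
        return out; // the single byte '2' -- renders as "2"
    }
}
```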
[jira] [Commented] (HIVE-19232) results_cache_invalidation2 is failing
[ https://issues.apache.org/jira/browse/HIVE-19232?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16445306#comment-16445306 ] Vineet Garg commented on HIVE-19232: TestCliDriver version is failing as well: https://builds.apache.org/job/PreCommit-HIVE-Build/10359/testReport/org.apache.hadoop.hive.cli/TestCliDriver/testCliDriver_results_cache_invalidation2_/ Probably we should delete this one and keep it for TestMiniLlapLocal only. > results_cache_invalidation2 is failing > -- > > Key: HIVE-19232 > URL: https://issues.apache.org/jira/browse/HIVE-19232 > Project: Hive > Issue Type: Sub-task > Components: Test >Reporter: Ashutosh Chauhan >Assignee: Jason Dere >Priority: Major > > TestMiniLlapLocalCliDriver.testCliDriver[results_cache_invalidation2] > Fails with plan differences on both cli as well as minillaplocal. The plan diffs > look concerning since it is no longer using the cache. > Also, it should run only on minillaplocal -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19232) results_cache_invalidation2 is failing
[ https://issues.apache.org/jira/browse/HIVE-19232?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vineet Garg updated HIVE-19232: --- Description: TestMiniLlapLocalCliDriver.testCliDriver[results_cache_invalidation2] Fails with plan differences on both cli as well as minillaplocal. The plan diffs look concerning since it is no longer using the cache. Also, it should run only on minillaplocal was: Fails with plan differences on both cli as well as minillaplocal. The plan diffs look concerning since it is no longer using the cache. Also, it should run only on minillaplocal > results_cache_invalidation2 is failing > -- > > Key: HIVE-19232 > URL: https://issues.apache.org/jira/browse/HIVE-19232 > Project: Hive > Issue Type: Sub-task > Components: Test >Reporter: Ashutosh Chauhan >Assignee: Jason Dere >Priority: Major > > TestMiniLlapLocalCliDriver.testCliDriver[results_cache_invalidation2] > Fails with plan differences on both cli as well as minillaplocal. The plan diffs > look concerning since it is no longer using the cache. > Also, it should run only on minillaplocal -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19232) results_cache_invalidation2 is failing
[ https://issues.apache.org/jira/browse/HIVE-19232?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16445305#comment-16445305 ] Vineet Garg commented on HIVE-19232: Failing for branch-3 as well: https://builds.apache.org/job/PreCommit-HIVE-Build/10359/testReport/org.apache.hadoop.hive.cli/TestMiniLlapLocalCliDriver/testCliDriver_results_cache_invalidation2_/ [~jdere] Can you take a look please? > results_cache_invalidation2 is failing > -- > > Key: HIVE-19232 > URL: https://issues.apache.org/jira/browse/HIVE-19232 > Project: Hive > Issue Type: Sub-task > Components: Test >Reporter: Ashutosh Chauhan >Assignee: Jason Dere >Priority: Major > > TestMiniLlapLocalCliDriver.testCliDriver[results_cache_invalidation2] > Fails with plan differences on both cli as well as minillaplocal. The plan diffs > look concerning since it is no longer using the cache. > Also, it should run only on minillaplocal -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Resolved] (HIVE-19234) Fix flaky tests (see description)
[ https://issues.apache.org/jira/browse/HIVE-19234?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vineet Garg resolved HIVE-19234. Resolution: Duplicate > Fix flaky tests (see description) > - > > Key: HIVE-19234 > URL: https://issues.apache.org/jira/browse/HIVE-19234 > Project: Hive > Issue Type: Sub-task >Reporter: Deepak Jaiswal >Assignee: Deepak Jaiswal >Priority: Major > > org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[tez_smb_1] > is still failing with plan differences. Age : 161 > org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[llap_smb] fails with > exception. Going by the name it should not even run on TestCliDriver. > org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[tez_join_hash] : fails > with exception. Going by the name it should not even run on TestCliDriver. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Reopened] (HIVE-17055) Flaky test: TestMiniLlapCliDriver.testCliDriver[llap_smb]
[ https://issues.apache.org/jira/browse/HIVE-17055?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vineet Garg reopened HIVE-17055: Reopening. All three tests are still failing on branch-3: https://builds.apache.org/job/PreCommit-HIVE-Build/10359/testReport/ > Flaky test: TestMiniLlapCliDriver.testCliDriver[llap_smb] > - > > Key: HIVE-17055 > URL: https://issues.apache.org/jira/browse/HIVE-17055 > Project: Hive > Issue Type: Sub-task >Reporter: Janaki Lahorani >Assignee: Deepak Jaiswal >Priority: Major > > Following tests are also failing with same symptoms: > * TestMiniLlapLocalCliDriver.testCliDriver[tez_smb_1] > * TestMiniLlapLocalCliDriver.testCliDriver[tez_smb_main] > Client Execution succeeded but contained differences (error code = 1) after > executing llap_smb.q > 324,325c324,325 > < 2000 9 52 > < 2001 0 139630 > --- > > 2001 4 139630 > > 2001 6 52 -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19001) ALTER TABLE ADD CONSTRAINT support for CHECK constraint
[ https://issues.apache.org/jira/browse/HIVE-19001?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vineet Garg updated HIVE-19001: --- Attachment: HIVE-19001.6.patch > ALTER TABLE ADD CONSTRAINT support for CHECK constraint > --- > > Key: HIVE-19001 > URL: https://issues.apache.org/jira/browse/HIVE-19001 > Project: Hive > Issue Type: Improvement > Components: Hive >Affects Versions: 3.0.0 >Reporter: Aswathy Chellammal Sreekumar >Assignee: Vineet Garg >Priority: Major > Fix For: 3.0.0 > > Attachments: HIVE-19001.1.patch, HIVE-19001.2.patch, > HIVE-19001.3.patch, HIVE-19001.4.patch, HIVE-19001.5.patch, HIVE-19001.6.patch > > > ALTER TABLE ADD CONSTRAINT should be able to add CHECK constraint (table > level) -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19001) ALTER TABLE ADD CONSTRAINT support for CHECK constraint
[ https://issues.apache.org/jira/browse/HIVE-19001?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vineet Garg updated HIVE-19001: --- Status: Open (was: Patch Available) -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19001) ALTER TABLE ADD CONSTRAINT support for CHECK constraint
[ https://issues.apache.org/jira/browse/HIVE-19001?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vineet Garg updated HIVE-19001: --- Status: Patch Available (was: Open) -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19184) Hive 3.0.0 release branch preparation
[ https://issues.apache.org/jira/browse/HIVE-19184?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16445292#comment-16445292 ] Hive QA commented on HIVE-19184: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12919517/HIVE-19184.01-branch-3.patch {color:red}ERROR:{color} -1 due to no test(s) being added or modified. {color:red}ERROR:{color} -1 due to 96 failed/errored test(s), 13744 tests executed *Failed tests:* {noformat} TestBeeLineDriver - did not produce a TEST-*.xml file (likely timed out) (batchId=253) TestDummy - did not produce a TEST-*.xml file (likely timed out) (batchId=253) TestMiniDruidCliDriver - did not produce a TEST-*.xml file (likely timed out) (batchId=253) TestMiniDruidKafkaCliDriver - did not produce a TEST-*.xml file (likely timed out) (batchId=253) TestNegativeCliDriver - did not produce a TEST-*.xml file (likely timed out) (batchId=95) [nopart_insert.q,insert_into_with_schema.q,input41.q,having1.q,create_table_failure3.q,default_constraint_invalid_default_value.q,database_drop_not_empty_restrict.q,windowing_after_orderby.q,orderbysortby.q,subquery_select_distinct2.q,authorization_uri_alterpart_loc.q,udf_last_day_error_1.q,constraint_duplicate_name.q,create_table_failure4.q,alter_tableprops_external_with_notnull_constraint.q,semijoin5.q,udf_format_number_wrong4.q,deletejar.q,exim_11_nonpart_noncompat_sorting.q,show_tables_bad_db2.q,drop_func_nonexistent.q,nopart_load.q,alter_table_non_partitioned_table_cascade.q,check_constraint_subquery.q,load_wrong_fileformat.q,check_constraint_udtf.q,lockneg_try_db_lock_conflict.q,udf_field_wrong_args_len.q,create_table_failure2.q,create_with_fk_constraints_enforced.q,groupby2_map_skew_multi_distinct.q,authorization_update_noupdatepriv.q,show_columns2.q,authorization_insert_noselectpriv.q,orc_replace_columns3_acid.q,compare_double_bigint.q,authorization_set_nonexistent_conf.q,alter_rename_partition_failure3.q,spli
t_sample_wrong_format2.q,create_with_fk_pk_same_tab.q,compare_double_bigint_2.q,authorization_show_roles_no_admin.q,materialized_view_authorization_rebuild_no_grant.q,unionLimit.q,authorization_revoke_table_fail2.q,duplicate_insert3.q,authorization_desc_table_nosel.q,stats_noscan_non_native.q,orc_change_serde_acid.q,create_or_replace_view7.q,exim_07_nonpart_noncompat_ifof.q,create_with_unique_constraints_enforced.q,udf_concat_ws_wrong2.q,fileformat_bad_class.q,merge_negative_2.q,exim_15_part_nonpart.q,authorization_not_owner_drop_view.q,external1.q,authorization_uri_insert.q,create_with_fk_wrong_ref.q,columnstats_tbllvl_incorrect_column.q,authorization_show_parts_nosel.q,authorization_not_owner_drop_tab.q,external2.q,authorization_deletejar.q,temp_table_create_like_partitions.q,udf_greatest_error_1.q,ptf_negative_AggrFuncsWithNoGBYNoPartDef.q,alter_view_as_select_not_exist.q,touch1.q,groupby3_map_skew_multi_distinct.q,insert_into_notnull_constraint.q,exchange_partition_neg_partition_missing.q,groupby_cube_multi_gby.q,columnstats_tbllvl.q,drop_invalid_constraint2.q,alter_table_add_partition.q,update_not_acid.q,archive5.q,alter_table_constraint_invalid_pk_col.q,ivyDownload.q,udf_instr_wrong_type.q,bad_sample_clause.q,authorization_not_owner_drop_tab2.q,authorization_alter_db_owner.q,show_columns1.q,orc_type_promotion3.q,create_view_failure8.q,strict_join.q,udf_add_months_error_1.q,groupby_cube2.q,groupby_cube1.q,groupby_rollup1.q,genericFileFormat.q,invalid_cast_from_binary_4.q,drop_invalid_constraint1.q,serde_regex.q,show_partitions1.q,check_constraint_nonboolean_expr.q,invalid_cast_from_binary_6.q,create_with_multi_pk_constraint.q,udf_field_wrong_type.q,groupby_grouping_sets4.q,groupby_grouping_sets3.q,insertsel_fail.q,udf_locate_wrong_type.q,orc_type_promotion1_acid.q,set_table_property.q,create_or_replace_view2.q,groupby_grouping_sets2.q,alter_view_failure.q,distinct_windowing_failure1.q,invalid_t_alter2.q,alter_table_constraint_invalid_fk_col1.q,invalid_varchar_l
ength_2.q,authorization_show_grant_otheruser_alltabs.q,subquery_windowing_corr.q,compact_non_acid_table.q,authorization_view_4.q,authorization_disallow_transform.q,materialized_view_authorization_rebuild_other.q,authorization_fail_4.q,dbtxnmgr_nodblock.q,set_hiveconf_internal_variable1.q,input_part0_neg.q,udf_printf_wrong3.q,load_orc_negative2.q,druid_buckets.q,archive2.q,authorization_addjar.q,invalid_sum_syntax.q,insert_into_with_schema1.q,udf_add_months_error_2.q,dyn_part_max_per_node.q,authorization_revoke_table_fail1.q,udf_printf_wrong2.q,archive_multi3.q,udf_printf_wrong1.q,subquery_subquery_chain.q,authorization_view_disable_cbo_4.q,no_matching_udf.q,create_view_failure7.q,drop_native_udf.q,truncate_column_list_bucketing.q,authorization_uri_add_partition.q,authorization_view_disable_cbo_3.q,bad_exec_hooks.q,authorization_view_disable_cbo_2.q,fetchtask_ioexception.q,char_pad_convert_fail2.q,authorization_set_role_neg1.
[jira] [Commented] (HIVE-19239) Check for possible null timestamp fields during SerDe from Druid events
[ https://issues.apache.org/jira/browse/HIVE-19239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16445278#comment-16445278 ] Ashutosh Chauhan commented on HIVE-19239: - This is the timestamp value read from Druid, correct? Can that ever be null? In what cases? > Check for possible null timestamp fields during SerDe from Druid events > --- > > Key: HIVE-19239 > URL: https://issues.apache.org/jira/browse/HIVE-19239 > Project: Hive > Issue Type: Bug >Reporter: slim bouguerra >Assignee: slim bouguerra >Priority: Major > Attachments: HIVE-19239.patch > > > Currently we do not check for possibly null timestamps in events. > This might lead to an NPE. > This patch adds an additional check for such cases. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
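The fix described above amounts to a null guard at the point where the Druid event's timestamp is read. A minimal sketch of that idea (hypothetical class and method names, and the `__time` field name is assumed from Druid's convention; this is not the actual Hive SerDe code that the HIVE-19239 patch modifies):

```java
import java.util.Map;

// Sketch of the guard HIVE-19239 describes: when deserializing a Druid
// event, the timestamp field may be absent or null, so check before use
// instead of letting a NullPointerException propagate.
public class DruidEventTimestampGuard {
    // Returns the event timestamp in millis, or null when the field is
    // missing or null. The surrounding SerDe plumbing is omitted.
    static Long extractTimestamp(Map<String, Object> event) {
        Object value = event.get("__time");
        if (value == null) {
            return null; // caller can emit a NULL column instead of an NPE
        }
        return ((Number) value).longValue();
    }

    public static void main(String[] args) {
        Map<String, Object> withTs = Map.<String, Object>of("__time", 1524182400000L);
        Map<String, Object> withoutTs = Map.<String, Object>of("dim", "a");
        System.out.println(extractTimestamp(withTs));    // 1524182400000
        System.out.println(extractTimestamp(withoutTs)); // null
    }
}
```

A missing or null timestamp here yields a null result, letting the caller surface a NULL value rather than crash mid-query.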
[jira] [Commented] (HIVE-19184) Hive 3.0.0 release branch preparation
[ https://issues.apache.org/jira/browse/HIVE-19184?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16445212#comment-16445212 ] Hive QA commented on HIVE-19184: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 36s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 1m 6s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-10359/dev-support/hive-personality.sh | | git revision | master / 6c4adc9 | | modules | C: . U: . | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-10359/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. > Hive 3.0.0 release branch preparation > - > > Key: HIVE-19184 > URL: https://issues.apache.org/jira/browse/HIVE-19184 > Project: Hive > Issue Type: Task >Reporter: Vineet Garg >Assignee: Vineet Garg >Priority: Major > Fix For: 3.0.0 > > Attachments: HIVE-19184.01-branch-3.patch > > > Need to do bunch of things to prepare branch-3 for release e.g. 
> * Update pom.xml to delete SNAPSHOT > * Update .reviewboardrc > * Remove storage-api module from the build > * Change storage-api dependency etc. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19001) ALTER TABLE ADD CONSTRAINT support for CHECK constraint
[ https://issues.apache.org/jira/browse/HIVE-19001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16445209#comment-16445209 ] Hive QA commented on HIVE-19001: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12919725/HIVE-19001.5.patch {color:red}ERROR:{color} -1 due to build exiting with an error Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/10358/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/10358/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-10358/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Tests exited with: NonZeroExitCodeException Command 'bash /data/hiveptest/working/scratch/source-prep.sh' failed with exit status 1 and output '+ date '+%Y-%m-%d %T.%3N' 2018-04-20 03:04:14.032 + [[ -n /usr/lib/jvm/java-8-openjdk-amd64 ]] + export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64 + JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64 + export PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games + PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games + export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m ' + ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m ' + export 'MAVEN_OPTS=-Xmx1g ' + MAVEN_OPTS='-Xmx1g ' + cd /data/hiveptest/working/ + tee /data/hiveptest/logs/PreCommit-HIVE-Build-10358/source-prep.txt + [[ false == \t\r\u\e ]] + mkdir -p maven ivy + [[ git = \s\v\n ]] + [[ git = \g\i\t ]] + [[ -z master ]] + [[ -d apache-github-source-source ]] + [[ ! -d apache-github-source-source/.git ]] + [[ ! 
-d apache-github-source-source ]] + date '+%Y-%m-%d %T.%3N' 2018-04-20 03:04:14.042 + cd apache-github-source-source + git fetch origin >From https://github.com/apache/hive 92b9ba7..6c4adc9 master -> origin/master 8584947..a3e535f branch-3 -> origin/branch-3 + git reset --hard HEAD HEAD is now at 92b9ba7 HIVE-19242 : CliAdapter silently ignores excluded qfiles (Vihang Karajgaonkar, reviewed by Sahil Takiar) + git clean -f -d + git checkout master Already on 'master' Your branch is behind 'origin/master' by 1 commit, and can be fast-forwarded. (use "git pull" to update your local branch) + git reset --hard origin/master HEAD is now at 6c4adc9 HIVE-19219: Incremental REPL DUMP should throw error if requested events are cleaned-up (Sankar Hariappan, reviewed by Mahesh Kumar Behera, Thejas M Nair) + git merge --ff-only origin/master Already up-to-date. + date '+%Y-%m-%d %T.%3N' 2018-04-20 03:04:24.552 + rm -rf ../yetus_PreCommit-HIVE-Build-10358 + mkdir ../yetus_PreCommit-HIVE-Build-10358 + git gc + cp -R . 
../yetus_PreCommit-HIVE-Build-10358 + mkdir /data/hiveptest/logs/PreCommit-HIVE-Build-10358/yetus + patchCommandPath=/data/hiveptest/working/scratch/smart-apply-patch.sh + patchFilePath=/data/hiveptest/working/scratch/build.patch + [[ -f /data/hiveptest/working/scratch/build.patch ]] + chmod +x /data/hiveptest/working/scratch/smart-apply-patch.sh + /data/hiveptest/working/scratch/smart-apply-patch.sh /data/hiveptest/working/scratch/build.patch error: a/ql/src/java/org/apache/hadoop/hive/ql/parse/BaseSemanticAnalyzer.java: does not exist in index error: a/ql/src/java/org/apache/hadoop/hive/ql/parse/DDLSemanticAnalyzer.java: does not exist in index error: a/ql/src/java/org/apache/hadoop/hive/ql/parse/HiveParser.g: does not exist in index error: a/ql/src/test/queries/clientpositive/check_constraint.q: does not exist in index error: a/ql/src/test/results/clientpositive/llap/check_constraint.q.out: does not exist in index error: a/standalone-metastore/src/main/java/org/apache/hadoop/hive/metastore/ObjectStore.java: does not exist in index error: a/standalone-metastore/src/main/sql/derby/hive-schema-3.0.0.derby.sql: does not exist in index error: a/standalone-metastore/src/main/sql/derby/upgrade-2.3.0-to-3.0.0.derby.sql: does not exist in index error: a/standalone-metastore/src/main/sql/mssql/hive-schema-3.0.0.mssql.sql: does not exist in index error: a/standalone-metastore/src/main/sql/mssql/upgrade-2.3.0-to-3.0.0.mssql.sql: does not exist in index error: a/standalone-metastore/src/main/sql/mysql/hive-schema-3.0.0.mysql.sql: does not exist in index error: a/standalone-metastore/src/main/sql/mysql/upgrade-2.3.0-to-3.0.0.mysql.sql: does not exist in index error: a/standalone-metastore/src/main/sql/oracle/hive-schema-3.0.0.oracle.sql: does not exist in index error: a/standalone-metastore/src/main/sql/oracle/upgrade-2.3.0-to-3.0.0.oracle.sql: does not exist in index error: a/standalone-metastore/src/main/sql/postgres/hive-schema-3.0.0.postgres.sql: does not exist in index 
error: a/standalone-metastore/src/main/sql/postgres/upgrade-2.3.0-to-3.0.0.postgres.sql: does not exist in index Going to apply patch with:
[jira] [Commented] (HIVE-19226) Extend storage-api to print timestamp values in UTC
[ https://issues.apache.org/jira/browse/HIVE-19226?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16445206#comment-16445206 ] Hive QA commented on HIVE-19226: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12919847/HIVE-19226.01.patch {color:green}SUCCESS:{color} +1 due to 2 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 26 failed/errored test(s), 14279 tests executed *Failed tests:* {noformat} TestMinimrCliDriver - did not produce a TEST-*.xml file (likely timed out) (batchId=93) [infer_bucket_sort_num_buckets.q,infer_bucket_sort_reducers_power_two.q,parallel_orderby.q,bucket_num_reducers_acid.q,infer_bucket_sort_map_operators.q,infer_bucket_sort_merge.q,root_dir_external_table.q,infer_bucket_sort_dyn_part.q,udf_using.q,bucket_num_reducers_acid2.q] TestNonCatCallsWithCatalog - did not produce a TEST-*.xml file (likely timed out) (batchId=217) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[llap_smb] (batchId=92) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[parquet_vectorization_0] (batchId=17) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[results_cache_invalidation2] (batchId=39) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[tez_join_hash] (batchId=54) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[results_cache_invalidation2] (batchId=163) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] (batchId=163) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[tez_smb_1] (batchId=171) org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver[explainanalyze_5] (batchId=105) org.apache.hadoop.hive.cli.TestNegativeMinimrCliDriver.testCliDriver[cluster_tasklog_retrieval] (batchId=98) org.apache.hadoop.hive.cli.TestNegativeMinimrCliDriver.testCliDriver[mapreduce_stack_trace] (batchId=98) 
org.apache.hadoop.hive.cli.TestNegativeMinimrCliDriver.testCliDriver[mapreduce_stack_trace_turnoff] (batchId=98) org.apache.hadoop.hive.cli.TestNegativeMinimrCliDriver.testCliDriver[minimr_broken_pipe] (batchId=98) org.apache.hadoop.hive.cli.control.TestDanglingQOuts.checkDanglingQOut (batchId=225) org.apache.hadoop.hive.metastore.TestStats.partitionedTableDeprecatedCalls (batchId=211) org.apache.hadoop.hive.metastore.TestStats.partitionedTableInHiveCatalog (batchId=211) org.apache.hadoop.hive.ql.TestAcidOnTez.testAcidInsertWithRemoveUnion (batchId=228) org.apache.hadoop.hive.ql.TestAcidOnTez.testCtasTezUnion (batchId=228) org.apache.hadoop.hive.ql.TestAcidOnTez.testNonStandardConversion01 (batchId=228) org.apache.hadoop.hive.ql.TestMTQueries.testMTQueries1 (batchId=232) org.apache.hadoop.hive.ql.exec.vector.TestStructColumnVector.testStringify2 (batchId=192) org.apache.hive.beeline.TestBeeLineWithArgs.testQueryProgress (batchId=235) org.apache.hive.beeline.TestBeeLineWithArgs.testQueryProgressParallel (batchId=235) org.apache.hive.jdbc.TestTriggersMoveWorkloadManager.testTriggerMoveBackKill (batchId=242) org.apache.hive.minikdc.TestJdbcWithMiniKdcCookie.testCookieNegative (batchId=254) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/10357/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/10357/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-10357/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 26 tests failed {noformat} This message is automatically generated. 
ATTACHMENT ID: 12919847 - PreCommit-HIVE-Build > Extend storage-api to print timestamp values in UTC > --- > > Key: HIVE-19226 > URL: https://issues.apache.org/jira/browse/HIVE-19226 > Project: Hive > Issue Type: Bug > Components: storage-api >Affects Versions: 3.0.0 >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez >Priority: Major > Attachments: HIVE-19226.01.patch, HIVE-19226.patch > > > Related to HIVE-12192. Create new method that prints values in UTC. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19219) Incremental REPL DUMP should throw error if requested events are cleaned-up.
[ https://issues.apache.org/jira/browse/HIVE-19219?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16445204#comment-16445204 ] Sankar Hariappan commented on HIVE-19219: - Attached branch-3 version of the patch for ptest. > Incremental REPL DUMP should throw error if requested events are cleaned-up. > > > Key: HIVE-19219 > URL: https://issues.apache.org/jira/browse/HIVE-19219 > Project: Hive > Issue Type: Bug > Components: HiveServer2, repl >Affects Versions: 3.0.0 >Reporter: Sankar Hariappan >Assignee: Sankar Hariappan >Priority: Major > Labels: DR, pull-request-available, replication > Fix For: 3.1.0 > > Attachments: HIVE-19219.01-branch-3.patch, HIVE-19219.01.patch, > HIVE-19219.02.patch, HIVE-19219.03.patch, HIVE-19219.04.patch, > HIVE-19219.05.patch > > > This is the case where the events were deleted on the source because of old event > purging, and hence min(source event id) > target event id (last replicated > event id). > Repl dump should fail in this case so that the user can drop the database and > bootstrap again. > The cleaner thread is concurrently removing expired events from the > NOTIFICATION_LOG table, so it is necessary to check whether the current dump > missed any event while dumping. After fetching events in batches, we check > whether they were fetched in a contiguous sequence of event ids. If not, > some events were likely missed in the dump, and hence we throw an error. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
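The contiguity check the issue describes can be sketched as follows. This is a simplified illustration over a plain list of event ids, assuming ids increase by exactly one; it is not the actual Hive implementation, which works against NOTIFICATION_LOG batches:

```java
import java.util.List;

// Sketch of the check described in HIVE-19219: after fetching a batch of
// notification events, verify the event ids form a contiguous sequence.
// A gap means the cleaner thread purged events this incremental dump
// needed, so the dump should fail rather than silently skip events.
public class EventGapCheck {
    // eventIds: ids of the fetched batch, in the order returned.
    // Returns true when no event is missing between consecutive ids.
    static boolean isContiguous(List<Long> eventIds) {
        for (int i = 1; i < eventIds.size(); i++) {
            // RHS unboxes to long, so != is a numeric comparison here.
            if (eventIds.get(i) != eventIds.get(i - 1) + 1) {
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(isContiguous(List.of(10L, 11L, 12L))); // true
        // Event 12 was purged mid-dump: REPL DUMP should abort here.
        System.out.println(isContiguous(List.of(10L, 11L, 13L))); // false
    }
}
```

On a detected gap the real dump throws an error so the user can drop the target database and bootstrap again, as the description states.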
[jira] [Updated] (HIVE-19219) Incremental REPL DUMP should throw error if requested events are cleaned-up.
[ https://issues.apache.org/jira/browse/HIVE-19219?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sankar Hariappan updated HIVE-19219: Status: Open (was: Patch Available) -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19219) Incremental REPL DUMP should throw error if requested events are cleaned-up.
[ https://issues.apache.org/jira/browse/HIVE-19219?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sankar Hariappan updated HIVE-19219: Status: Patch Available (was: Open) -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19219) Incremental REPL DUMP should throw error if requested events are cleaned-up.
[ https://issues.apache.org/jira/browse/HIVE-19219?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sankar Hariappan updated HIVE-19219: Attachment: (was: HIVE-19219.01-branch-3.patch) -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19219) Incremental REPL DUMP should throw error if requested events are cleaned-up.
[ https://issues.apache.org/jira/browse/HIVE-19219?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sankar Hariappan updated HIVE-19219: Attachment: HIVE-19219.01-branch-3.patch -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19219) Incremental REPL DUMP should throw error if requested events are cleaned-up.
[ https://issues.apache.org/jira/browse/HIVE-19219?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sankar Hariappan updated HIVE-19219: Attachment: HIVE-19219.01-branch-3.patch -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19219) Incremental REPL DUMP should throw error if requested events are cleaned-up.
[ https://issues.apache.org/jira/browse/HIVE-19219?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16445190#comment-16445190 ] Sankar Hariappan commented on HIVE-19219: - The failing tests were run locally and all passed. 05.patch is committed to master. Thanks for the review [~thejas], [~maheshk114]! -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-18739) Add support for Import/Export from Acid table
[ https://issues.apache.org/jira/browse/HIVE-18739?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eugene Koifman updated HIVE-18739: -- Resolution: Fixed Fix Version/s: 3.0.0 Status: Resolved (was: Patch Available) > Add support for Import/Export from Acid table > - > > Key: HIVE-18739 > URL: https://issues.apache.org/jira/browse/HIVE-18739 > Project: Hive > Issue Type: New Feature > Components: Transactions >Reporter: Eugene Koifman >Assignee: Eugene Koifman >Priority: Major > Fix For: 3.0.0, 3.1.0 > > Attachments: HIVE-18739.01-branch-3.patch, HIVE-18739.01.patch, > HIVE-18739.02-branch-3.patch, HIVE-18739.04.patch, HIVE-18739.06.patch, > HIVE-18739.08.patch, HIVE-18739.09.patch, HIVE-18739.10.patch, > HIVE-18739.11.patch, HIVE-18739.12.patch, HIVE-18739.13.patch, > HIVE-18739.14.patch, HIVE-18739.15.patch, HIVE-18739.16.patch, > HIVE-18739.17.patch, HIVE-18739.19.patch, HIVE-18739.20.patch, > HIVE-18739.21.patch, HIVE-18739.23.patch, HIVE-18739.24.patch, > HIVE-18739.25.patch, HIVE-18739.26.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18739) Add support for Import/Export from Acid table
[ https://issues.apache.org/jira/browse/HIVE-18739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16445144#comment-16445144 ] Eugene Koifman commented on HIVE-18739: --- no related failures for branch-3 - committed > Add support for Import/Export from Acid table > - > > Key: HIVE-18739 > URL: https://issues.apache.org/jira/browse/HIVE-18739 > Project: Hive > Issue Type: New Feature > Components: Transactions >Reporter: Eugene Koifman >Assignee: Eugene Koifman >Priority: Major > Fix For: 3.1.0 > > Attachments: HIVE-18739.01-branch-3.patch, HIVE-18739.01.patch, > HIVE-18739.02-branch-3.patch, HIVE-18739.04.patch, HIVE-18739.06.patch, > HIVE-18739.08.patch, HIVE-18739.09.patch, HIVE-18739.10.patch, > HIVE-18739.11.patch, HIVE-18739.12.patch, HIVE-18739.13.patch, > HIVE-18739.14.patch, HIVE-18739.15.patch, HIVE-18739.16.patch, > HIVE-18739.17.patch, HIVE-18739.19.patch, HIVE-18739.20.patch, > HIVE-18739.21.patch, HIVE-18739.23.patch, HIVE-18739.24.patch, > HIVE-18739.25.patch, HIVE-18739.26.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19251) ObjectStore.getNextNotification with LIMIT should use less memory
[ https://issues.apache.org/jira/browse/HIVE-19251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16445143#comment-16445143 ] Alexander Kolbasov commented on HIVE-19251: --- [~spena] [~kkalyan] [~lina...@cloudera.com] FYI. I think this affects Sentry, which always passes -1. > ObjectStore.getNextNotification with LIMIT should use less memory > - > > Key: HIVE-19251 > URL: https://issues.apache.org/jira/browse/HIVE-19251 > Project: Hive > Issue Type: Bug > Components: repl, Standalone Metastore >Reporter: Daniel Dai >Assignee: Daniel Dai >Priority: Major > Attachments: HIVE-19251.1.patch > > > The Hive metastore can run out of memory when retrieving a huge number of > notification logs, even when there is a limit clause. Hive should retrieve only > the necessary rows. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19251) ObjectStore.getNextNotification with LIMIT should use less memory
[ https://issues.apache.org/jira/browse/HIVE-19251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16445142#comment-16445142 ] Alexander Kolbasov commented on HIVE-19251: --- I think this patch is fine, but in the long term we should always limit the number of returned events, even if -1 (or some big number) is specified. Consumers can always ask for more. > ObjectStore.getNextNotification with LIMIT should use less memory > - > > Key: HIVE-19251 > URL: https://issues.apache.org/jira/browse/HIVE-19251 > Project: Hive > Issue Type: Bug > Components: repl, Standalone Metastore >Reporter: Daniel Dai >Assignee: Daniel Dai >Priority: Major > Attachments: HIVE-19251.1.patch > > > The Hive metastore can run out of memory when retrieving a huge number of > notification logs, even when there is a limit clause. Hive should retrieve only > the necessary rows. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
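The long-term direction suggested in this comment (always bound how many events are materialized, and let consumers page for more) can be illustrated with a small sketch. The names below (`BoundedFetch`, `fetchUpTo`, `DEFAULT_MAX_EVENTS`) are invented for illustration and are not the actual ObjectStore API:

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

// Illustrative sketch: rather than materializing every NOTIFICATION_LOG row
// and trimming afterwards, pull rows from a lazy iterator (standing in for a
// database cursor) and stop once the cap is reached. A non-positive requested
// limit falls back to a server-side default instead of meaning "unbounded".
public class BoundedFetch {
    static final int DEFAULT_MAX_EVENTS = 1000; // assumed server-side cap

    static <T> List<T> fetchUpTo(Iterator<T> cursor, int requested) {
        int cap = requested > 0 ? requested : DEFAULT_MAX_EVENTS;
        List<T> out = new ArrayList<>();
        while (cursor.hasNext() && out.size() < cap) {
            out.add(cursor.next());
        }
        return out;
    }
}
```

The key design point is that the cap is applied while reading, so memory use is proportional to the cap rather than to the table size; a caller passing -1 gets the default page instead of the whole log.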
[jira] [Commented] (HIVE-19253) HMS ignores tableType property for external tables
[ https://issues.apache.org/jira/browse/HIVE-19253?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16445139#comment-16445139 ] Alexander Kolbasov commented on HIVE-19253: --- [~vihangk1] [~hahao] FYI > HMS ignores tableType property for external tables > -- > > Key: HIVE-19253 > URL: https://issues.apache.org/jira/browse/HIVE-19253 > Project: Hive > Issue Type: Bug > Components: Metastore >Affects Versions: 2.0.2, 3.0.0, 3.1.0 >Reporter: Alexander Kolbasov >Assignee: Alexander Kolbasov >Priority: Major > > When someone creates a table using the Thrift API, they may think that setting > tableType to {{EXTERNAL_TABLE}} creates an external table. Later, however, their > table is silently changed to a managed table by HMS. Here is the offending code: > {code:java} > private MTable convertToMTable(Table tbl) throws InvalidObjectException, > MetaException { > ... > // If the table has property EXTERNAL set, update table type > // accordingly > String tableType = tbl.getTableType(); > boolean isExternal = > Boolean.parseBoolean(tbl.getParameters().get("EXTERNAL")); > if (TableType.MANAGED_TABLE.toString().equals(tableType)) { > if (isExternal) { > tableType = TableType.EXTERNAL_TABLE.toString(); > } > } > if (TableType.EXTERNAL_TABLE.toString().equals(tableType)) { > if (!isExternal) { // Here! > tableType = TableType.MANAGED_TABLE.toString(); > } > } > {code} > So if the EXTERNAL parameter is not set, the table type is changed to managed > even if it was external in the first place, which is wrong. > Moreover, some places in the code look at the table property to decide the table > type, and other places look at the parameter. HMS should settle on which one to > use. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
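One way to resolve the inconsistency described in this issue is to reconcile the two signals in a single place: if either the tableType field or the EXTERNAL parameter says the table is external, treat it as external and update the other signal to match, instead of silently downgrading EXTERNAL_TABLE to MANAGED_TABLE when the parameter is absent. This is a hedged sketch under that assumption, not the actual HMS patch (`TableTypeReconciler` and `reconcile` are invented names):

```java
import java.util.Map;

// Illustrative sketch: reconcile the tableType field and the EXTERNAL table
// parameter in one place. If either signal says external, the table is
// external, and the parameter is rewritten so both signals agree afterwards.
public class TableTypeReconciler {
    static final String EXTERNAL_TABLE = "EXTERNAL_TABLE";
    static final String MANAGED_TABLE = "MANAGED_TABLE";

    static String reconcile(String tableType, Map<String, String> params) {
        boolean paramExternal = Boolean.parseBoolean(params.get("EXTERNAL"));
        boolean typeExternal = EXTERNAL_TABLE.equals(tableType);
        if (paramExternal || typeExternal) {
            params.put("EXTERNAL", "TRUE"); // keep both signals consistent
            return EXTERNAL_TABLE;
        }
        return MANAGED_TABLE;
    }
}
```

Under this rule, a Thrift client that sets only tableType=EXTERNAL_TABLE keeps an external table, and the rest of the code can rely on either signal because they never disagree after conversion.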
[jira] [Commented] (HIVE-19253) HMS ignores tableType property for external tables
[ https://issues.apache.org/jira/browse/HIVE-19253?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16445140#comment-16445140 ] Alexander Kolbasov commented on HIVE-19253: --- This goes back to HIVE-1329 change. > HMS ignores tableType property for external tables > -- > > Key: HIVE-19253 > URL: https://issues.apache.org/jira/browse/HIVE-19253 > Project: Hive > Issue Type: Bug > Components: Metastore >Affects Versions: 2.0.2, 3.0.0, 3.1.0 >Reporter: Alexander Kolbasov >Assignee: Alexander Kolbasov >Priority: Major > > When someone creates a table using Thrift API they may think that setting > tableType to {{EXTERNAL_TABLE}} creates an external table. And boom - their > table is gone later because HMS will silently change it to managed table. > here is the offending code: > {code:java} > private MTable convertToMTable(Table tbl) throws InvalidObjectException, > MetaException { > ... > // If the table has property EXTERNAL set, update table type > // accordingly > String tableType = tbl.getTableType(); > boolean isExternal = > Boolean.parseBoolean(tbl.getParameters().get("EXTERNAL")); > if (TableType.MANAGED_TABLE.toString().equals(tableType)) { > if (isExternal) { > tableType = TableType.EXTERNAL_TABLE.toString(); > } > } > if (TableType.EXTERNAL_TABLE.toString().equals(tableType)) { > if (!isExternal) { // Here! > tableType = TableType.MANAGED_TABLE.toString(); > } > } > {code} > So if the EXTERNAL parameter is not set, table type is changed to managed > even if it was external in the first place - which is wrong. > More over, in other places code looks at the table property to decide table > type and some places look at parameter. HMS should really make its mind which > one to use. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-16324) Truncate table should not work when EXTERNAL property of table is true
[ https://issues.apache.org/jira/browse/HIVE-16324?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alexander Kolbasov updated HIVE-16324: -- [~vihangk1] [~hahao] FYI. > Truncate table should not work when EXTERNAL property of table is true > -- > > Key: HIVE-16324 > URL: https://issues.apache.org/jira/browse/HIVE-16324 > Project: Hive > Issue Type: Bug >Reporter: Vihang Karajgaonkar >Assignee: Vihang Karajgaonkar >Priority: Major > Fix For: 3.0.0, 2.4.0 > > Attachments: HIVE-16324.01.patch > > > Currently, if you create an external table using the command {{CREATE EXTERNAL > TABLE table_name}}, the {{TRUNCATE TABLE table_name}} command fails as > expected, because only managed tables should be allowed to be truncated. > But if you set the external property of a previously managed table using > {{ALTER TABLE table_name SET TBLPROPERTIES('EXTERNAL'='true')}}, the truncate > table command does not object and deletes all the data from the external > table. > E.g., this works but it should not: > {noformat} > 0: jdbc:hive2://localhost:1/default> create table test_ext2 (col1 string); > No rows affected (0.424 seconds) > 0: jdbc:hive2://localhost:1/default> alter table test_ext2 set > tblproperties ('EXTERNAL'='true'); > No rows affected (0.149 seconds) > 0: jdbc:hive2://localhost:1/default> insert into table test_ext2 values > ("test"); > WARNING: Hive-on-MR is deprecated in Hive 2 and may not be available in the > future versions. Consider using a different execution engine (i.e. spark, > tez) or using Hive 1.X releases. 
> No rows affected (3.447 seconds) > 0: jdbc:hive2://localhost:1/default> > 0: jdbc:hive2://localhost:1/default> > 0: jdbc:hive2://localhost:1/default> select * from test_ext2; > +----------------+ > | test_ext2.col1 | > +----------------+ > | test           | > +----------------+ > 1 row selected (0.147 seconds) > 0: jdbc:hive2://localhost:1/default> truncate table test_ext2; > No rows affected (0.138 seconds) > 0: jdbc:hive2://localhost:1/default> select * from test_ext2; > +----------------+ > | test_ext2.col1 | > +----------------+ > +----------------+ > No rows selected (0.134 seconds) > 0: jdbc:hive2://localhost:1/default> > {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
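The fix for the scenario above amounts to checking the table's effective external-ness, both the tableType field and the EXTERNAL table property, before allowing TRUNCATE. An illustrative sketch of that guard, not the actual patch (`TruncateGuard` and its method names are invented):

```java
import java.util.Map;

// Illustrative sketch of the guard: TRUNCATE must be rejected when the table
// is external, whether that is recorded in the tableType field or only in the
// 'EXTERNAL' table property (as happens after
// ALTER TABLE ... SET TBLPROPERTIES('EXTERNAL'='true')).
public class TruncateGuard {
    static boolean isExternal(String tableType, Map<String, String> params) {
        return "EXTERNAL_TABLE".equals(tableType)
            || Boolean.parseBoolean(params.get("EXTERNAL"));
    }

    static void checkTruncateAllowed(String tableType, Map<String, String> params) {
        if (isExternal(tableType, params)) {
            throw new IllegalArgumentException("Cannot truncate non-managed table");
        }
    }
}
```

Checking both signals closes the loophole in the transcript, where a table altered to EXTERNAL still carried a managed tableType and so slipped past a type-only check.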
[jira] [Issue Comment Deleted] (HIVE-16324) Truncate table should not work when EXTERNAL property of table is true
[ https://issues.apache.org/jira/browse/HIVE-16324?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alexander Kolbasov updated HIVE-16324: -- Comment: was deleted (was: [~vihangk1] [~hahao] FYI.) > Truncate table should not work when EXTERNAL property of table is true > -- > > Key: HIVE-16324 > URL: https://issues.apache.org/jira/browse/HIVE-16324 > Project: Hive > Issue Type: Bug >Reporter: Vihang Karajgaonkar >Assignee: Vihang Karajgaonkar >Priority: Major > Fix For: 3.0.0, 2.4.0 > > Attachments: HIVE-16324.01.patch > > > Currently if you create an external table using the command {{CREATE EXTERNAL > TABLE table_name}} the {{TRUNCATE table table_name}} command fails as > expected because only managed tables should be allowed to be truncated. > But if you set the external property of a previously managed table using > {{ALTER TABLE table_name SET TBLPROPERTIES('EXTERNAL'='true')}}, truncate > table command does not object and deletes all the data from the external > table. > Eg: This works but it should not .. > {noformat} > 0: jdbc:hive2://localhost:1/default> create table test_ext2 (col1 string); > No rows affected (0.424 seconds) > 0: jdbc:hive2://localhost:1/default> alter table test_ext2 set > tblproperties ('EXTERNAL'='true'); > No rows affected (0.149 seconds) > 0: jdbc:hive2://localhost:1/default> insert into table test_ext2 values > ("test"); > WARNING: Hive-on-MR is deprecated in Hive 2 and may not be available in the > future versions. Consider using a different execution engine (i.e. spark, > tez) or using Hive 1.X releases. 
> No rows affected (3.447 seconds) > 0: jdbc:hive2://localhost:1/default> > 0: jdbc:hive2://localhost:1/default> > 0: jdbc:hive2://localhost:1/default> select * from test_ext2; > +-+ > | test_ext2.col1 | > +-+ > | test| > +-+ > 1 row selected (0.147 seconds) > 0: jdbc:hive2://localhost:1/default> truncate table test_ext2; > No rows affected (0.138 seconds) > 0: jdbc:hive2://localhost:1/default> select * from test_ext2; > +-+ > | test_ext2.col1 | > +-+ > +-+ > No rows selected (0.134 seconds) > 0: jdbc:hive2://localhost:1/default> > {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (HIVE-19253) HMS ignores tableType property for external tables
[ https://issues.apache.org/jira/browse/HIVE-19253?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alexander Kolbasov reassigned HIVE-19253: - > HMS ignores tableType property for external tables > -- > > Key: HIVE-19253 > URL: https://issues.apache.org/jira/browse/HIVE-19253 > Project: Hive > Issue Type: Bug > Components: Metastore >Affects Versions: 2.0.2, 3.0.0, 3.1.0 >Reporter: Alexander Kolbasov >Assignee: Alexander Kolbasov >Priority: Major > > When someone creates a table using Thrift API they may think that setting > tableType to {{EXTERNAL_TABLE}} creates an external table. And boom - their > table is gone later because HMS will silently change it to managed table. > here is the offending code: > {code:java} > private MTable convertToMTable(Table tbl) throws InvalidObjectException, > MetaException { > ... > // If the table has property EXTERNAL set, update table type > // accordingly > String tableType = tbl.getTableType(); > boolean isExternal = > Boolean.parseBoolean(tbl.getParameters().get("EXTERNAL")); > if (TableType.MANAGED_TABLE.toString().equals(tableType)) { > if (isExternal) { > tableType = TableType.EXTERNAL_TABLE.toString(); > } > } > if (TableType.EXTERNAL_TABLE.toString().equals(tableType)) { > if (!isExternal) { // Here! > tableType = TableType.MANAGED_TABLE.toString(); > } > } > {code} > So if the EXTERNAL parameter is not set, table type is changed to managed > even if it was external in the first place - which is wrong. > More over, in other places code looks at the table property to decide table > type and some places look at parameter. HMS should really make its mind which > one to use. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19226) Extend storage-api to print timestamp values in UTC
[ https://issues.apache.org/jira/browse/HIVE-19226?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16445128#comment-16445128 ] Hive QA commented on HIVE-19226: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Findbugs executables are not available. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 56s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 12s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 13s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 11s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 12s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 12s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 12s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 12s{color} | {color:red} storage-api: The patch generated 2 new + 35 unchanged - 2 fixed = 37 total (was 37) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. 
{color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 2s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 11s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 15s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 9m 49s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc xml compile findbugs checkstyle | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-10357/dev-support/hive-personality.sh | | git revision | master / 92b9ba7 | | Default Java | 1.8.0_111 | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-10357/yetus/diff-checkstyle-storage-api.txt | | modules | C: storage-api U: storage-api | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-10357/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. > Extend storage-api to print timestamp values in UTC > --- > > Key: HIVE-19226 > URL: https://issues.apache.org/jira/browse/HIVE-19226 > Project: Hive > Issue Type: Bug > Components: storage-api >Affects Versions: 3.0.0 >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez >Priority: Major > Attachments: HIVE-19226.01.patch, HIVE-19226.patch > > > Related to HIVE-12192. Create new method that prints values in UTC. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19214) High throughput ingest ORC format
[ https://issues.apache.org/jira/browse/HIVE-19214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16445108#comment-16445108 ] Prasanth Jayachandran commented on HIVE-19214: -- Makes sense. Will put this under a config: disabled by default for non-streaming and enabled by default for streaming cases. > High throughput ingest ORC format > - > > Key: HIVE-19214 > URL: https://issues.apache.org/jira/browse/HIVE-19214 > Project: Hive > Issue Type: Sub-task > Components: Streaming >Affects Versions: 3.0.0, 3.1.0 >Reporter: Gopal V >Assignee: Prasanth Jayachandran >Priority: Major > Attachments: HIVE-19214.1.patch, HIVE-19214.2.patch > > > Create delta files with all ORC overhead disabled (no index, no compression, > no dictionary). Compactor will recreate the orc files with index, compression > and dictionary encoding. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19214) High throughput ingest ORC format
[ https://issues.apache.org/jira/browse/HIVE-19214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16445105#comment-16445105 ] Eugene Koifman commented on HIVE-19214: --- SDPO should work, though that is orthogonal. If you have a simple insert into an ACID table, this would likely make reads suboptimal and so require much more frequent compactions. This may be justified in streaming (esp. w/ dynamic partitions), but it's not in all cases. For example, if you have a table that is infrequently written to by large insert statements, it may not need any compactions at all (unless you make this change as is). There should be some config that can be set to control this, at least at the table level, though per statement/connection would be better. > High throughput ingest ORC format > - > > Key: HIVE-19214 > URL: https://issues.apache.org/jira/browse/HIVE-19214 > Project: Hive > Issue Type: Sub-task > Components: Streaming >Affects Versions: 3.0.0, 3.1.0 >Reporter: Gopal V >Assignee: Prasanth Jayachandran >Priority: Major > Attachments: HIVE-19214.1.patch, HIVE-19214.2.patch > > > Create delta files with all ORC overhead disabled (no index, no compression, > no dictionary). Compactor will recreate the orc files with index, compression > and dictionary encoding. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
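The config gating discussed in this thread boils down to one decision: only when a high-throughput-ingest flag is enabled does the delta writer drop the ORC index, compression, and dictionary, and the compactor restores them on rewrite either way. The flag semantics and the option keys below (`compress`, `rowIndexStride`, `dictionaryEnabled`) are assumptions for illustration, not actual Hive or ORC configuration names:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch: choose delta-file writer settings based on a config
// flag, defaulting to the normal read-optimized layout. The values for the
// non-high-throughput branch mirror common ORC defaults but are assumptions.
public class DeltaWriterSettings {
    static Map<String, String> forIngest(boolean highThroughput) {
        Map<String, String> opts = new HashMap<>();
        if (highThroughput) {
            opts.put("compress", "NONE");           // no compression on deltas
            opts.put("rowIndexStride", "0");        // disable the row index
            opts.put("dictionaryEnabled", "false"); // no dictionary encoding
        } else {
            opts.put("compress", "ZLIB");
            opts.put("rowIndexStride", "10000");
            opts.put("dictionaryEnabled", "true");
        }
        return opts;
    }
}
```

This mirrors the agreement in the comments: enabled by default for streaming ingest (where write throughput dominates and compaction is routine), disabled for ordinary inserts (where the extra compactions would not pay off).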
[jira] [Updated] (HIVE-19124) implement a basic major compactor for MM tables
[ https://issues.apache.org/jira/browse/HIVE-19124?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-19124: Attachment: HIVE-19124.04.patch > implement a basic major compactor for MM tables > --- > > Key: HIVE-19124 > URL: https://issues.apache.org/jira/browse/HIVE-19124 > Project: Hive > Issue Type: Bug > Components: Transactions >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin >Priority: Major > Labels: mm-gap-2 > Attachments: HIVE-19124.01.patch, HIVE-19124.02.patch, > HIVE-19124.03.patch, HIVE-19124.03.patch, HIVE-19124.04.patch, > HIVE-19124.patch > > > For now, it will run a query directly and only major compactions will be > supported. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18739) Add support for Import/Export from Acid table
[ https://issues.apache.org/jira/browse/HIVE-18739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16445099#comment-16445099 ] Hive QA commented on HIVE-18739: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12919840/HIVE-18739.02-branch-3.patch {color:green}SUCCESS:{color} +1 due to 4 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 25 failed/errored test(s), 13347 tests executed *Failed tests:* {noformat} TestBeeLineDriver - did not produce a TEST-*.xml file (likely timed out) (batchId=253) TestDummy - did not produce a TEST-*.xml file (likely timed out) (batchId=253) TestMiniDruidCliDriver - did not produce a TEST-*.xml file (likely timed out) (batchId=253) TestMiniDruidKafkaCliDriver - did not produce a TEST-*.xml file (likely timed out) (batchId=253) TestNegativeCliDriver - did not produce a TEST-*.xml file (likely timed out) (batchId=95) [nopart_insert.q,insert_into_with_schema.q,input41.q,having1.q,create_table_failure3.q,default_constraint_invalid_default_value.q,database_drop_not_empty_restrict.q,windowing_after_orderby.q,orderbysortby.q,subquery_select_distinct2.q,authorization_uri_alterpart_loc.q,udf_last_day_error_1.q,constraint_duplicate_name.q,create_table_failure4.q,alter_tableprops_external_with_notnull_constraint.q,semijoin5.q,udf_format_number_wrong4.q,deletejar.q,exim_11_nonpart_noncompat_sorting.q,show_tables_bad_db2.q,drop_func_nonexistent.q,nopart_load.q,alter_table_non_partitioned_table_cascade.q,check_constraint_subquery.q,load_wrong_fileformat.q,check_constraint_udtf.q,lockneg_try_db_lock_conflict.q,udf_field_wrong_args_len.q,create_table_failure2.q,create_with_fk_constraints_enforced.q,groupby2_map_skew_multi_distinct.q,authorization_update_noupdatepriv.q,show_columns2.q,authorization_insert_noselectpriv.q,orc_replace_columns3_acid.q,compare_double_bigint.q,authorization_set_nonexistent_conf.q,alter_rename_partition_failure3.q,s
plit_sample_wrong_format2.q,create_with_fk_pk_same_tab.q,compare_double_bigint_2.q,authorization_show_roles_no_admin.q,materialized_view_authorization_rebuild_no_grant.q,unionLimit.q,authorization_revoke_table_fail2.q,duplicate_insert3.q,authorization_desc_table_nosel.q,stats_noscan_non_native.q,orc_change_serde_acid.q,create_or_replace_view7.q,exim_07_nonpart_noncompat_ifof.q,create_with_unique_constraints_enforced.q,udf_concat_ws_wrong2.q,fileformat_bad_class.q,merge_negative_2.q,exim_15_part_nonpart.q,authorization_not_owner_drop_view.q,external1.q,authorization_uri_insert.q,create_with_fk_wrong_ref.q,columnstats_tbllvl_incorrect_column.q,authorization_show_parts_nosel.q,authorization_not_owner_drop_tab.q,external2.q,authorization_deletejar.q,temp_table_create_like_partitions.q,udf_greatest_error_1.q,ptf_negative_AggrFuncsWithNoGBYNoPartDef.q,alter_view_as_select_not_exist.q,touch1.q,groupby3_map_skew_multi_distinct.q,insert_into_notnull_constraint.q,exchange_partition_neg_partition_missing.q,groupby_cube_multi_gby.q,columnstats_tbllvl.q,drop_invalid_constraint2.q,alter_table_add_partition.q,update_not_acid.q,archive5.q,alter_table_constraint_invalid_pk_col.q,ivyDownload.q,udf_instr_wrong_type.q,bad_sample_clause.q,authorization_not_owner_drop_tab2.q,authorization_alter_db_owner.q,show_columns1.q,orc_type_promotion3.q,create_view_failure8.q,strict_join.q,udf_add_months_error_1.q,groupby_cube2.q,groupby_cube1.q,groupby_rollup1.q,genericFileFormat.q,invalid_cast_from_binary_4.q,drop_invalid_constraint1.q,serde_regex.q,show_partitions1.q,check_constraint_nonboolean_expr.q,invalid_cast_from_binary_6.q,create_with_multi_pk_constraint.q,udf_field_wrong_type.q,groupby_grouping_sets4.q,groupby_grouping_sets3.q,insertsel_fail.q,udf_locate_wrong_type.q,orc_type_promotion1_acid.q,set_table_property.q,create_or_replace_view2.q,groupby_grouping_sets2.q,alter_view_failure.q,distinct_windowing_failure1.q,invalid_t_alter2.q,alter_table_constraint_invalid_fk_col1.q,invalid_varcha
r_length_2.q,authorization_show_grant_otheruser_alltabs.q,subquery_windowing_corr.q,compact_non_acid_table.q,authorization_view_4.q,authorization_disallow_transform.q,materialized_view_authorization_rebuild_other.q,authorization_fail_4.q,dbtxnmgr_nodblock.q,set_hiveconf_internal_variable1.q,input_part0_neg.q,udf_printf_wrong3.q,load_orc_negative2.q,druid_buckets.q,archive2.q,authorization_addjar.q,invalid_sum_syntax.q,insert_into_with_schema1.q,udf_add_months_error_2.q,dyn_part_max_per_node.q,authorization_revoke_table_fail1.q,udf_printf_wrong2.q,archive_multi3.q,udf_printf_wrong1.q,subquery_subquery_chain.q,authorization_view_disable_cbo_4.q,no_matching_udf.q,create_view_failure7.q,drop_native_udf.q,truncate_column_list_bucketing.q,authorization_uri_add_partition.q,authorization_view_disable_cbo_3.q,bad_exec_hooks.q,authorization_view_disable_cbo_2.q,fetchtask_ioexception.q,char_pad_convert_fail2.q,authorization_set_role_ne
[jira] [Commented] (HIVE-19251) ObjectStore.getNextNotification with LIMIT should use less memory
[ https://issues.apache.org/jira/browse/HIVE-19251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16445075#comment-16445075 ] Vihang Karajgaonkar commented on HIVE-19251: That sounds good. This patch is useful in the case where clients give a maxEvents in the request. If they give a negative value, there is still a likelihood of OOM in my opinion. Thanks for the patch. +1 > ObjectStore.getNextNotification with LIMIT should use less memory > - > > Key: HIVE-19251 > URL: https://issues.apache.org/jira/browse/HIVE-19251 > Project: Hive > Issue Type: Bug > Components: repl, Standalone Metastore >Reporter: Daniel Dai >Assignee: Daniel Dai >Priority: Major > Attachments: HIVE-19251.1.patch > > > The Hive metastore can run out of memory when retrieving a huge number of > notification logs, even when there is a limit clause. Hive should retrieve only > the necessary rows. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19214) High throughput ingest ORC format
[ https://issues.apache.org/jira/browse/HIVE-19214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16445072#comment-16445072 ] Prasanth Jayachandran commented on HIVE-19214: -- I am not sure if sorted dynamic partition optimization works with ACID or not. This will help non-streaming cases as well; it helps reduce memory pressure when a single task writes/reads many files. > High throughput ingest ORC format > - > > Key: HIVE-19214 > URL: https://issues.apache.org/jira/browse/HIVE-19214 > Project: Hive > Issue Type: Sub-task > Components: Streaming >Affects Versions: 3.0.0, 3.1.0 >Reporter: Gopal V >Assignee: Prasanth Jayachandran >Priority: Major > Attachments: HIVE-19214.1.patch, HIVE-19214.2.patch > > > Create delta files with all ORC overhead disabled (no index, no compression, > no dictionary). Compactor will recreate the orc files with index, compression > and dictionary encoding. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19226) Extend storage-api to print timestamp values in UTC
[ https://issues.apache.org/jira/browse/HIVE-19226?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16445047#comment-16445047 ] Ashutosh Chauhan commented on HIVE-19226: - +1 > Extend storage-api to print timestamp values in UTC > --- > > Key: HIVE-19226 > URL: https://issues.apache.org/jira/browse/HIVE-19226 > Project: Hive > Issue Type: Bug > Components: storage-api >Affects Versions: 3.0.0 >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez >Priority: Major > Attachments: HIVE-19226.01.patch, HIVE-19226.patch > > > Related to HIVE-12192. Create new method that prints values in UTC. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-18739) Add support for Import/Export from Acid table
[ https://issues.apache.org/jira/browse/HIVE-18739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16445031#comment-16445031 ] Hive QA commented on HIVE-18739: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 9s{color} | {color:red} /data/hiveptest/logs/PreCommit-HIVE-Build-10355/patches/PreCommit-HIVE-Build-10355.patch does not apply to master. Rebase required? Wrong Branch? See http://cwiki.apache.org/confluence/display/Hive/HowToContribute for help. {color} | \\ \\ || Subsystem || Report/Notes || | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-10355/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. > Add support for Import/Export from Acid table > - > > Key: HIVE-18739 > URL: https://issues.apache.org/jira/browse/HIVE-18739 > Project: Hive > Issue Type: New Feature > Components: Transactions >Reporter: Eugene Koifman >Assignee: Eugene Koifman >Priority: Major > Fix For: 3.1.0 > > Attachments: HIVE-18739.01-branch-3.patch, HIVE-18739.01.patch, > HIVE-18739.02-branch-3.patch, HIVE-18739.04.patch, HIVE-18739.06.patch, > HIVE-18739.08.patch, HIVE-18739.09.patch, HIVE-18739.10.patch, > HIVE-18739.11.patch, HIVE-18739.12.patch, HIVE-18739.13.patch, > HIVE-18739.14.patch, HIVE-18739.15.patch, HIVE-18739.16.patch, > HIVE-18739.17.patch, HIVE-18739.19.patch, HIVE-18739.20.patch, > HIVE-18739.21.patch, HIVE-18739.23.patch, HIVE-18739.24.patch, > HIVE-18739.25.patch, HIVE-18739.26.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19252) TestJdbcWithMiniKdcCookie.testCookieNegative is failing consistently
[ https://issues.apache.org/jira/browse/HIVE-19252?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16445030#comment-16445030 ] Ashutosh Chauhan commented on HIVE-19252: - Stacktrace java.lang.AssertionError at org.junit.Assert.fail(Assert.java:86) at org.junit.Assert.assertTrue(Assert.java:41) at org.junit.Assert.assertTrue(Assert.java:52) at org.apache.hive.minikdc.TestJdbcWithMiniKdcCookie.testCookieNegative(TestJdbcWithMiniKdcCookie.java:112) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > TestJdbcWithMiniKdcCookie.testCookieNegative is failing consistently > > > Key: HIVE-19252 > URL: https://issues.apache.org/jira/browse/HIVE-19252 > Project: Hive > Issue Type: Sub-task > Components: Test >Reporter: Ashutosh Chauhan >Assignee: Thejas M Nair >Priority: Major > > For last 8 builds. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Assigned] (HIVE-19252) TestJdbcWithMiniKdcCookie.testCookieNegative is failing consistently
[ https://issues.apache.org/jira/browse/HIVE-19252?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashutosh Chauhan reassigned HIVE-19252: --- > TestJdbcWithMiniKdcCookie.testCookieNegative is failing consistently > > > Key: HIVE-19252 > URL: https://issues.apache.org/jira/browse/HIVE-19252 > Project: Hive > Issue Type: Sub-task > Components: Test >Reporter: Ashutosh Chauhan >Assignee: Thejas M Nair >Priority: Major > > For last 8 builds. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-16861) MapredParquetOutputFormat - Save Some Array Allocations
[ https://issues.apache.org/jira/browse/HIVE-16861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16445025#comment-16445025 ] Hive QA commented on HIVE-16861: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12919838/HIVE-16861.3.patch {color:red}ERROR:{color} -1 due to no test(s) being added or modified. {color:red}ERROR:{color} -1 due to 22 failed/errored test(s), 14278 tests executed *Failed tests:* {noformat} TestMinimrCliDriver - did not produce a TEST-*.xml file (likely timed out) (batchId=93) [infer_bucket_sort_num_buckets.q,infer_bucket_sort_reducers_power_two.q,parallel_orderby.q,bucket_num_reducers_acid.q,infer_bucket_sort_map_operators.q,infer_bucket_sort_merge.q,root_dir_external_table.q,infer_bucket_sort_dyn_part.q,udf_using.q,bucket_num_reducers_acid2.q] TestNonCatCallsWithCatalog - did not produce a TEST-*.xml file (likely timed out) (batchId=217) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[llap_smb] (batchId=92) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[parquet_vectorization_0] (batchId=17) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[results_cache_invalidation2] (batchId=39) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[tez_join_hash] (batchId=54) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[default_constraint] (batchId=163) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[results_cache_invalidation2] (batchId=163) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] (batchId=163) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[tez_smb_1] (batchId=171) org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver[explainanalyze_5] (batchId=105) org.apache.hadoop.hive.cli.TestNegativeMinimrCliDriver.testCliDriver[cluster_tasklog_retrieval] (batchId=98) 
org.apache.hadoop.hive.cli.TestNegativeMinimrCliDriver.testCliDriver[mapreduce_stack_trace] (batchId=98) org.apache.hadoop.hive.cli.TestNegativeMinimrCliDriver.testCliDriver[mapreduce_stack_trace_turnoff] (batchId=98) org.apache.hadoop.hive.cli.TestNegativeMinimrCliDriver.testCliDriver[minimr_broken_pipe] (batchId=98) org.apache.hadoop.hive.cli.control.TestDanglingQOuts.checkDanglingQOut (batchId=225) org.apache.hadoop.hive.ql.TestAcidOnTez.testAcidInsertWithRemoveUnion (batchId=228) org.apache.hadoop.hive.ql.TestAcidOnTez.testCtasTezUnion (batchId=228) org.apache.hadoop.hive.ql.TestAcidOnTez.testNonStandardConversion01 (batchId=228) org.apache.hadoop.hive.ql.TestMTQueries.testMTQueries1 (batchId=232) org.apache.hadoop.hive.ql.parse.TestReplicationScenarios.testRenamePartitionWithCM (batchId=231) org.apache.hive.minikdc.TestJdbcWithMiniKdcCookie.testCookieNegative (batchId=254) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/10353/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/10353/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-10353/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 22 tests failed {noformat} This message is automatically generated. 
ATTACHMENT ID: 12919838 - PreCommit-HIVE-Build > MapredParquetOutputFormat - Save Some Array Allocations > --- > > Key: HIVE-16861 > URL: https://issues.apache.org/jira/browse/HIVE-16861 > Project: Hive > Issue Type: Improvement >Affects Versions: 3.0.0 >Reporter: BELUGA BEHR >Assignee: BELUGA BEHR >Priority: Trivial > Attachments: HIVE-16861.1.patch, HIVE-16861.2.patch, > HIVE-16861.3.patch > > > Remove superfluous array allocations from {{MapredParquetOutputFormat}} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
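For context, the kind of change this ticket describes — removing superfluous array allocations — usually boils down to reusing a shared array instead of allocating a fresh one per call. A minimal, hypothetical sketch of the idiom (not the actual MapredParquetOutputFormat code):

```java
import java.util.Collections;
import java.util.List;

// Hypothetical illustration of the "save an array allocation" pattern:
// reuse one shared zero-length array instead of allocating a new array on
// every call. This is the general idiom, not Hive's actual code.
public class ArrayReuse {
    // A zero-length array is effectively immutable, so one shared
    // instance can safely be handed to every caller.
    private static final String[] EMPTY = new String[0];

    // Before: allocates a sized array on each call, even for empty input.
    static String[] toArrayNaive(List<String> cols) {
        return cols.toArray(new String[cols.size()]);
    }

    // After: the empty case reuses EMPTY; the non-empty case also passes
    // EMPTY, letting toArray allocate exactly one correctly sized array.
    static String[] toArrayReuse(List<String> cols) {
        return cols.isEmpty() ? EMPTY : cols.toArray(EMPTY);
    }

    public static void main(String[] args) {
        List<String> none = Collections.emptyList();
        // The reusing variant returns the same instance for empty input.
        System.out.println(toArrayReuse(none) == toArrayReuse(none)); // true
        System.out.println(toArrayNaive(none) == toArrayNaive(none)); // false
    }
}
```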
[jira] [Commented] (HIVE-19251) ObjectStore.getNextNotification with LIMIT should use less memory
[ https://issues.apache.org/jira/browse/HIVE-19251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16445007#comment-16445007 ] Daniel Dai commented on HIVE-19251: --- That's true, we'd better provide a batch API for potentially large results. That involves metastore API and RawStore API changes across multiple calls. I'd like to limit the scope of this ticket, as it is mainly a specific bug fix; we can open a new ticket for the batch improvement if needed. > ObjectStore.getNextNotification with LIMIT should use less memory > - > > Key: HIVE-19251 > URL: https://issues.apache.org/jira/browse/HIVE-19251 > Project: Hive > Issue Type: Bug > Components: repl, Standalone Metastore >Reporter: Daniel Dai >Assignee: Daniel Dai >Priority: Major > Attachments: HIVE-19251.1.patch > > > Experienced OOM when the Hive metastore tries to retrieve a huge number of > notification logs even when there is a LIMIT clause. Hive should only retrieve the > necessary rows. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
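The fix this ticket describes — making LIMIT bound what the store actually materializes rather than trimming after the fact — can be pictured with a small, hypothetical sketch. None of these names are Hive's; the real change lives in ObjectStore's query against the notification log, and the counter below merely stands in for memory pressure:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: applying LIMIT after materializing every row keeps
// the whole result set in memory, while a pushed-down limit reads only the
// rows the caller asked for. Invented names; not Hive's actual code.
public class LimitPushdown {
    static int rowsMaterialized = 0; // instrumentation for the sketch

    // Before: fetch the whole "table", then apply the limit in memory.
    static List<Integer> trimAfterFetch(int tableSize, int limit) {
        List<Integer> all = new ArrayList<>();
        for (int i = 0; i < tableSize; i++) {
            rowsMaterialized++; // every row is held before trimming
            all.add(i);
        }
        return all.subList(0, Math.min(limit, all.size()));
    }

    // After: the limit bounds how many rows are ever materialized,
    // analogous to constraining the result range in the store's query.
    static List<Integer> limitPushedDown(int tableSize, int limit) {
        List<Integer> out = new ArrayList<>();
        for (int i = 0; i < Math.min(tableSize, limit); i++) {
            rowsMaterialized++; // only the requested rows are held
            out.add(i);
        }
        return out;
    }

    public static void main(String[] args) {
        rowsMaterialized = 0;
        trimAfterFetch(1_000_000, 10);
        System.out.println(rowsMaterialized); // 1000000
        rowsMaterialized = 0;
        limitPushedDown(1_000_000, 10);
        System.out.println(rowsMaterialized); // 10
    }
}
```

Both variants hand back the same ten events; only the pushed-down version keeps the metastore's footprint proportional to the LIMIT instead of the table size.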
[jira] [Updated] (HIVE-19242) CliAdapter silently ignores excluded qfiles
[ https://issues.apache.org/jira/browse/HIVE-19242?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vihang Karajgaonkar updated HIVE-19242: --- Fix Version/s: 3.1.0 > CliAdapter silently ignores excluded qfiles > --- > > Key: HIVE-19242 > URL: https://issues.apache.org/jira/browse/HIVE-19242 > Project: Hive > Issue Type: Improvement >Reporter: Vihang Karajgaonkar >Assignee: Vihang Karajgaonkar >Priority: Trivial > Fix For: 3.1.0 > > Attachments: HIVE-19242.01.patch > > > If a user is trying to run a qfile using {{-Dqfile}} and if it is excluded > according to the {{CliConfig}} AbstractCliConfig silently ignores the qtest > run and its very hard for the user to find out why the test did not run. We > should log a helpful warning so that it is easier to find this out when it > happens. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19242) CliAdapter silently ignores excluded qfiles
[ https://issues.apache.org/jira/browse/HIVE-19242?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vihang Karajgaonkar updated HIVE-19242: --- Resolution: Fixed Status: Resolved (was: Patch Available) merged to master. Thanks for the review [~stakiar] > CliAdapter silently ignores excluded qfiles > --- > > Key: HIVE-19242 > URL: https://issues.apache.org/jira/browse/HIVE-19242 > Project: Hive > Issue Type: Improvement >Reporter: Vihang Karajgaonkar >Assignee: Vihang Karajgaonkar >Priority: Trivial > Attachments: HIVE-19242.01.patch > > > If a user is trying to run a qfile using {{-Dqfile}} and if it is excluded > according to the {{CliConfig}} AbstractCliConfig silently ignores the qtest > run and its very hard for the user to find out why the test did not run. We > should log a helpful warning so that it is easier to find this out when it > happens. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Comment Edited] (HIVE-19251) ObjectStore.getNextNotification with LIMIT should use less memory
[ https://issues.apache.org/jira/browse/HIVE-19251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16444973#comment-16444973 ] Vihang Karajgaonkar edited comment on HIVE-19251 at 4/19/18 11:19 PM: -- Do you think it would be more useful to fetch events in batches in case clients are requesting too many events? was (Author: vihangk1): Do you think it would be more useful to do fetch events in batches in case clients are requesting too many events? > ObjectStore.getNextNotification with LIMIT should use less memory > - > > Key: HIVE-19251 > URL: https://issues.apache.org/jira/browse/HIVE-19251 > Project: Hive > Issue Type: Bug > Components: repl, Standalone Metastore >Reporter: Daniel Dai >Assignee: Daniel Dai >Priority: Major > Attachments: HIVE-19251.1.patch > > > Experience OOM when Hive metastore try to retrieve huge amount of > notification logs even there's limit clause. Hive shall only retrieve > necessary rows. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19251) ObjectStore.getNextNotification with LIMIT should use less memory
[ https://issues.apache.org/jira/browse/HIVE-19251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16444973#comment-16444973 ] Vihang Karajgaonkar commented on HIVE-19251: Do you think it would be more useful to fetch events in batches in case clients are requesting too many events? > ObjectStore.getNextNotification with LIMIT should use less memory > - > > Key: HIVE-19251 > URL: https://issues.apache.org/jira/browse/HIVE-19251 > Project: Hive > Issue Type: Bug > Components: repl, Standalone Metastore >Reporter: Daniel Dai >Assignee: Daniel Dai >Priority: Major > Attachments: HIVE-19251.1.patch > > > Experienced OOM when the Hive metastore tries to retrieve a huge number of > notification logs even when there is a LIMIT clause. Hive should only retrieve the > necessary rows. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-17657) export/import for MM tables is broken
[ https://issues.apache.org/jira/browse/HIVE-17657?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16444974#comment-16444974 ] Sergey Shelukhin commented on HIVE-17657: - The test already covers ignoring delta directories that are not needed based on AcidState... deltas can be invalid for different reasons but this would be a test for AcidState :) > export/import for MM tables is broken > - > > Key: HIVE-17657 > URL: https://issues.apache.org/jira/browse/HIVE-17657 > Project: Hive > Issue Type: Sub-task > Components: Transactions >Reporter: Eugene Koifman >Assignee: Sergey Shelukhin >Priority: Major > Labels: mm-gap-2 > Attachments: HIVE-17657.01.patch, HIVE-17657.patch > > > there is mm_exim.q but it's not clear from the tests what file structure it > creates > On import the txnids in the directory names would have to be remapped if > importing to a different cluster. Perhaps export can be smart and export > highest base_x and accretive deltas (minus aborted ones). Then import can > ...? It would have to remap txn ids from the archive to new txn ids. This > would then mean that import is made up of several transactions rather than 1 > atomic op. (all locks must belong to a transaction) > One possibility is to open a new txn for each dir in the archive (where > start/end txn of file name is the same) and commit all of them at once (need > new TMgr API for that). This assumes using a shared lock (if any!) and thus > allows other inserts (not related to import) to occur. > What if you have delta_6_9, such as a result of concatenate? If we stipulate > that this must mean that there is no delta_6_6 or any other "obsolete" delta > in the archive we can map it to a new single txn delta_x_x. > Add read_only mode for tables (useful in general, may be needed for upgrade > etc) and use that to make the above atomic. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
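One way to picture the directory-name remapping the description discusses — a range delta such as delta_6_9 from an exported archive collapsing to a single-transaction delta under a freshly allocated txn id on import — is the following sketch. It models only the naming convention from the discussion, not Hive's actual export/import logic:

```java
// Hypothetical sketch of the remapping idea in the description: whatever
// txn range a delta directory carried in the archive, the imported copy
// becomes a single-transaction delta (delta_x_x) owned by a new txn id
// allocated on the destination cluster. Not Hive's actual import code.
public class DeltaRemap {
    static String remap(String deltaDir, long newTxnId) {
        if (!deltaDir.startsWith("delta_")) {
            throw new IllegalArgumentException("not a delta dir: " + deltaDir);
        }
        // The archive's txn ids (e.g. 6..9) are meaningless on the new
        // cluster, so the range collapses to the freshly opened txn.
        return "delta_" + newTxnId + "_" + newTxnId;
    }

    public static void main(String[] args) {
        System.out.println(remap("delta_6_9", 42)); // delta_42_42
    }
}
```

This matches the stipulation in the description that a range delta in the archive implies no obsolete deltas inside that range, which is what makes the collapse to delta_x_x safe.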
[jira] [Commented] (HIVE-19214) High throughput ingest ORC format
[ https://issues.apache.org/jira/browse/HIVE-19214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16444969#comment-16444969 ] Eugene Koifman commented on HIVE-19214: --- this makes changes to OrcRecordUpdate w/o checking if it's doing streaming ingest - is that what we want? > High throughput ingest ORC format > - > > Key: HIVE-19214 > URL: https://issues.apache.org/jira/browse/HIVE-19214 > Project: Hive > Issue Type: Sub-task > Components: Streaming >Affects Versions: 3.0.0, 3.1.0 >Reporter: Gopal V >Assignee: Prasanth Jayachandran >Priority: Major > Attachments: HIVE-19214.1.patch, HIVE-19214.2.patch > > > Create delta files with all ORC overhead disabled (no index, no compression, > no dictionary). Compactor will recreate the orc files with index, compression > and dictionary encoding. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19137) orcfiledump doesn't print hive.acid.version value
[ https://issues.apache.org/jira/browse/HIVE-19137?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16444967#comment-16444967 ] Eugene Koifman commented on HIVE-19137: --- [~ikryvenko], could you also create a patch for branch-3 with this change? We can't check this into 3.1 unless it also goes into 3.0. > orcfiledump doesn't print hive.acid.version value > - > > Key: HIVE-19137 > URL: https://issues.apache.org/jira/browse/HIVE-19137 > Project: Hive > Issue Type: Bug > Components: Transactions >Affects Versions: 3.0.0 >Reporter: Eugene Koifman >Assignee: Igor Kryvenko >Priority: Major > Attachments: HIVE-19137.01.patch, HIVE-19137.02.patch, > HIVE-19137.03.patch, HIVE-19137.04.patch > > > HIVE-18659 added hive.acid.version in the file footer. > orcfiledump prints something like > {noformat} > User Metadata: > hive.acid.key.index=1,536870912,1; > hive.acid.stats=2,0,0 > hive.acid.version= > {noformat} > probably because > {noformat} > public static void setAcidVersionInDataFile(Writer writer) { > //so that we know which version wrote the file > ByteBuffer bf = ByteBuffer.allocate(4).putInt(ORC_ACID_VERSION); > bf.rewind(); //don't ask - some ByteBuffer weirdness. w/o this, an empty > buffer is written > writer.addUserMetadata(ACID_VERSION_KEY, bf); > } > {noformat} > use > {{UTF8.encode()}} instead -- This message was sent by Atlassian JIRA (v7.6.3#76005)
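The "ByteBuffer weirdness" in the snippet quoted above is just position bookkeeping: putInt advances the buffer's position to its limit, so any consumer that reads from position to limit sees zero remaining bytes until the buffer is rewound. A standalone demonstration of that behavior:

```java
import java.nio.ByteBuffer;

// Demonstrates why the quoted Hive code needs bf.rewind(): after putInt
// the position sits at the limit, so anything reading the "remaining"
// bytes sees an empty buffer until the position is reset to 0.
public class BufferRewind {
    static int remainingAfterPut() {
        // putInt moves position from 0 to 4; remaining = limit - position.
        return ByteBuffer.allocate(4).putInt(2).remaining();
    }

    static int remainingAfterRewind() {
        ByteBuffer bf = ByteBuffer.allocate(4).putInt(2);
        bf.rewind(); // position back to 0, limit untouched
        return bf.remaining();
    }

    public static void main(String[] args) {
        System.out.println(remainingAfterPut());    // 0: looks "empty"
        ByteBuffer bf = ByteBuffer.allocate(4).putInt(2);
        bf.rewind();
        System.out.println(bf.remaining());         // 4: int visible again
        System.out.println(bf.getInt());            // 2
    }
}
```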
[jira] [Assigned] (HIVE-19198) Few flaky hcatalog tests
[ https://issues.apache.org/jira/browse/HIVE-19198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Thejas M Nair reassigned HIVE-19198: Assignee: Daniel Dai (was: Thejas M Nair) > Few flaky hcatalog tests > > > Key: HIVE-19198 > URL: https://issues.apache.org/jira/browse/HIVE-19198 > Project: Hive > Issue Type: Sub-task >Reporter: Ashutosh Chauhan >Assignee: Daniel Dai >Priority: Major > > TestPermsGrp : Consider removing this since hcat cli is not widely used. > TestHCatPartitionPublish.testPartitionPublish > TestHCatMultiOutputFormat.testOutputFormat -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19251) ObjectStore.getNextNotification with LIMIT should use less memory
[ https://issues.apache.org/jira/browse/HIVE-19251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16444960#comment-16444960 ] Thejas M Nair commented on HIVE-19251: -- +1 > ObjectStore.getNextNotification with LIMIT should use less memory > - > > Key: HIVE-19251 > URL: https://issues.apache.org/jira/browse/HIVE-19251 > Project: Hive > Issue Type: Bug > Components: repl, Standalone Metastore >Reporter: Daniel Dai >Assignee: Daniel Dai >Priority: Major > Attachments: HIVE-19251.1.patch > > > Experience OOM when Hive metastore try to retrieve huge amount of > notification logs even there's limit clause. Hive shall only retrieve > necessary rows. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-16861) MapredParquetOutputFormat - Save Some Array Allocations
[ https://issues.apache.org/jira/browse/HIVE-16861?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16444958#comment-16444958 ] Hive QA commented on HIVE-16861: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Findbugs executables are not available. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 58s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 15s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 43s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 5s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 43s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 16s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 16s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 45s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 3s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 15s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 17m 21s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-10353/dev-support/hive-personality.sh | | git revision | master / 9f15e22 | | Default Java | 1.8.0_111 | | modules | C: ql U: ql | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-10353/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. > MapredParquetOutputFormat - Save Some Array Allocations > --- > > Key: HIVE-16861 > URL: https://issues.apache.org/jira/browse/HIVE-16861 > Project: Hive > Issue Type: Improvement >Affects Versions: 3.0.0 >Reporter: BELUGA BEHR >Assignee: BELUGA BEHR >Priority: Trivial > Attachments: HIVE-16861.1.patch, HIVE-16861.2.patch, > HIVE-16861.3.patch > > > Remove superfluous array allocations from {{MapredParquetOutputFormat}} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-17657) export/import for MM tables is broken
[ https://issues.apache.org/jira/browse/HIVE-17657?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16444952#comment-16444952 ] Eugene Koifman commented on HIVE-17657: --- hive.test.rollbacktxn can be used to rollback w/o multi-stmt txns > export/import for MM tables is broken > - > > Key: HIVE-17657 > URL: https://issues.apache.org/jira/browse/HIVE-17657 > Project: Hive > Issue Type: Sub-task > Components: Transactions >Reporter: Eugene Koifman >Assignee: Sergey Shelukhin >Priority: Major > Labels: mm-gap-2 > Attachments: HIVE-17657.01.patch, HIVE-17657.patch > > > there is mm_exim.q but it's not clear from the tests what file structure it > creates > On import the txnids in the directory names would have to be remapped if > importing to a different cluster. Perhaps export can be smart and export > highest base_x and accretive deltas (minus aborted ones). Then import can > ...? It would have to remap txn ids from the archive to new txn ids. This > would then mean that import is made up of several transactions rather than 1 > atomic op. (all locks must belong to a transaction) > One possibility is to open a new txn for each dir in the archive (where > start/end txn of file name is the same) and commit all of them at once (need > new TMgr API for that). This assumes using a shared lock (if any!) and thus > allows other inserts (not related to import) to occur. > What if you have delta_6_9, such as a result of concatenate? If we stipulate > that this must mean that there is no delta_6_6 or any other "obsolete" delta > in the archive we can map it to a new single txn delta_x_x. > Add read_only mode for tables (useful in general, may be needed for upgrade > etc) and use that to make the above atomic. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19077) Handle duplicate ptests requests standing in queue at the same time
[ https://issues.apache.org/jira/browse/HIVE-19077?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16444937#comment-16444937 ] Deepak Jaiswal commented on HIVE-19077: --- Hi Adam, Thanks for taking this up again. I like your proposal. Can you please describe what kind of tag you are suggesting? If I may, I suggest that we inspect the size of the patch and decide to skip it if it is large. We can define what counts as large; IMO, any patch larger than 1MB could be considered one. Feel free to go ahead and make that change if that sounds good. > Handle duplicate ptests requests standing in queue at the same time > --- > > Key: HIVE-19077 > URL: https://issues.apache.org/jira/browse/HIVE-19077 > Project: Hive > Issue Type: Improvement > Components: Testing Infrastructure >Reporter: Adam Szita >Assignee: Adam Szita >Priority: Blocker > Fix For: 3.1.0 > > Attachments: HIVE-19077.0.patch, HIVE-19077.1.patch, > HIVE-19077.sslFix.patch > > > I've been keeping an eye on our {{PreCommit-HIVE-Build}} job, and what I > noticed is that sometimes huge queues can build up that contain jiras more > than once. (Yesterday I saw a queue of 40 with 31 distinct jiras.) > A simple scenario is that I upload a patch, it gets queued for ptest (an already > long queue), and 3 hours later I update it, re-upload and re-queue. Now > the current ptest infra seems to be smart enough to always deal with the > latest patch, so what will happen is that the same patch will be tested 2 > times (with ~3 hours diff), most probably with the same result. > I propose we do some deduplication - if ptest starts running the request for > Jira X, then it can take a look at the current queue and see if X is there > again. If so, it can skip it for now; it will be picked up later anyway. 
> In practice this means that if you reconsider your patch and update it, your > original place in the queue will be gone (as a penalty for changing it), > but overall it saves resources for the whole community. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
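The de-duplication rule proposed in the description — when a jira is dequeued for testing, skip it if the same jira appears again later in the queue, since the later (newer-patch) entry will be tested instead — can be sketched as follows. The names are invented; the real ptest infrastructure tracks attachment ids, not bare strings:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Deque;
import java.util.List;

// Sketch of the proposed queue de-duplication: a jira whose id recurs
// later in the queue is skipped now, because the later entry (carrying
// the newest patch) will be picked up anyway. Hypothetical names only.
public class QueueDedup {
    static List<String> drain(Deque<String> queue) {
        List<String> run = new ArrayList<>();
        while (!queue.isEmpty()) {
            String jira = queue.poll();
            if (queue.contains(jira)) {
                continue; // a newer request for the same jira is pending
            }
            run.add(jira); // last occurrence wins and actually runs
        }
        return run;
    }

    public static void main(String[] args) {
        Deque<String> q =
            new ArrayDeque<>(Arrays.asList("A", "B", "A", "C", "B"));
        System.out.println(drain(q)); // [A, C, B]
    }
}
```

Note how the skipped jira loses its original slot, exactly the "penalty" trade-off the description calls out: each jira still runs exactly once, at its latest queue position.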
[jira] [Commented] (HIVE-19251) ObjectStore.getNextNotification with LIMIT should use less memory
[ https://issues.apache.org/jira/browse/HIVE-19251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16444933#comment-16444933 ] Daniel Dai commented on HIVE-19251: --- This patch only changes the implementation of ObjectStore.getNextNotification; there's no simple way to test it. And limited getNextNotification is already covered by TestDbNotificationListener.filterWithMax, so I didn't include a test case. > ObjectStore.getNextNotification with LIMIT should use less memory > - > > Key: HIVE-19251 > URL: https://issues.apache.org/jira/browse/HIVE-19251 > Project: Hive > Issue Type: Bug > Components: repl, Standalone Metastore >Reporter: Daniel Dai >Assignee: Daniel Dai >Priority: Major > Attachments: HIVE-19251.1.patch > > > Experienced OOM when the Hive metastore tries to retrieve a huge number of > notification logs even when there is a LIMIT clause. Hive should only retrieve the > necessary rows. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19124) implement a basic major compactor for MM tables
[ https://issues.apache.org/jira/browse/HIVE-19124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16444929#comment-16444929 ] Eugene Koifman commented on HIVE-19124: --- left some RB comments (btw, it seems to have old patch 3) > implement a basic major compactor for MM tables > --- > > Key: HIVE-19124 > URL: https://issues.apache.org/jira/browse/HIVE-19124 > Project: Hive > Issue Type: Bug > Components: Transactions >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin >Priority: Major > Labels: mm-gap-2 > Attachments: HIVE-19124.01.patch, HIVE-19124.02.patch, > HIVE-19124.03.patch, HIVE-19124.03.patch, HIVE-19124.patch > > > For now, it will run a query directly and only major compactions will be > supported. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19251) ObjectStore.getNextNotification with LIMIT should use less memory
[ https://issues.apache.org/jira/browse/HIVE-19251?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Daniel Dai updated HIVE-19251: -- Attachment: HIVE-19251.1.patch > ObjectStore.getNextNotification with LIMIT should use less memory > - > > Key: HIVE-19251 > URL: https://issues.apache.org/jira/browse/HIVE-19251 > Project: Hive > Issue Type: Bug > Components: repl, Standalone Metastore >Reporter: Daniel Dai >Assignee: Daniel Dai >Priority: Major > Attachments: HIVE-19251.1.patch > > > Experience OOM when Hive metastore try to retrieve huge amount of > notification logs even there's limit clause. Hive shall only retrieve > necessary rows. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19219) Incremental REPL DUMP should throw error if requested events are cleaned-up.
[ https://issues.apache.org/jira/browse/HIVE-19219?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16444922#comment-16444922 ] Hive QA commented on HIVE-19219: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12919835/HIVE-19219.05.patch {color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 37 failed/errored test(s), 14279 tests executed *Failed tests:* {noformat} TestMinimrCliDriver - did not produce a TEST-*.xml file (likely timed out) (batchId=93) [infer_bucket_sort_num_buckets.q,infer_bucket_sort_reducers_power_two.q,parallel_orderby.q,bucket_num_reducers_acid.q,infer_bucket_sort_map_operators.q,infer_bucket_sort_merge.q,root_dir_external_table.q,infer_bucket_sort_dyn_part.q,udf_using.q,bucket_num_reducers_acid2.q] TestNonCatCallsWithCatalog - did not produce a TEST-*.xml file (likely timed out) (batchId=217) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[llap_smb] (batchId=92) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[parquet_vectorization_0] (batchId=17) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[results_cache_invalidation2] (batchId=39) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[tez_join_hash] (batchId=54) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[results_cache_invalidation2] (batchId=163) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] (batchId=163) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[tez_smb_1] (batchId=171) org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[bucketizedhiveinputformat] (batchId=183) org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver[explainanalyze_5] (batchId=105) org.apache.hadoop.hive.cli.TestNegativeMinimrCliDriver.testCliDriver[cluster_tasklog_retrieval] (batchId=98) 
org.apache.hadoop.hive.cli.TestNegativeMinimrCliDriver.testCliDriver[mapreduce_stack_trace] (batchId=98) org.apache.hadoop.hive.cli.TestNegativeMinimrCliDriver.testCliDriver[mapreduce_stack_trace_turnoff] (batchId=98) org.apache.hadoop.hive.cli.TestNegativeMinimrCliDriver.testCliDriver[minimr_broken_pipe] (batchId=98) org.apache.hadoop.hive.cli.control.TestDanglingQOuts.checkDanglingQOut (batchId=225) org.apache.hadoop.hive.metastore.client.TestPrimaryKey.addNoSuchTable[Embedded] (batchId=211) org.apache.hadoop.hive.metastore.client.TestPrimaryKey.createGetDrop2Column[Embedded] (batchId=211) org.apache.hadoop.hive.metastore.client.TestPrimaryKey.createGetDrop[Embedded] (batchId=211) org.apache.hadoop.hive.metastore.client.TestPrimaryKey.createTableWithConstraintsPkInOtherCatalog[Embedded] (batchId=211) org.apache.hadoop.hive.metastore.client.TestPrimaryKey.createTableWithConstraintsPk[Embedded] (batchId=211) org.apache.hadoop.hive.metastore.client.TestPrimaryKey.doubleAddPrimaryKey[Embedded] (batchId=211) org.apache.hadoop.hive.metastore.client.TestPrimaryKey.dropNoSuchCatalog[Embedded] (batchId=211) org.apache.hadoop.hive.metastore.client.TestPrimaryKey.dropNoSuchConstraint[Embedded] (batchId=211) org.apache.hadoop.hive.metastore.client.TestPrimaryKey.dropNoSuchDatabase[Embedded] (batchId=211) org.apache.hadoop.hive.metastore.client.TestPrimaryKey.dropNoSuchTable[Embedded] (batchId=211) org.apache.hadoop.hive.metastore.client.TestPrimaryKey.getNoSuchCatalog[Embedded] (batchId=211) org.apache.hadoop.hive.metastore.client.TestPrimaryKey.getNoSuchDb[Embedded] (batchId=211) org.apache.hadoop.hive.metastore.client.TestPrimaryKey.getNoSuchTable[Embedded] (batchId=211) org.apache.hadoop.hive.metastore.client.TestPrimaryKey.inOtherCatalog[Embedded] (batchId=211) org.apache.hadoop.hive.ql.TestAcidOnTez.testAcidInsertWithRemoveUnion (batchId=228) org.apache.hadoop.hive.ql.TestAcidOnTez.testCtasTezUnion (batchId=228) 
org.apache.hadoop.hive.ql.TestAcidOnTez.testNonStandardConversion01 (batchId=228) org.apache.hadoop.hive.ql.TestMTQueries.testMTQueries1 (batchId=232) org.apache.hadoop.hive.ql.TestTxnCommandsForMmTable.testSnapshotIsolationWithAbortedTxnOnMmTable (batchId=264) org.apache.hive.beeline.TestBeeLineWithArgs.testQueryProgressParallel (batchId=235) org.apache.hive.minikdc.TestJdbcWithMiniKdcCookie.testCookieNegative (batchId=254) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/10352/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/10352/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-10352/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 37 tests failed {noforma
[jira] [Commented] (HIVE-19230) Schema column width inconsistency in Oracle
[ https://issues.apache.org/jira/browse/HIVE-19230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16444901#comment-16444901 ] Yongzhi Chen commented on HIVE-19230: - LGTM +1 > Schema column width inconsistency in Oracle > > > Key: HIVE-19230 > URL: https://issues.apache.org/jira/browse/HIVE-19230 > Project: Hive > Issue Type: Bug > Components: Hive >Affects Versions: 2.1.0 >Reporter: Naveen Gangam >Assignee: Naveen Gangam >Priority: Minor > Attachments: HIVE-19230.patch > > > This is for oracle only. Does not appear to be an issue with other DBs. When > you upgrade hive schema from 2.1.0 to hive 3.0.0, the width of > TXN_COMPONENTS.TC_TABLE is 256 and COMPLETED_TXN_COMPONENTS.CTC_TABLE is 128. > But if you install hive 3.0 schema directly, their widths are 128 and 256 > respectively. This is consistent with schemas for other databases. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Updated] (HIVE-19251) ObjectStore.getNextNotification with LIMIT should use less memory
[ https://issues.apache.org/jira/browse/HIVE-19251?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Daniel Dai updated HIVE-19251: -- Attachment: (was: HIVE-19251.1.patch) > ObjectStore.getNextNotification with LIMIT should use less memory > - > > Key: HIVE-19251 > URL: https://issues.apache.org/jira/browse/HIVE-19251 > Project: Hive > Issue Type: Bug > Components: repl, Standalone Metastore >Reporter: Daniel Dai >Assignee: Daniel Dai >Priority: Major > > Experience OOM when Hive metastore try to retrieve huge amount of > notification logs even there's limit clause. Hive shall only retrieve > necessary rows. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Commented] (HIVE-19233) Add utility for acid 1.0 to 2.0 migration
[ https://issues.apache.org/jira/browse/HIVE-19233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16444871#comment-16444871 ] Eugene Koifman commented on HIVE-19233: --- conf.set(ConfVars.HIVE_QUOTEDID_SUPPORT.varname, "column"); > Add utility for acid 1.0 to 2.0 migration > - > > Key: HIVE-19233 > URL: https://issues.apache.org/jira/browse/HIVE-19233 > Project: Hive > Issue Type: Bug > Components: Transactions >Affects Versions: 3.0.0 >Reporter: Eugene Koifman >Assignee: Eugene Koifman >Priority: Major > Attachments: HIVE-19233.01.patch, HIVE-19233.02.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005)