[jira] [Commented] (HIVE-18031) Support replication for Alter Database operation.
[ https://issues.apache.org/jira/browse/HIVE-18031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16298001#comment-16298001 ] Hive QA commented on HIVE-18031: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Findbugs executables are not available. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 23s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 5m 29s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 20s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 15s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 20s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 20s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 47s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 24s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 24s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 21s{color} | {color:red} standalone-metastore: The patch generated 25 new + 804 unchanged - 0 fixed = 829 total (was 804) {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 33s{color} | {color:red} ql: The patch generated 7 new + 541 unchanged - 0 fixed = 548 total (was 541) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 18s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 12s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 22m 38s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus/dev-support/hive-personality.sh | | git revision | master / 4ec47a6 | | Default Java | 1.8.0_111 | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-8335/yetus/diff-checkstyle-standalone-metastore.txt | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-8335/yetus/diff-checkstyle-ql.txt | | modules | C: standalone-metastore ql hcatalog/server-extensions itests/hive-unit U: . | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-8335/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. > Support replication for Alter Database operation. > - > > Key: HIVE-18031 > URL: https://issues.apache.org/jira/browse/HIVE-18031 > Project: Hive > Issue Type: Sub-task > Components: HiveServer2, repl >Affects Versions: 3.0.0 >Reporter: Sankar Hariappan >Assignee: Sankar Hariappan > Labels: DR, pull-request-available, replication > Fix For: 3.0.0 > > Attachments: HIVE-18031.01.patch > > > Currently alter database operations to alter the database properties or > description are not generating any events due to which it is not getting > replicated. > Need to add an event for this. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HIVE-18148) NPE in SparkDynamicPartitionPruningResolver
[ https://issues.apache.org/jira/browse/HIVE-18148?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16297994#comment-16297994 ] Rui Li commented on HIVE-18148: --- Both the target table size and the DPP sink output size (smaller output means more partitions are pruned) should be taken into account, if we want to base the decision on statistics. Besides, we also need to consider the cost of re-computing, as I mentioned above. Let's put that as a follow-up. > NPE in SparkDynamicPartitionPruningResolver > --- > > Key: HIVE-18148 > URL: https://issues.apache.org/jira/browse/HIVE-18148 > Project: Hive > Issue Type: Bug > Components: Spark >Reporter: Rui Li >Assignee: Rui Li > Attachments: HIVE-18148.1.patch, HIVE-18148.2.patch > > > The stack trace is: > {noformat} > 2017-11-27T10:32:38,752 ERROR [e6c8aab5-ddd2-461d-b185-a7597c3e7519 main] > ql.Driver: FAILED: NullPointerException null > java.lang.NullPointerException > at > org.apache.hadoop.hive.ql.optimizer.physical.SparkDynamicPartitionPruningResolver$SparkDynamicPartitionPruningDispatcher.dispatch(SparkDynamicPartitionPruningResolver.java:100) > at > org.apache.hadoop.hive.ql.lib.TaskGraphWalker.dispatch(TaskGraphWalker.java:111) > at > org.apache.hadoop.hive.ql.lib.TaskGraphWalker.walk(TaskGraphWalker.java:180) > at > org.apache.hadoop.hive.ql.lib.TaskGraphWalker.startWalking(TaskGraphWalker.java:125) > at > org.apache.hadoop.hive.ql.optimizer.physical.SparkDynamicPartitionPruningResolver.resolve(SparkDynamicPartitionPruningResolver.java:74) > at > org.apache.hadoop.hive.ql.parse.spark.SparkCompiler.optimizeTaskPlan(SparkCompiler.java:568) > {noformat} > At this stage, there shouldn't be a DPP sink whose target map work is null. > The root cause seems to be a malformed operator tree generated by > SplitOpTreeForDPP. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HIVE-18317) Improve error messages in TransactionValidationListerner
[ https://issues.apache.org/jira/browse/HIVE-18317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16297981#comment-16297981 ] Hive QA commented on HIVE-18317: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12902943/HIVE-18317.02.patch {color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 17 failed/errored test(s), 11535 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppd_join5] (batchId=35) org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[llap_smb] (batchId=151) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucketsortoptimize_insert_2] (batchId=152) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[hybridgrace_hashjoin_2] (batchId=157) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata] (batchId=165) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid] (batchId=169) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid_fast] (batchId=160) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] (batchId=160) org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[authorization_part] (batchId=93) org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[create_not_acid] (batchId=93) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[auto_sortmerge_join_10] (batchId=138) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[bucketsortoptimize_insert_7] (batchId=128) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[ppd_join5] (batchId=120) org.apache.hadoop.hive.cli.control.TestDanglingQOuts.checkDanglingQOut (batchId=209) org.apache.hadoop.hive.metastore.TestEmbeddedHiveMetaStore.testTransactionalValidation (batchId=214) org.apache.hadoop.hive.ql.parse.TestReplicationScenarios.testConstraints (batchId=226) org.apache.hive.jdbc.TestRestrictedList.org.apache.hive.jdbc.TestRestrictedList (batchId=236) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/8334/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/8334/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-8334/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 17 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12902943 - PreCommit-HIVE-Build > Improve error messages in TransactionValidationListerner > > > Key: HIVE-18317 > URL: https://issues.apache.org/jira/browse/HIVE-18317 > Project: Hive > Issue Type: Sub-task > Components: Metastore, Transactions >Reporter: Eugene Koifman >Assignee: Eugene Koifman > Attachments: HIVE-18317.01.patch, HIVE-18317.02.patch > > -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HIVE-18303) ZooKeeperHiveLockManager should close after lock release in releaseLocksAndCommitOrRollback
[ https://issues.apache.org/jira/browse/HIVE-18303?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] J.P Feng updated HIVE-18303: Description: I found an exception: ZooKeeperHiveLockManager: Failed to release ZooKeeper lock: java.lang.IllegalStateException: instance must be started before calling this method because in ShutdownHookManager, the priority of CuratorFrameworkSingleton.closeAndReleaseInstance is 10, but in Driver.releaseLocksAndCommitOrRollback, it's 0. So locks are released after CuratorFramework is closed, which may cause such an exception. was: I found an exception: ZooKeeperHiveLockManager: Failed to release ZooKeeper lock: java.lang.IllegalStateException: instance must be started before calling this method because in ShutdownHookManager, the priority of CuratorFrameworkSingleton.closeAndReleaseInstance is 10, but in Driver.releaseLocksAndCommitOrRollback, it's 0. So locks are released after CuratorFramework is closed, which may cause such an exception. This is how to reproduce it: I add a hook of ExecuteWithHookContext or PreExecute, which simply sleeps, like: @Override public void run(HookContext hookContext) throws Exception { int max_i = 6; try { for ( int i = 0; i < max_i ; i ++) { LOG.info("try to sleep, for i is => " + i); Thread.sleep(5000); } } catch (Exception e) { LOG.error(e.getMessage(), e); } } and run a SQL query, for example: select count(1) from hive_test.time_test; Then enter Ctrl+C twice to kill the Hive client. The Hive client will not exit, and the above error is thrown. > ZooKeeperHiveLockManager should close after lock release in > releaseLocksAndCommitOrRollback > --- > > Key: HIVE-18303 > URL: https://issues.apache.org/jira/browse/HIVE-18303 > Project: Hive > Issue Type: Bug > Components: Hive >Affects Versions: 2.1.1, 2.2.0 > Environment: jdk 1.7, centos >Reporter: J.P Feng > > I found an exception: > ZooKeeperHiveLockManager: Failed to release ZooKeeper lock: > java.lang.IllegalStateException: instance must be started before calling this > method > because in ShutdownHookManager, the priority of > CuratorFrameworkSingleton.closeAndReleaseInstance is 10, but in > Driver.releaseLocksAndCommitOrRollback, it's 0. So locks are released after > CuratorFramework is closed, which may cause such an exception. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
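For reference, the hook from the reproduction steps above, reconstructed as a compilable class (the class and logger names are illustrative; the loop only keeps the query alive for ~30 seconds so Ctrl+C can be pressed while the ZooKeeper locks are still held):

{code}
import org.apache.hadoop.hive.ql.hooks.ExecuteWithHookContext;
import org.apache.hadoop.hive.ql.hooks.HookContext;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class SleepingHook implements ExecuteWithHookContext {
  private static final Logger LOG = LoggerFactory.getLogger(SleepingHook.class);

  @Override
  public void run(HookContext hookContext) throws Exception {
    int max_i = 6;
    try {
      // Sleep 6 x 5s = 30s, leaving time to interrupt the client
      // while the query still holds its ZooKeeper locks.
      for (int i = 0; i < max_i; i++) {
        LOG.info("try to sleep, for i is => " + i);
        Thread.sleep(5000);
      }
    } catch (Exception e) {
      LOG.error(e.getMessage(), e);
    }
  }
}
{code}

Note that Hadoop-style shutdown hook managers run hooks in decreasing priority order, so the priority-10 hook (closing the CuratorFramework) fires before the priority-0 hook (releasing the locks), which is exactly the ordering problem described.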
[jira] [Commented] (HIVE-18289) Fix jar dependency when enable rdd cache in Hive on Spark
[ https://issues.apache.org/jira/browse/HIVE-18289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16297977#comment-16297977 ] liyunzhang commented on HIVE-18289: --- if running in parquet, the exception is {code} Job failed with java.lang.NoSuchMethodException: org.apache.hadoop.io.ArrayWritable.() FAILED: Execution Error, return code 3 from org.apache.hadoop.hive.ql.exec.spark.SparkTask. java.util.concurrent.ExecutionException: Exception thrown by job at org.apache.spark.JavaFutureActionWrapper.getImpl(FutureAction.scala:272) at org.apache.spark.JavaFutureActionWrapper.get(FutureAction.scala:277) at org.apache.hive.spark.client.RemoteDriver$JobWrapper.call(RemoteDriver.java:362) at org.apache.hive.spark.client.RemoteDriver$JobWrapper.call(RemoteDriver.java:323) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) Caused by: org.apache.spark.SparkException: Job aborted due to stage failure: Task 2 in stage 4.0 failed 4 times, most recent failure: Lost task 2.3 in stage 4.0 (TID 59, bdpe38): java.lang.RuntimeException: java.lang.NoSuchMethodException: org.apache.hadoop.io.ArrayWritable.() at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:134) at org.apache.hadoop.io.WritableUtils.clone(WritableUtils.java:217) at org.apache.hadoop.hive.ql.exec.spark.MapInput$CopyFunction.call(MapInput.java:85) at org.apache.hadoop.hive.ql.exec.spark.MapInput$CopyFunction.call(MapInput.java:72) at org.apache.spark.api.java.JavaPairRDD$$anonfun$pairFunToScalaFun$1.apply(JavaPairRDD.scala:1031) at org.apache.spark.api.java.JavaPairRDD$$anonfun$pairFunToScalaFun$1.apply(JavaPairRDD.scala:1031) at scala.collection.Iterator$$anon$11.next(Iterator.scala:409) at org.apache.spark.storage.memory.MemoryStore.putIteratorAsValues(MemoryStore.scala:214) at org.apache.spark.storage.BlockManager$$anonfun$doPutIterator$1.apply(BlockManager.scala:919) at org.apache.spark.storage.BlockManager$$anonfun$doPutIterator$1.apply(BlockManager.scala:910) at org.apache.spark.storage.BlockManager.doPut(BlockManager.scala:866) at org.apache.spark.storage.BlockManager.doPutIterator(BlockManager.scala:910) at org.apache.spark.storage.BlockManager.getOrElseUpdate(BlockManager.scala:668) at org.apache.spark.rdd.RDD.getOrCompute(RDD.scala:330) at org.apache.spark.rdd.RDD.iterator(RDD.scala:281) at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38) at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319) at org.apache.spark.rdd.RDD.iterator(RDD.scala:283) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:79) at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:47) at org.apache.spark.scheduler.Task.run(Task.scala:85) at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) Caused by: java.lang.NoSuchMethodException: org.apache.hadoop.io.ArrayWritable.() at java.lang.Class.getConstructor0(Class.java:3082) at java.lang.Class.getDeclaredConstructor(Class.java:2178) at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:128) ... 
24 more Driver stacktrace: at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1450) at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1438) at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1437) at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59) at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48) at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1437) at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:811) at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:811) at scala.Option.foreach(Option.scala:257) at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:811) at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.sca
[jira] [Commented] (HIVE-18317) Improve error messages in TransactionValidationListerner
[ https://issues.apache.org/jira/browse/HIVE-18317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16297956#comment-16297956 ] Hive QA commented on HIVE-18317: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Findbugs executables are not available. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 1s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 23s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 5m 45s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 9s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 28s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 17s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 22s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 10s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 11s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 11s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 15s{color} | {color:red} standalone-metastore: The patch generated 3 new + 26 unchanged - 2 fixed = 29 total (was 28) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 12s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 12s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 14m 59s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus/dev-support/hive-personality.sh | | git revision | master / 4ec47a6 | | Default Java | 1.8.0_111 | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-8334/yetus/diff-checkstyle-standalone-metastore.txt | | modules | C: standalone-metastore itests/hive-unit U: . | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-8334/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. 
> Improve error messages in TransactionValidationListerner > > > Key: HIVE-18317 > URL: https://issues.apache.org/jira/browse/HIVE-18317 > Project: Hive > Issue Type: Sub-task > Components: Metastore, Transactions >Reporter: Eugene Koifman >Assignee: Eugene Koifman > Attachments: HIVE-18317.01.patch, HIVE-18317.02.patch > > -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HIVE-18290) hbase backed table creation fails where no column comments present
[ https://issues.apache.org/jira/browse/HIVE-18290?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated HIVE-18290: -- Labels: pull-request-available (was: ) > hbase backed table creation fails where no column comments present > -- > > Key: HIVE-18290 > URL: https://issues.apache.org/jira/browse/HIVE-18290 > Project: Hive > Issue Type: Bug > Components: HiveServer2 >Affects Versions: 3.0.0 >Reporter: anishek >Assignee: anishek > Labels: pull-request-available > Fix For: 3.0.0 > > Attachments: HIVE-18290.0.patch > > > Create Hbase Table: > > create 'hbase_avro_table', 'test_col_fam', 'test_col' > Create Hive Table: > = > CREATE EXTERNAL TABLE test_hbase_avro2 > ROW FORMAT SERDE 'org.apache.hadoop.hive.hbase.HBaseSerDe' > STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler' > WITH SERDEPROPERTIES ( > "hbase.columns.mapping" = ":key,test_col_fam:test_col", > "test_col_fam.test_col.serialization.type" = "avro", > "test_col_fam.test_col.avro.schema.url" = > "hdfs://localhost:8020/user/hive/schema.avsc") > TBLPROPERTIES ( > "hbase.table.name" = "hbase_avro_table", > "hbase.mapred.output.outputtable" = "hbase_avro_table", > "hbase.struct.autogenerate"="true", > "avro.schema.literal"='{ > "type": "record", > "name": "test_hbase_avro", > "fields": [ > { "name":"test_col", "type":"string"} > ] > }'); > schema.avsc > === > {code} > { > "type": "record", > "name": "test_hbase_avro", > "fields": [ > { "name":"test_col", "type":"string"} > ] > } > {code} > throws exception > {code} > java.lang.ArrayIndexOutOfBoundsException: 1 > at java.util.Arrays$ArrayList.get(Arrays.java:3841) ~[?:1.8.0_77] > at > org.apache.hadoop.hive.serde2.BaseStructObjectInspector.init(BaseStructObjectInspector.java:106) > ~[hive-exec-2.1.0.2.6.3.0-235.jar:2.1.0.2.6.3.0-235] > at > org.apache.hadoop.hive.serde2.lazy.objectinspector.LazySimpleStructObjectInspector.init(LazySimpleStructObjectInspector.java:97) > ~[hive-exec-2.1.0.2.6.3.0-235.jar:2.1.0.2.6.3.0-235] > at > org.apache.hadoop.hive.serde2.lazy.objectinspector.LazySimpleStructObjectInspector.(LazySimpleStructObjectInspector.java:77) > ~[hive-exec-2.1.0.2.6.3.0-235.jar:2.1.0.2.6.3.0-235] > at > org.apache.hadoop.hive.serde2.lazy.objectinspector.LazyObjectInspectorFactory.getLazySimpleStructObjectInspector(LazyObjectInspectorFactory.java:115) > ~[hive-exec-2.1.0.2.6.3.0-235.jar:2.1.0.2.6.3.0-235] > at > org.apache.hadoop.hive.hbase.HBaseLazyObjectFactory.createLazyHBaseStructInspector(HBaseLazyObjectFactory.java:85) > ~[hive-hbase-handler-2.1.0.2.6.3.0-SNAPSHOT.jar:2.1.0.2.6.3.0-SNAPSHOT] > at > org.apache.hadoop.hive.hbase.HBaseSerDe.initialize(HBaseSerDe.java:128) > ~[hive-hbase-handler-2.1.0.2.6.3.0-SNAPSHOT.jar:2.1.0.2.6.3.0-SNAPSHOT] > at > org.apache.hadoop.hive.serde2.AbstractSerDe.initialize(AbstractSerDe.java:54) > ~[hive-exec-2.1.0.2.6.3.0-235.jar:2.1.0.2.6.3.0-235] > at > org.apache.hadoop.hive.serde2.SerDeUtils.initializeSerDe(SerDeUtils.java:531) > ~[hive-exec-2.1.0.2.6.3.0-235.jar:2.1.0.2.6.3.0-235] > at > org.apache.hadoop.hive.metastore.MetaStoreUtils.getDeserializer(MetaStoreUtils.java:436) > ~[hive-exec-2.1.0.2.6.3.0-235.jar:2.1.0.2.6.3.0-235] > at > org.apache.hadoop.hive.metastore.MetaStoreUtils.getDeserializer(MetaStoreUtils.java:423) > ~[hive-exec-2.1.0.2.6.3.0-235.jar:2.1.0.2.6.3.0-235] > at > org.apache.hadoop.hive.ql.metadata.Table.getDeserializerFromMetaStore(Table.java:279) > ~[hive-exec-2.1.0.2.6.3.0-235.jar:2.1.0.2.6.3.0-235] > at > 
org.apache.hadoop.hive.ql.metadata.Table.getDeserializer(Table.java:261) > ~[hive-exec-2.1.0.2.6.3.0-235.jar:2.1.0.2.6.3.0-235] > at > org.apache.hadoop.hive.ql.metadata.Table.getColsInternal(Table.java:639) > [hive-exec-2.1.0.2.6.3.0-235.jar:2.1.0.2.6.3.0-235] > at org.apache.hadoop.hive.ql.metadata.Table.getCols(Table.java:622) > [hive-exec-2.1.0.2.6.3.0-235.jar:2.1.0.2.6.3.0-235] > at org.apache.hadoop.hive.ql.metadata.Hive.createTable(Hive.java:834) > [hive-exec-2.1.0.2.6.3.0-235.jar:2.1.0.2.6.3.0-235] > at org.apache.hadoop.hive.ql.metadata.Hive.createTable(Hive.java:870) > [hive-exec-2.1.0.2.6.3.0-235.jar:2.1.0.2.6.3.0-235] > at > org.apache.hadoop.hive.ql.exec.DDLTask.createTable(DDLTask.java:4271) > [hive-exec-2.1.0.2.6.3.0-235.jar:2.1.0.2.6.3.0-235] > at org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:350) > [hive-exec-2.1.0.2.6.3.0-235.jar:2.1.0.2.6.3.0-235] > at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:199) > [hive-exec-2.1.0.2.6.3.
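The ArrayIndexOutOfBoundsException above comes from pairing the column-names list with a column-comments list that is shorter (empty, when no column comments are given) inside BaseStructObjectInspector.init. A standalone sketch of the kind of guard that avoids the overrun (names here are hypothetical; the actual patch may instead size the comments list correctly in the caller, e.g. HBaseLazyObjectFactory):

{code}
import java.util.ArrayList;
import java.util.List;

public final class CommentPairing {
  // Standalone model of the failing logic (not the actual Hive classes):
  // when no column comments are supplied, the comments list is empty, so
  // comments.get(i) throws ArrayIndexOutOfBoundsException for i >= size().
  // Guarding (or padding to names.size()) avoids that.
  public static List<String> pad(List<String> names, List<String> comments) {
    List<String> safe = new ArrayList<>(names.size());
    for (int i = 0; i < names.size(); i++) {
      safe.add(comments != null && i < comments.size() ? comments.get(i) : null);
    }
    return safe;
  }
}
{code}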
[jira] [Commented] (HIVE-18290) hbase backed table creation fails where no column comments present
[ https://issues.apache.org/jira/browse/HIVE-18290?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16297942#comment-16297942 ] ASF GitHub Bot commented on HIVE-18290: --- GitHub user anishek opened a pull request: https://github.com/apache/hive/pull/281 HIVE-18290: hbase backed table creation fails where no column comments present You can merge this pull request into a Git repository by running: $ git pull https://github.com/anishek/hive HIVE-17829 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/hive/pull/281.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #281 commit 1a544e7fe577ff4862638e581d0660a3169677d9 Author: Anishek Agarwal Date: 2017-12-18T10:27:27Z HIVE-18290: hbase backed table creation fails where no column comments present > hbase backed table creation fails where no column comments present > -- > > Key: HIVE-18290 > URL: https://issues.apache.org/jira/browse/HIVE-18290 > Project: Hive > Issue Type: Bug > Components: HiveServer2 >Affects Versions: 3.0.0 >Reporter: anishek >Assignee: anishek > Labels: pull-request-available > Fix For: 3.0.0 > > Attachments: HIVE-18290.0.patch > > > Create Hbase Table: > > create 'hbase_avro_table', 'test_col_fam', 'test_col' > Create Hive Table: > = > CREATE EXTERNAL TABLE test_hbase_avro2 > ROW FORMAT SERDE 'org.apache.hadoop.hive.hbase.HBaseSerDe' > STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler' > WITH SERDEPROPERTIES ( > "hbase.columns.mapping" = ":key,test_col_fam:test_col", > "test_col_fam.test_col.serialization.type" = "avro", > "test_col_fam.test_col.avro.schema.url" = > "hdfs://localhost:8020/user/hive/schema.avsc") > TBLPROPERTIES ( > "hbase.table.name" = "hbase_avro_table", > "hbase.mapred.output.outputtable" = "hbase_avro_table", > "hbase.struct.autogenerate"="true", > "avro.schema.literal"='{ > "type": "record", > "name": "test_hbase_avro", > "fields": [ > { "name":"test_col", "type":"string"} > ] > }'); > schema.avsc > === > {code} > { > "type": "record", > "name": "test_hbase_avro", > "fields": [ > { "name":"test_col", "type":"string"} > ] > } > {code} > throws exception > {code} > java.lang.ArrayIndexOutOfBoundsException: 1 > at java.util.Arrays$ArrayList.get(Arrays.java:3841) ~[?:1.8.0_77] > at > org.apache.hadoop.hive.serde2.BaseStructObjectInspector.init(BaseStructObjectInspector.java:106) > ~[hive-exec-2.1.0.2.6.3.0-235.jar:2.1.0.2.6.3.0-235] > at > org.apache.hadoop.hive.serde2.lazy.objectinspector.LazySimpleStructObjectInspector.init(LazySimpleStructObjectInspector.java:97) > ~[hive-exec-2.1.0.2.6.3.0-235.jar:2.1.0.2.6.3.0-235] > at > org.apache.hadoop.hive.serde2.lazy.objectinspector.LazySimpleStructObjectInspector.(LazySimpleStructObjectInspector.java:77) > ~[hive-exec-2.1.0.2.6.3.0-235.jar:2.1.0.2.6.3.0-235] > at > org.apache.hadoop.hive.serde2.lazy.objectinspector.LazyObjectInspectorFactory.getLazySimpleStructObjectInspector(LazyObjectInspectorFactory.java:115) > ~[hive-exec-2.1.0.2.6.3.0-235.jar:2.1.0.2.6.3.0-235] > at > org.apache.hadoop.hive.hbase.HBaseLazyObjectFactory.createLazyHBaseStructInspector(HBaseLazyObjectFactory.java:85) > ~[hive-hbase-handler-2.1.0.2.6.3.0-SNAPSHOT.jar:2.1.0.2.6.3.0-SNAPSHOT] > at > org.apache.hadoop.hive.hbase.HBaseSerDe.initialize(HBaseSerDe.java:128) > ~[hive-hbase-handler-2.1.0.2.6.3.0-SNAPSHOT.jar:2.1.0.2.6.3.0-SNAPSHOT] > at > 
org.apache.hadoop.hive.serde2.AbstractSerDe.initialize(AbstractSerDe.java:54) > ~[hive-exec-2.1.0.2.6.3.0-235.jar:2.1.0.2.6.3.0-235] > at > org.apache.hadoop.hive.serde2.SerDeUtils.initializeSerDe(SerDeUtils.java:531) > ~[hive-exec-2.1.0.2.6.3.0-235.jar:2.1.0.2.6.3.0-235] > at > org.apache.hadoop.hive.metastore.MetaStoreUtils.getDeserializer(MetaStoreUtils.java:436) > ~[hive-exec-2.1.0.2.6.3.0-235.jar:2.1.0.2.6.3.0-235] > at > org.apache.hadoop.hive.metastore.MetaStoreUtils.getDeserializer(MetaStoreUtils.java:423) > ~[hive-exec-2.1.0.2.6.3.0-235.jar:2.1.0.2.6.3.0-235] > at > org.apache.hadoop.hive.ql.metadata.Table.getDeserializerFromMetaStore(Table.java:279) > ~[hive-exec-2.1.0.2.6.3.0-235.jar:2.1.0.2.6.3.0-235] > at > org.apache.hadoop.hive.ql.metadata.Table.getDeserializer(Table.java:261) > ~[hive-exec-2.1.0.2.6.3.0-235.jar:2.1.0.2.6.3.0-235] > at > org.apache.hadoop.hive.ql.metadata.Table.getColsInternal(Table.java:639) > [hive-exec-2.1.0.2.6.3.0-235.jar:2.1.0.2
[jira] [Updated] (HIVE-17829) ArrayIndexOutOfBoundsException - HBASE-backed tables with Avro schema in Hive2
[ https://issues.apache.org/jira/browse/HIVE-17829?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] anishek updated HIVE-17829: --- Attachment: HIVE-17829.1.patch including test. > ArrayIndexOutOfBoundsException - HBASE-backed tables with Avro schema in Hive2 > -- > > Key: HIVE-17829 > URL: https://issues.apache.org/jira/browse/HIVE-17829 > Project: Hive > Issue Type: Bug > Components: HBase Handler >Affects Versions: 2.1.0 >Reporter: Chiran Ravani >Assignee: anishek >Priority: Critical > Attachments: HIVE-17829.0.patch, HIVE-17829.1.patch > > > Stack > {code} > 2017-10-09T09:39:54,804 ERROR [HiveServer2-Background-Pool: Thread-95]: > metadata.Table (Table.java:getColsInternal(642)) - Unable to get field from > serde: org.apache.hadoop.hive.hbase.HBaseSerDe > java.lang.ArrayIndexOutOfBoundsException: 1 > at java.util.Arrays$ArrayList.get(Arrays.java:3841) ~[?:1.8.0_77] > at > org.apache.hadoop.hive.serde2.BaseStructObjectInspector.init(BaseStructObjectInspector.java:104) > ~[hive-exec-2.1.0.2.6.2.0-205.jar:2.1.0.2.6.2.0-205] > at > org.apache.hadoop.hive.serde2.lazy.objectinspector.LazySimpleStructObjectInspector.init(LazySimpleStructObjectInspector.java:97) > ~[hive-exec-2.1.0.2.6.2.0-205.jar:2.1.0.2.6.2.0-205] > at > org.apache.hadoop.hive.serde2.lazy.objectinspector.LazySimpleStructObjectInspector.(LazySimpleStructObjectInspector.java:77) > ~[hive-exec-2.1.0.2.6.2.0-205.jar:2.1.0.2.6.2.0-205] > at > org.apache.hadoop.hive.serde2.lazy.objectinspector.LazyObjectInspectorFactory.getLazySimpleStructObjectInspector(LazyObjectInspectorFactory.java:115) > ~[hive-exec-2.1.0.2.6.2.0-205.jar:2.1.0.2.6.2.0-205] > at > org.apache.hadoop.hive.hbase.HBaseLazyObjectFactory.createLazyHBaseStructInspector(HBaseLazyObjectFactory.java:79) > ~[hive-hbase-handler-2.1.0.2.6.2.0-205.jar:2.1.0.2.6.2.0-205] > at > org.apache.hadoop.hive.hbase.HBaseSerDe.initialize(HBaseSerDe.java:127) > ~[hive-hbase-handler-2.1.0.2.6.2.0-205.jar:2.1.0.2.6.2.0-205] > at > org.apache.hadoop.hive.serde2.AbstractSerDe.initialize(AbstractSerDe.java:54) > ~[hive-exec-2.1.0.2.6.2.0-205.jar:2.1.0.2.6.2.0-205] > at > org.apache.hadoop.hive.serde2.SerDeUtils.initializeSerDe(SerDeUtils.java:531) > ~[hive-exec-2.1.0.2.6.2.0-205.jar:2.1.0.2.6.2.0-205] > at > org.apache.hadoop.hive.metastore.MetaStoreUtils.getDeserializer(MetaStoreUtils.java:424) > ~[hive-exec-2.1.0.2.6.2.0-205.jar:2.1.0.2.6.2.0-205] > at > org.apache.hadoop.hive.metastore.MetaStoreUtils.getDeserializer(MetaStoreUtils.java:411) > ~[hive-exec-2.1.0.2.6.2.0-205.jar:2.1.0.2.6.2.0-205] > at > org.apache.hadoop.hive.ql.metadata.Table.getDeserializerFromMetaStore(Table.java:279) > ~[hive-exec-2.1.0.2.6.2.0-205.jar:2.1.0.2.6.2.0-205] > at > org.apache.hadoop.hive.ql.metadata.Table.getDeserializer(Table.java:261) > ~[hive-exec-2.1.0.2.6.2.0-205.jar:2.1.0.2.6.2.0-205] > at > org.apache.hadoop.hive.ql.metadata.Table.getColsInternal(Table.java:639) > [hive-exec-2.1.0.2.6.2.0-205.jar:2.1.0.2.6.2.0-205] > at org.apache.hadoop.hive.ql.metadata.Table.getCols(Table.java:622) > [hive-exec-2.1.0.2.6.2.0-205.jar:2.1.0.2.6.2.0-205] > at org.apache.hadoop.hive.ql.metadata.Hive.createTable(Hive.java:833) > [hive-exec-2.1.0.2.6.2.0-205.jar:2.1.0.2.6.2.0-205] > at org.apache.hadoop.hive.ql.metadata.Hive.createTable(Hive.java:869) > [hive-exec-2.1.0.2.6.2.0-205.jar:2.1.0.2.6.2.0-205] > at > org.apache.hadoop.hive.ql.exec.DDLTask.createTable(DDLTask.java:4228) > [hive-exec-2.1.0.2.6.2.0-205.jar:2.1.0.2.6.2.0-205] > at org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:347) 
> [hive-exec-2.1.0.2.6.2.0-205.jar:2.1.0.2.6.2.0-205] > at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:197) > [hive-exec-2.1.0.2.6.2.0-205.jar:2.1.0.2.6.2.0-205] > at > org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:100) > [hive-exec-2.1.0.2.6.2.0-205.jar:2.1.0.2.6.2.0-205] > at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1905) > [hive-exec-2.1.0.2.6.2.0-205.jar:2.1.0.2.6.2.0-205] > at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1607) > [hive-exec-2.1.0.2.6.2.0-205.jar:2.1.0.2.6.2.0-205] > at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1354) > [hive-exec-2.1.0.2.6.2.0-205.jar:2.1.0.2.6.2.0-205] > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1123) > [hive-exec-2.1.0.2.6.2.0-205.jar:2.1.0.2.6.2.0-205] > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1116) > [hive-exec-2.1.0.2.6.2.0-205.jar:2.1.0.2.6.2.0-205] > at > org.apache.hive.service.cli.operation.SQLOperation
[jira] [Commented] (HIVE-18306) Fix spark smb tests
[ https://issues.apache.org/jira/browse/HIVE-18306?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16297935#comment-16297935 ] Hive QA commented on HIVE-18306: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12902933/HIVE-18306.2.patch {color:red}ERROR:{color} -1 due to no test(s) being added or modified. {color:red}ERROR:{color} -1 due to 16 failed/errored test(s), 11534 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_join25] (batchId=72) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mapjoin_hook] (batchId=12) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppd_join5] (batchId=35) org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[llap_smb] (batchId=150) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucketsortoptimize_insert_2] (batchId=151) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[hybridgrace_hashjoin_2] (batchId=156) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata] (batchId=164) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid] (batchId=168) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid_fast] (batchId=159) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] (batchId=159) org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[authorization_part] (batchId=93) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[ppd_join5] (batchId=120) org.apache.hadoop.hive.cli.control.TestDanglingQOuts.checkDanglingQOut (batchId=208) org.apache.hadoop.hive.ql.parse.TestReplicationScenarios.testConstraints (batchId=225) org.apache.hive.jdbc.TestRestrictedList.org.apache.hive.jdbc.TestRestrictedList (batchId=235) org.apache.hive.jdbc.TestTriggersMoveWorkloadManager.testTriggerMoveAndKill (batchId=235) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/8333/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/8333/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-8333/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 16 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12902933 - PreCommit-HIVE-Build > Fix spark smb tests > --- > > Key: HIVE-18306 > URL: https://issues.apache.org/jira/browse/HIVE-18306 > Project: Hive > Issue Type: Bug >Reporter: Zoltan Haindrich >Assignee: Deepak Jaiswal > Attachments: HIVE-18306.1.patch, HIVE-18306.2.patch > > > seems to me that > {{TestSparkCliDriver#testCliDriver\[auto_sortmerge_join_10\]}} and > {{TestSparkCliDriver#testCliDriver\[bucketsortoptimize_insert_7\]}} is > failing since HIVE-18208 is in. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HIVE-18148) NPE in SparkDynamicPartitionPruningResolver
[ https://issues.apache.org/jira/browse/HIVE-18148?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16297921#comment-16297921 ] liyunzhang commented on HIVE-18148: --- In the above case, it's better to remove the DPP sink whose target map work's TableScan has the smaller statistics. Just a suggestion. > NPE in SparkDynamicPartitionPruningResolver > --- > > Key: HIVE-18148 > URL: https://issues.apache.org/jira/browse/HIVE-18148 > Project: Hive > Issue Type: Bug > Components: Spark >Reporter: Rui Li >Assignee: Rui Li > Attachments: HIVE-18148.1.patch, HIVE-18148.2.patch > > > The stack trace is: > {noformat} > 2017-11-27T10:32:38,752 ERROR [e6c8aab5-ddd2-461d-b185-a7597c3e7519 main] > ql.Driver: FAILED: NullPointerException null > java.lang.NullPointerException > at > org.apache.hadoop.hive.ql.optimizer.physical.SparkDynamicPartitionPruningResolver$SparkDynamicPartitionPruningDispatcher.dispatch(SparkDynamicPartitionPruningResolver.java:100) > at > org.apache.hadoop.hive.ql.lib.TaskGraphWalker.dispatch(TaskGraphWalker.java:111) > at > org.apache.hadoop.hive.ql.lib.TaskGraphWalker.walk(TaskGraphWalker.java:180) > at > org.apache.hadoop.hive.ql.lib.TaskGraphWalker.startWalking(TaskGraphWalker.java:125) > at > org.apache.hadoop.hive.ql.optimizer.physical.SparkDynamicPartitionPruningResolver.resolve(SparkDynamicPartitionPruningResolver.java:74) > at > org.apache.hadoop.hive.ql.parse.spark.SparkCompiler.optimizeTaskPlan(SparkCompiler.java:568) > {noformat} > At this stage, there shouldn't be a DPP sink whose target map work is null. > The root cause seems to be a malformed operator tree generated by > SplitOpTreeForDPP. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HIVE-18148) NPE in SparkDynamicPartitionPruningResolver
[ https://issues.apache.org/jira/browse/HIVE-18148?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16297915#comment-16297915 ] Rui Li commented on HIVE-18148: --- [~kellyzly], we can remove either DPP1 or DPP2 to fix the NPE. I keep the uppermost DPP sink mainly for simplicity. Another rationale is that the deeper the DPP sink, the more operators get re-computed. We can implement more complicated rules based on statistics, which can be done as follow-ups. > NPE in SparkDynamicPartitionPruningResolver > --- > > Key: HIVE-18148 > URL: https://issues.apache.org/jira/browse/HIVE-18148 > Project: Hive > Issue Type: Bug > Components: Spark >Reporter: Rui Li >Assignee: Rui Li > Attachments: HIVE-18148.1.patch, HIVE-18148.2.patch > > > The stack trace is: > {noformat} > 2017-11-27T10:32:38,752 ERROR [e6c8aab5-ddd2-461d-b185-a7597c3e7519 main] > ql.Driver: FAILED: NullPointerException null > java.lang.NullPointerException > at > org.apache.hadoop.hive.ql.optimizer.physical.SparkDynamicPartitionPruningResolver$SparkDynamicPartitionPruningDispatcher.dispatch(SparkDynamicPartitionPruningResolver.java:100) > at > org.apache.hadoop.hive.ql.lib.TaskGraphWalker.dispatch(TaskGraphWalker.java:111) > at > org.apache.hadoop.hive.ql.lib.TaskGraphWalker.walk(TaskGraphWalker.java:180) > at > org.apache.hadoop.hive.ql.lib.TaskGraphWalker.startWalking(TaskGraphWalker.java:125) > at > org.apache.hadoop.hive.ql.optimizer.physical.SparkDynamicPartitionPruningResolver.resolve(SparkDynamicPartitionPruningResolver.java:74) > at > org.apache.hadoop.hive.ql.parse.spark.SparkCompiler.optimizeTaskPlan(SparkCompiler.java:568) > {noformat} > At this stage, there shouldn't be a DPP sink whose target map work is null. > The root cause seems to be a malformed operator tree generated by > SplitOpTreeForDPP. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
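For illustration, a rough standalone sketch of what such a statistics-based rule might look like when two DPP sinks target the same map work and one must be dropped (the types and fields are hypothetical models, not Hive classes; the fix discussed here simply keeps the uppermost sink):

{code}
// Hypothetical model: keep the sink that prunes more for less re-computation.
final class DppSinkInfo {
  long sinkOutputBytes; // smaller output => more partitions pruned
  int operatorDepth;    // deeper sink => more operators re-computed on split
}

final class DppSinkChooser {
  static DppSinkInfo chooseSinkToKeep(DppSinkInfo a, DppSinkInfo b) {
    // Prefer the smaller sink output; break ties with the shallower sink,
    // since a deeper sink forces more of the operator tree to be re-run.
    if (a.sinkOutputBytes != b.sinkOutputBytes) {
      return a.sinkOutputBytes < b.sinkOutputBytes ? a : b;
    }
    return a.operatorDepth <= b.operatorDepth ? a : b;
  }
}
{code}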
[jira] [Commented] (HIVE-18306) Fix spark smb tests
[ https://issues.apache.org/jira/browse/HIVE-18306?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16297909#comment-16297909 ] Deepak Jaiswal commented on HIVE-18306: --- Ran all the failing tests and they all passed locally. > Fix spark smb tests > --- > > Key: HIVE-18306 > URL: https://issues.apache.org/jira/browse/HIVE-18306 > Project: Hive > Issue Type: Bug >Reporter: Zoltan Haindrich >Assignee: Deepak Jaiswal > Attachments: HIVE-18306.1.patch, HIVE-18306.2.patch > > > seems to me that > {{TestSparkCliDriver#testCliDriver\[auto_sortmerge_join_10\]}} and > {{TestSparkCliDriver#testCliDriver\[bucketsortoptimize_insert_7\]}} is > failing since HIVE-18208 is in. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HIVE-18148) NPE in SparkDynamicPartitionPruningResolver
[ https://issues.apache.org/jira/browse/HIVE-18148?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16297899#comment-16297899 ] liyunzhang commented on HIVE-18148: --- I understand that the reason you need to remove DPP1 is to fix the NPE. But in the above example, if the leftmost table is a big table, is it still suitable to remove DPP1 in the common-join case? > NPE in SparkDynamicPartitionPruningResolver > --- > > Key: HIVE-18148 > URL: https://issues.apache.org/jira/browse/HIVE-18148 > Project: Hive > Issue Type: Bug > Components: Spark >Reporter: Rui Li >Assignee: Rui Li > Attachments: HIVE-18148.1.patch, HIVE-18148.2.patch > > > The stack trace is: > {noformat} > 2017-11-27T10:32:38,752 ERROR [e6c8aab5-ddd2-461d-b185-a7597c3e7519 main] > ql.Driver: FAILED: NullPointerException null > java.lang.NullPointerException > at > org.apache.hadoop.hive.ql.optimizer.physical.SparkDynamicPartitionPruningResolver$SparkDynamicPartitionPruningDispatcher.dispatch(SparkDynamicPartitionPruningResolver.java:100) > at > org.apache.hadoop.hive.ql.lib.TaskGraphWalker.dispatch(TaskGraphWalker.java:111) > at > org.apache.hadoop.hive.ql.lib.TaskGraphWalker.walk(TaskGraphWalker.java:180) > at > org.apache.hadoop.hive.ql.lib.TaskGraphWalker.startWalking(TaskGraphWalker.java:125) > at > org.apache.hadoop.hive.ql.optimizer.physical.SparkDynamicPartitionPruningResolver.resolve(SparkDynamicPartitionPruningResolver.java:74) > at > org.apache.hadoop.hive.ql.parse.spark.SparkCompiler.optimizeTaskPlan(SparkCompiler.java:568) > {noformat} > At this stage, there shouldn't be a DPP sink whose target map work is null. > The root cause seems to be a malformed operator tree generated by > SplitOpTreeForDPP. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Comment Edited] (HIVE-18301) Investigate to enable MapInput cache in Hive on Spark
[ https://issues.apache.org/jira/browse/HIVE-18301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16297897#comment-16297897 ] liyunzhang edited comment on HIVE-18301 at 12/20/17 5:23 AM: - yes. I can achieve to merge the same tables into 1 mapInput and run successfully such case(DS/query28) in HIVE-17486 in spark local mode. This exception only happens in yarn mode. was (Author: kellyzly): yes. I can achieve the merge the same tables into 1 mapInput and run successfully such case(DS/query28) in HIVE-17486 in spark local mode. This exception only happens in yarn mode. > Investigate to enable MapInput cache in Hive on Spark > - > > Key: HIVE-18301 > URL: https://issues.apache.org/jira/browse/HIVE-18301 > Project: Hive > Issue Type: Bug >Reporter: liyunzhang >Assignee: liyunzhang > > Before IOContext problem is found in MapTran when spark rdd cache is enabled > in HIVE-8920. > so we disabled rdd cache in MapTran at > [SparkPlanGenerator|https://github.com/kellyzly/hive/blob/master/ql/src/java/org/apache/hadoop/hive/ql/exec/spark/SparkPlanGenerator.java#L202]. > The problem is IOContext seems not initialized correctly in the spark yarn > client/cluster mode and caused the exception like > {code} > Job aborted due to stage failure: Task 93 in stage 0.0 failed 4 times, most > recent failure: Lost task 93.3 in stage 0.0 (TID 616, bdpe48): > java.lang.RuntimeException: Error processing row: > java.lang.NullPointerException > at > org.apache.hadoop.hive.ql.exec.spark.SparkMapRecordHandler.processRow(SparkMapRecordHandler.java:165) > at > org.apache.hadoop.hive.ql.exec.spark.HiveMapFunctionResultList.processNextRecord(HiveMapFunctionResultList.java:48) > at > org.apache.hadoop.hive.ql.exec.spark.HiveMapFunctionResultList.processNextRecord(HiveMapFunctionResultList.java:27) > at > org.apache.hadoop.hive.ql.exec.spark.HiveBaseFunctionResultList.hasNext(HiveBaseFunctionResultList.java:85) > at > scala.collection.convert.Wrappers$JIteratorWrapper.hasNext(Wrappers.scala:42) > at > org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:125) > at > org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:79) > at > org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:47) > at org.apache.spark.scheduler.Task.run(Task.scala:85) > at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) > at java.lang.Thread.run(Thread.java:745) > Caused by: java.lang.NullPointerException > at > org.apache.hadoop.hive.ql.exec.AbstractMapOperator.getNominalPath(AbstractMapOperator.java:101) > at > org.apache.hadoop.hive.ql.exec.MapOperator.cleanUpInputFileChangedOp(MapOperator.java:516) > at > org.apache.hadoop.hive.ql.exec.Operator.cleanUpInputFileChanged(Operator.java:1187) > at > org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:546) > at > org.apache.hadoop.hive.ql.exec.spark.SparkMapRecordHandler.processRow(SparkMapRecordHandler.java:152) > ... 12 more > Driver stacktrace: > {code} > in yarn client/cluster mode, sometimes > [ExecMapperContext#currentInputPath|https://github.com/kellyzly/hive/blob/master/ql/src/java/org/apache/hadoop/hive/ql/exec/mr/ExecMapperContext.java#L109] > is null when rdd cach is enabled. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HIVE-18301) Investigate to enable MapInput cache in Hive on Spark
[ https://issues.apache.org/jira/browse/HIVE-18301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16297897#comment-16297897 ] liyunzhang commented on HIVE-18301: --- yes. I can achieve the merge the same tables into 1 mapInput and run successfully such case(DS/query28) in HIVE-17486 in spark local mode. This exception only happens in yarn mode. > Investigate to enable MapInput cache in Hive on Spark > - > > Key: HIVE-18301 > URL: https://issues.apache.org/jira/browse/HIVE-18301 > Project: Hive > Issue Type: Bug >Reporter: liyunzhang >Assignee: liyunzhang > > Before IOContext problem is found in MapTran when spark rdd cache is enabled > in HIVE-8920. > so we disabled rdd cache in MapTran at > [SparkPlanGenerator|https://github.com/kellyzly/hive/blob/master/ql/src/java/org/apache/hadoop/hive/ql/exec/spark/SparkPlanGenerator.java#L202]. > The problem is IOContext seems not initialized correctly in the spark yarn > client/cluster mode and caused the exception like > {code} > Job aborted due to stage failure: Task 93 in stage 0.0 failed 4 times, most > recent failure: Lost task 93.3 in stage 0.0 (TID 616, bdpe48): > java.lang.RuntimeException: Error processing row: > java.lang.NullPointerException > at > org.apache.hadoop.hive.ql.exec.spark.SparkMapRecordHandler.processRow(SparkMapRecordHandler.java:165) > at > org.apache.hadoop.hive.ql.exec.spark.HiveMapFunctionResultList.processNextRecord(HiveMapFunctionResultList.java:48) > at > org.apache.hadoop.hive.ql.exec.spark.HiveMapFunctionResultList.processNextRecord(HiveMapFunctionResultList.java:27) > at > org.apache.hadoop.hive.ql.exec.spark.HiveBaseFunctionResultList.hasNext(HiveBaseFunctionResultList.java:85) > at > scala.collection.convert.Wrappers$JIteratorWrapper.hasNext(Wrappers.scala:42) > at > org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:125) > at > org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:79) > at > org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:47) > at org.apache.spark.scheduler.Task.run(Task.scala:85) > at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) > at java.lang.Thread.run(Thread.java:745) > Caused by: java.lang.NullPointerException > at > org.apache.hadoop.hive.ql.exec.AbstractMapOperator.getNominalPath(AbstractMapOperator.java:101) > at > org.apache.hadoop.hive.ql.exec.MapOperator.cleanUpInputFileChangedOp(MapOperator.java:516) > at > org.apache.hadoop.hive.ql.exec.Operator.cleanUpInputFileChanged(Operator.java:1187) > at > org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:546) > at > org.apache.hadoop.hive.ql.exec.spark.SparkMapRecordHandler.processRow(SparkMapRecordHandler.java:152) > ... 12 more > Driver stacktrace: > {code} > in yarn client/cluster mode, sometimes > [ExecMapperContext#currentInputPath|https://github.com/kellyzly/hive/blob/master/ql/src/java/org/apache/hadoop/hive/ql/exec/mr/ExecMapperContext.java#L109] > is null when rdd cach is enabled. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HIVE-18031) Support replication for Alter Database operation.
[ https://issues.apache.org/jira/browse/HIVE-18031?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sankar Hariappan updated HIVE-18031: Attachment: HIVE-18031.01.patch > Support replication for Alter Database operation. > - > > Key: HIVE-18031 > URL: https://issues.apache.org/jira/browse/HIVE-18031 > Project: Hive > Issue Type: Sub-task > Components: HiveServer2, repl >Affects Versions: 3.0.0 >Reporter: Sankar Hariappan >Assignee: Sankar Hariappan > Labels: DR, pull-request-available, replication > Fix For: 3.0.0 > > Attachments: HIVE-18031.01.patch > > > Currently alter database operations to alter the database properties or > description are not generating any events due to which it is not getting > replicated. > Need to add an event for this. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HIVE-18031) Support replication for Alter Database operation.
[ https://issues.apache.org/jira/browse/HIVE-18031?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sankar Hariappan updated HIVE-18031: Attachment: (was: HIVE-18031.01.patch) > Support replication for Alter Database operation. > - > > Key: HIVE-18031 > URL: https://issues.apache.org/jira/browse/HIVE-18031 > Project: Hive > Issue Type: Sub-task > Components: HiveServer2, repl >Affects Versions: 3.0.0 >Reporter: Sankar Hariappan >Assignee: Sankar Hariappan > Labels: DR, pull-request-available, replication > Fix For: 3.0.0 > > Attachments: HIVE-18031.01.patch > > > Currently alter database operations to alter the database properties or > description are not generating any events due to which it is not getting > replicated. > Need to add an event for this. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HIVE-18301) Investigate to enable MapInput cache in Hive on Spark
[ https://issues.apache.org/jira/browse/HIVE-18301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16297894#comment-16297894 ] Rui Li commented on HIVE-18301: --- If we can cache MapInput, will it be simpler to dynamically identify same MapInputs and cache them, in order to achieve the purpose of HIVE-17486? > Investigate to enable MapInput cache in Hive on Spark > - > > Key: HIVE-18301 > URL: https://issues.apache.org/jira/browse/HIVE-18301 > Project: Hive > Issue Type: Bug >Reporter: liyunzhang >Assignee: liyunzhang > > Before IOContext problem is found in MapTran when spark rdd cache is enabled > in HIVE-8920. > so we disabled rdd cache in MapTran at > [SparkPlanGenerator|https://github.com/kellyzly/hive/blob/master/ql/src/java/org/apache/hadoop/hive/ql/exec/spark/SparkPlanGenerator.java#L202]. > The problem is IOContext seems not initialized correctly in the spark yarn > client/cluster mode and caused the exception like > {code} > Job aborted due to stage failure: Task 93 in stage 0.0 failed 4 times, most > recent failure: Lost task 93.3 in stage 0.0 (TID 616, bdpe48): > java.lang.RuntimeException: Error processing row: > java.lang.NullPointerException > at > org.apache.hadoop.hive.ql.exec.spark.SparkMapRecordHandler.processRow(SparkMapRecordHandler.java:165) > at > org.apache.hadoop.hive.ql.exec.spark.HiveMapFunctionResultList.processNextRecord(HiveMapFunctionResultList.java:48) > at > org.apache.hadoop.hive.ql.exec.spark.HiveMapFunctionResultList.processNextRecord(HiveMapFunctionResultList.java:27) > at > org.apache.hadoop.hive.ql.exec.spark.HiveBaseFunctionResultList.hasNext(HiveBaseFunctionResultList.java:85) > at > scala.collection.convert.Wrappers$JIteratorWrapper.hasNext(Wrappers.scala:42) > at > org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:125) > at > org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:79) > at > org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:47) > at org.apache.spark.scheduler.Task.run(Task.scala:85) > at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) > at java.lang.Thread.run(Thread.java:745) > Caused by: java.lang.NullPointerException > at > org.apache.hadoop.hive.ql.exec.AbstractMapOperator.getNominalPath(AbstractMapOperator.java:101) > at > org.apache.hadoop.hive.ql.exec.MapOperator.cleanUpInputFileChangedOp(MapOperator.java:516) > at > org.apache.hadoop.hive.ql.exec.Operator.cleanUpInputFileChanged(Operator.java:1187) > at > org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:546) > at > org.apache.hadoop.hive.ql.exec.spark.SparkMapRecordHandler.processRow(SparkMapRecordHandler.java:152) > ... 12 more > Driver stacktrace: > {code} > in yarn client/cluster mode, sometimes > [ExecMapperContext#currentInputPath|https://github.com/kellyzly/hive/blob/master/ql/src/java/org/apache/hadoop/hive/ql/exec/mr/ExecMapperContext.java#L109] > is null when rdd cach is enabled. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
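If MapInput caching worked reliably, the dynamic deduplication suggested here could be sketched roughly as below (a hypothetical helper, not the HIVE-17486 patch; note the rows must be cloned before caching, which is where the Writable-clone problem in HIVE-18289 comes in):

{code}
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;
import org.apache.hadoop.io.Writable;
import org.apache.hadoop.io.WritableComparable;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.storage.StorageLevel;

// One cached RDD per identical MapWork signature: a second occurrence of
// the same table scan reuses the cached RDD instead of re-reading the input.
public final class MapInputCache {
  private final Map<String, JavaPairRDD<WritableComparable, Writable>> cache =
      new HashMap<>();

  public JavaPairRDD<WritableComparable, Writable> getOrCreate(
      String mapWorkSignature,
      Supplier<JavaPairRDD<WritableComparable, Writable>> builder) {
    return cache.computeIfAbsent(mapWorkSignature, sig -> {
      JavaPairRDD<WritableComparable, Writable> rdd = builder.get();
      rdd.persist(StorageLevel.MEMORY_AND_DISK());
      return rdd;
    });
  }
}
{code}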
[jira] [Commented] (HIVE-18289) Fix jar dependency when enable rdd cache in Hive on Spark
[ https://issues.apache.org/jira/browse/HIVE-18289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16297893#comment-16297893 ] Rui Li commented on HIVE-18289: --- It seems the reason is OrcStruct doesn't have an empty constructor. [~owen.omalley], any thoughts on this? Thanks. > Fix jar dependency when enable rdd cache in Hive on Spark > - > > Key: HIVE-18289 > URL: https://issues.apache.org/jira/browse/HIVE-18289 > Project: Hive > Issue Type: Bug >Reporter: liyunzhang >Assignee: liyunzhang > > running DS/query28 when enabling HIVE-17486's 4th patch > on tpcds_bin_partitioned_orc_10 whether on spark local or yarn mode > command > {code} > set spark.local=yarn-client; > echo 'use tpcds_bin_partitioned_orc_10;source query28.sql;'|hive --hiveconf > spark.app.name=query28.sql --hiveconf hive.spark.optimize.shared.work=true > -i testbench.settings -i query28.sql.setting > {code} > the exception > {code} > ava.lang.RuntimeException: java.lang.NoSuchMethodException: > org.apache.hadoop.hive.ql.io.orc.OrcStruct.() > 748678 at > org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:134) > ~[hadoop-common-2.7.3.jar:?] > 748679 at > org.apache.hadoop.io.WritableUtils.clone(WritableUtils.java:217) > ~[hadoop-common-2.7.3.jar:?] > 748680 at > org.apache.hadoop.hive.ql.exec.spark.MapInput$CopyFunction.call(MapInput.java:85) > ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0. 0-SNAPSHOT] > 748681 at > org.apache.hadoop.hive.ql.exec.spark.MapInput$CopyFunction.call(MapInput.java:72) > ~[hive-exec-3.0.0-SNAPSHOT.jar:3.0. 0-SNAPSHOT] > 748682 at > org.apache.spark.api.java.JavaPairRDD$$anonfun$pairFunToScalaFun$1.apply(JavaPairRDD.scala:1031) > ~[spark-core_2.11-2. 0.0.jar:2.0.0] > 748683 at > org.apache.spark.api.java.JavaPairRDD$$anonfun$pairFunToScalaFun$1.apply(JavaPairRDD.scala:1031) > ~[spark-core_2.11-2. 0.0.jar:2.0.0] > 748684 at scala.collection.Iterator$$anon$11.next(Iterator.scala:409) > ~[scala-library-2.11.8.jar:?] > 748685 at > org.apache.spark.storage.memory.MemoryStore.putIteratorAsValues(MemoryStore.scala:214) > ~[spark-core_2.11-2.0.0.jar:2. 0.0] > 748686 at > org.apache.spark.storage.BlockManager$$anonfun$doPutIterator$1.apply(BlockManager.scala:919) > ~[spark-core_2.11-2.0.0. jar:2.0.0] > 748687 at > org.apache.spark.storage.BlockManager$$anonfun$doPutIterator$1.apply(BlockManager.scala:910) > ~[spark-core_2.11-2.0.0. jar:2.0.0] > 748688 at > org.apache.spark.storage.BlockManager.doPut(BlockManager.scala:866) > ~[spark-core_2.11-2.0.0.jar:2.0.0] > 748689 at > org.apache.spark.storage.BlockManager.doPutIterator(BlockManager.scala:910) > ~[spark-core_2.11-2.0.0.jar:2.0.0] > 748690 at > org.apache.spark.storage.BlockManager.getOrElseUpdate(BlockManager.scala:668) > ~[spark-core_2.11-2.0.0.jar:2.0.0] > 748691 at org.apache.spark.rdd.RDD.getOrCompute(RDD.scala:330) > ~[spark-core_2.11-2.0.0.jar:2.0.0] > 748692 at org.apache.spark.rdd.RDD.iterator(RDD.scala:281) > ~[spark-core_2.11-2.0.0.jar:2.0.0] > 748693 at > org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38) > ~[spark-core_2.11-2.0.0.jar:2.0.0] > 748694 at > org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319) > ~[spark-core_2.11-2.0.0.jar:2.0.0] > 748695 at org.apache.spark.rdd.RDD.iterator(RDD.scala:283) > ~[spark-core_2.11-2.0.0.jar:2.0.0] > 748696 at > org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:79) > ~[spark-core_2.11-2 > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029)
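The garbled {{OrcStruct.()}} and {{ArrayWritable.()}} in the logs above read as {{OrcStruct.<init>()}} and {{ArrayWritable.<init>()}}: WritableUtils.clone instantiates the row's class via ReflectionUtils.newInstance, which requires a no-argument constructor. A minimal standalone demonstration of the failure mode (class names are made up):

{code}
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Writable;
import org.apache.hadoop.io.WritableUtils;

// Like OrcStruct and ArrayWritable, this Writable has no nullary constructor.
class NoDefaultCtorWritable implements Writable {
  private final IntWritable value;
  NoDefaultCtorWritable(int v) { value = new IntWritable(v); }
  @Override public void write(DataOutput out) throws IOException { value.write(out); }
  @Override public void readFields(DataInput in) throws IOException { value.readFields(in); }
}

public class CloneDemo {
  public static void main(String[] args) {
    // WritableUtils.clone -> ReflectionUtils.newInstance -> getDeclaredConstructor()
    // fails because there is no <init>() to invoke:
    WritableUtils.clone(new NoDefaultCtorWritable(42), new Configuration());
    // => java.lang.RuntimeException: java.lang.NoSuchMethodException:
    //    NoDefaultCtorWritable.<init>()
  }
}
{code}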
[jira] [Commented] (HIVE-18306) Fix spark smb tests
[ https://issues.apache.org/jira/browse/HIVE-18306?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16297891#comment-16297891 ] Hive QA commented on HIVE-18306: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 30s{color} | {color:blue} Maven dependency ordering for branch {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 19s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 11s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 2m 16s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus/dev-support/hive-personality.sh | | git revision | master / 4ec47a6 | | modules | C: ql itests U: . | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-8333/yetus.txt | | Powered by | Apache Yetus http://yetus.apache.org | This message was automatically generated. > Fix spark smb tests > --- > > Key: HIVE-18306 > URL: https://issues.apache.org/jira/browse/HIVE-18306 > Project: Hive > Issue Type: Bug >Reporter: Zoltan Haindrich >Assignee: Deepak Jaiswal > Attachments: HIVE-18306.1.patch, HIVE-18306.2.patch > > > It seems to me that > {{TestSparkCliDriver#testCliDriver\[auto_sortmerge_join_10\]}} and > {{TestSparkCliDriver#testCliDriver\[bucketsortoptimize_insert_7\]}} are > failing since HIVE-18208 went in. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HIVE-18306) Fix spark smb tests
[ https://issues.apache.org/jira/browse/HIVE-18306?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16297888#comment-16297888 ] Hive QA commented on HIVE-18306: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12902933/HIVE-18306.2.patch {color:red}ERROR:{color} -1 due to no test(s) being added or modified. {color:red}ERROR:{color} -1 due to 15 failed/errored test(s), 11534 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_sortmerge_join_2] (batchId=48) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mapjoin_hook] (batchId=12) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppd_join5] (batchId=35) org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[llap_smb] (batchId=150) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucketsortoptimize_insert_2] (batchId=151) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[hybridgrace_hashjoin_2] (batchId=156) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata] (batchId=164) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid] (batchId=168) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid_fast] (batchId=159) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] (batchId=159) org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[authorization_part] (batchId=93) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[ppd_join5] (batchId=120) org.apache.hadoop.hive.cli.control.TestDanglingQOuts.checkDanglingQOut (batchId=208) org.apache.hadoop.hive.ql.parse.TestReplicationScenarios.testConstraints (batchId=225) org.apache.hive.jdbc.TestRestrictedList.org.apache.hive.jdbc.TestRestrictedList (batchId=235) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/8332/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/8332/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-8332/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 15 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12902933 - PreCommit-HIVE-Build > Fix spark smb tests > --- > > Key: HIVE-18306 > URL: https://issues.apache.org/jira/browse/HIVE-18306 > Project: Hive > Issue Type: Bug >Reporter: Zoltan Haindrich >Assignee: Deepak Jaiswal > Attachments: HIVE-18306.1.patch, HIVE-18306.2.patch > > > It seems to me that > {{TestSparkCliDriver#testCliDriver\[auto_sortmerge_join_10\]}} and > {{TestSparkCliDriver#testCliDriver\[bucketsortoptimize_insert_7\]}} are > failing since HIVE-18208 went in. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HIVE-18148) NPE in SparkDynamicPartitionPruningResolver
[ https://issues.apache.org/jira/browse/HIVE-18148?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16297883#comment-16297883 ] Rui Li commented on HIVE-18148: --- bq. If it first traverses JOIN, it removes DPP2. No, it only collects DPP sinks in the downstream tree starting from a branching operator. So if it first traverses JOIN, it won't find any nested DPP sinks. > NPE in SparkDynamicPartitionPruningResolver > --- > > Key: HIVE-18148 > URL: https://issues.apache.org/jira/browse/HIVE-18148 > Project: Hive > Issue Type: Bug > Components: Spark >Reporter: Rui Li >Assignee: Rui Li > Attachments: HIVE-18148.1.patch, HIVE-18148.2.patch > > > The stack trace is: > {noformat} > 2017-11-27T10:32:38,752 ERROR [e6c8aab5-ddd2-461d-b185-a7597c3e7519 main] > ql.Driver: FAILED: NullPointerException null > java.lang.NullPointerException > at > org.apache.hadoop.hive.ql.optimizer.physical.SparkDynamicPartitionPruningResolver$SparkDynamicPartitionPruningDispatcher.dispatch(SparkDynamicPartitionPruningResolver.java:100) > at > org.apache.hadoop.hive.ql.lib.TaskGraphWalker.dispatch(TaskGraphWalker.java:111) > at > org.apache.hadoop.hive.ql.lib.TaskGraphWalker.walk(TaskGraphWalker.java:180) > at > org.apache.hadoop.hive.ql.lib.TaskGraphWalker.startWalking(TaskGraphWalker.java:125) > at > org.apache.hadoop.hive.ql.optimizer.physical.SparkDynamicPartitionPruningResolver.resolve(SparkDynamicPartitionPruningResolver.java:74) > at > org.apache.hadoop.hive.ql.parse.spark.SparkCompiler.optimizeTaskPlan(SparkCompiler.java:568) > {noformat} > At this stage, there shouldn't be a DPP sink whose target map work is null. > The root cause seems to be a malformed operator tree generated by > SplitOpTreeForDPP. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HIVE-18148) NPE in SparkDynamicPartitionPruningResolver
[ https://issues.apache.org/jira/browse/HIVE-18148?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16297875#comment-16297875 ] liyunzhang commented on HIVE-18148: --- Sorry for the late reply. I still have one question about the code:
{code}
  /** For DPP sinks w/ common join, we'll split the tree and what's above the branching
   * operator is computed multiple times. Therefore it may not be good for performance to support
   * nested DPP sinks, i.e. one DPP sink depends on other DPP sinks.
   * The following is an example:
   *
   *              TS         TS
   *              |          |
   *             ...        FIL
   *              |          |  \
   *              RS         RS  SEL
   *               \         /    |
   *    TS          JOIN         GBY
   *    |          /    \         |
   *    RS        RS     SEL     DPP2
   *      \      /        |
   *       JOIN          GBY
   *                      |
   *                     DPP1
   *
   * where DPP1 depends on DPP2.
   *
   * To avoid such case, we'll visit all the branching operators. If a branching operator has any
   * further away DPP branches w/ common join in its sub-tree, such branches will be removed.
   * In the above example, the branch of DPP1 will be removed.
   */
{code}
This function first collects the branching operators (FIL and JOIN in the above example) and then removes the nested DPP sinks in their branches. If it first traverses FIL, it removes DPP1; if it first traverses JOIN, it removes DPP2. So the function removes one of the nested DPPs more or less at random. I am confused about how it decides which DPP sink needs to be removed. If my understanding is not right, please tell me. > NPE in SparkDynamicPartitionPruningResolver > --- > > Key: HIVE-18148 > URL: https://issues.apache.org/jira/browse/HIVE-18148 > Project: Hive > Issue Type: Bug > Components: Spark >Reporter: Rui Li >Assignee: Rui Li > Attachments: HIVE-18148.1.patch, HIVE-18148.2.patch > > > The stack trace is: > {noformat} > 2017-11-27T10:32:38,752 ERROR [e6c8aab5-ddd2-461d-b185-a7597c3e7519 main] > ql.Driver: FAILED: NullPointerException null > java.lang.NullPointerException > at > org.apache.hadoop.hive.ql.optimizer.physical.SparkDynamicPartitionPruningResolver$SparkDynamicPartitionPruningDispatcher.dispatch(SparkDynamicPartitionPruningResolver.java:100) > at > org.apache.hadoop.hive.ql.lib.TaskGraphWalker.dispatch(TaskGraphWalker.java:111) > at > org.apache.hadoop.hive.ql.lib.TaskGraphWalker.walk(TaskGraphWalker.java:180) > at > org.apache.hadoop.hive.ql.lib.TaskGraphWalker.startWalking(TaskGraphWalker.java:125) > at > org.apache.hadoop.hive.ql.optimizer.physical.SparkDynamicPartitionPruningResolver.resolve(SparkDynamicPartitionPruningResolver.java:74) > at > org.apache.hadoop.hive.ql.parse.spark.SparkCompiler.optimizeTaskPlan(SparkCompiler.java:568) > {noformat} > At this stage, there shouldn't be a DPP sink whose target map work is null. > The root cause seems to be a malformed operator tree generated by > SplitOpTreeForDPP. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
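To restate the traversal being discussed in code form, here is a toy model (simplified data structures, not Hive's Operator classes): starting from a branching operator, it collects every DPP sink in the downstream subtree, which is why the set found depends on where the walk starts.
{code:java}
import java.util.ArrayList;
import java.util.List;

public class NestedDppSketch {
  // Toy operator node: a name, child operators, and a flag marking DPP sinks.
  static class Op {
    final String name;
    final boolean dppSink;
    final List<Op> children = new ArrayList<>();
    Op(String name, boolean dppSink) { this.name = name; this.dppSink = dppSink; }
    Op child(Op c) { children.add(c); return this; }
  }

  // Collect every DPP sink in the subtree below 'root' (root itself excluded).
  static void collectDppSinks(Op root, List<Op> found) {
    for (Op c : root.children) {
      if (c.dppSink) {
        found.add(c);
      }
      collectDppSinks(c, found);
    }
  }

  public static void main(String[] args) {
    // Shape of the example in the quoted comment: FIL branches to the join
    // path and to GBY -> DPP2; the JOIN later branches to GBY -> DPP1.
    Op dpp1 = new Op("DPP1", true);
    Op dpp2 = new Op("DPP2", true);
    Op join = new Op("JOIN", false)
        .child(new Op("RS", false))
        .child(new Op("GBY", false).child(dpp1));
    Op fil = new Op("FIL", false)
        .child(join)
        .child(new Op("GBY", false).child(dpp2));

    List<Op> fromFil = new ArrayList<>();
    collectDppSinks(fil, fromFil);    // finds DPP2 and, deeper down, DPP1
    List<Op> fromJoin = new ArrayList<>();
    collectDppSinks(join, fromJoin);  // finds only DPP1; DPP2 is not downstream
    System.out.println("from FIL: " + fromFil.size() + ", from JOIN: " + fromJoin.size());
  }
}
{code}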
[jira] [Commented] (HIVE-18282) Spark tar is downloaded every time for itest
[ https://issues.apache.org/jira/browse/HIVE-18282?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16297869#comment-16297869 ] Rui Li commented on HIVE-18282: --- Thanks [~stakiar] for uploading the file. Do you think we can also make the code change, so that we don't hit a similar problem in the future? > Spark tar is downloaded every time for itest > > > Key: HIVE-18282 > URL: https://issues.apache.org/jira/browse/HIVE-18282 > Project: Hive > Issue Type: Test >Reporter: Rui Li > Attachments: HIVE-18282.1.patch > > > Seems we missed the md5 file for spark-2.2.0? > cc [~kellyzly], [~stakiar] -- This message was sent by Atlassian JIRA (v6.4.14#64029)
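For background on the issue above, the test setup can skip re-downloading the Spark tarball when the cached copy matches a published checksum; with the .md5 file missing, the cache never validates and the tar is fetched every run. A rough sketch of such a check (file names and layout are assumptions, not the actual ptest script):
{code:java}
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.security.MessageDigest;

public class TarballCache {
  /** Returns true when the tarball is present and matches the published .md5. */
  static boolean cacheIsFresh(Path tarball, Path md5File) throws Exception {
    if (!Files.exists(tarball) || !Files.exists(md5File)) {
      return false;  // a missing .md5 forces a download on every run
    }
    byte[] digest = MessageDigest.getInstance("MD5").digest(Files.readAllBytes(tarball));
    StringBuilder hex = new StringBuilder();
    for (byte b : digest) {
      hex.append(String.format("%02x", b));
    }
    // The .md5 file typically holds "<hex digest> <file name>".
    String expected = new String(Files.readAllBytes(md5File)).trim().split("\\s+")[0];
    return hex.toString().equalsIgnoreCase(expected);
  }

  public static void main(String[] args) throws Exception {
    System.out.println(cacheIsFresh(Paths.get("spark-2.2.0-bin-hadoop2.tgz"),
                                    Paths.get("spark-2.2.0-bin-hadoop2.tgz.md5")));
  }
}
{code}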
[jira] [Commented] (HIVE-18268) Hive Prepared Statement when split with double quoted in query fails
[ https://issues.apache.org/jira/browse/HIVE-18268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16297860#comment-16297860 ] Choi JaeHwan commented on HIVE-18268: - [~asherman], the test failure is not related to this patch. Could you review, please? > Hive Prepared Statement when split with double quoted in query fails > > > Key: HIVE-18268 > URL: https://issues.apache.org/jira/browse/HIVE-18268 > Project: Hive > Issue Type: Bug > Components: JDBC >Affects Versions: 2.3.2 >Reporter: Choi JaeHwan >Assignee: Choi JaeHwan > Fix For: 3.0.0, 2.4.0, 2.3.3 > > Attachments: HIVE-18268.1.patch, HIVE-18268.2.patch, > HIVE-18268.3.patch, HIVE-18268.4.patch, HIVE-18268.patch > > > HIVE-13625 changed how the SQL statement is split when there is an odd number of escape characters, and added parameter count validation: > {code:java} > // prev code > StringBuilder newSql = new StringBuilder(parts.get(0)); > for (int i = 1; i < parts.size(); i++) { > if (!parameters.containsKey(i)) { > throw new SQLException("Parameter #" + i + " is unset"); > } > newSql.append(parameters.get(i)); > newSql.append(parts.get(i)); > } > // change from HIVE-13625 > int paramLoc = 1; > while (getCharIndexFromSqlByParamLocation(sql, '?', paramLoc) > 0) { > // check the user has set the needed parameters > if (parameters.containsKey(paramLoc)) { > int tt = getCharIndexFromSqlByParamLocation(newSql.toString(), '?', > 1); > newSql.deleteCharAt(tt); > newSql.insert(tt, parameters.get(paramLoc)); > } > paramLoc++; > } > {code} > If the number of SQL parts and the number of parameters do not match, an > SQLException is thrown. > Currently, when splitting the SQL, there is no handling of double quotes, so > when the token ('?') is between double quotes, the SQL is still split. > I think that when the token between double quotes is a literal, it is correct not to > split. > For example, in the queries below: > {code:java} > 1: String query = " select 1 from x where qa="?" " > 2: String query = " SELECT 1 FROM `x` WHERE (trecord LIKE "ALA[d_?]%") > {code} > the ? is a literal, so the query should not be split. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
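A minimal sketch of quote-aware splitting along the lines the reporter suggests (a standalone helper, not the actual HivePreparedStatement code, and it ignores escaped quotes for brevity): '?' is treated as a parameter marker only when it sits outside single or double quotes.
{code:java}
import java.util.ArrayList;
import java.util.List;

public class ParamSplitter {
  /** Splits sql on '?' markers, ignoring '?' inside quoted literals. */
  static List<String> splitSql(String sql) {
    List<String> parts = new ArrayList<>();
    StringBuilder current = new StringBuilder();
    char quote = 0;  // 0 = not inside a quoted literal
    for (int i = 0; i < sql.length(); i++) {
      char c = sql.charAt(i);
      if (quote == 0 && (c == '\'' || c == '"')) {
        quote = c;        // entering a literal
      } else if (c == quote) {
        quote = 0;        // leaving the literal
      }
      if (c == '?' && quote == 0) {
        parts.add(current.toString());  // a real parameter marker
        current.setLength(0);
      } else {
        current.append(c);
      }
    }
    parts.add(current.toString());
    return parts;
  }

  public static void main(String[] args) {
    // The '?' inside double quotes is kept as a literal; only the unquoted '?' splits.
    System.out.println(splitSql("select 1 from x where qa=\"?\" and id=?"));
  }
}
{code}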
[jira] [Commented] (HIVE-18306) Fix spark smb tests
[ https://issues.apache.org/jira/browse/HIVE-18306?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16297857#comment-16297857 ] Hive QA commented on HIVE-18306: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 29s{color} | {color:blue} Maven dependency ordering for branch {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 19s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 10s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 2m 17s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus/dev-support/hive-personality.sh | | git revision | master / 4ec47a6 | | modules | C: ql itests U: . | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-8332/yetus.txt | | Powered by | Apache Yetus http://yetus.apache.org | This message was automatically generated. > Fix spark smb tests > --- > > Key: HIVE-18306 > URL: https://issues.apache.org/jira/browse/HIVE-18306 > Project: Hive > Issue Type: Bug >Reporter: Zoltan Haindrich >Assignee: Deepak Jaiswal > Attachments: HIVE-18306.1.patch, HIVE-18306.2.patch > > > It seems to me that > {{TestSparkCliDriver#testCliDriver\[auto_sortmerge_join_10\]}} and > {{TestSparkCliDriver#testCliDriver\[bucketsortoptimize_insert_7\]}} are > failing since HIVE-18208 went in. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HIVE-18310) Test 'vector_reduce_groupby_duplicate_cols.q' is misspelled in testconfiguration.properties
[ https://issues.apache.org/jira/browse/HIVE-18310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16297850#comment-16297850 ] Hive QA commented on HIVE-18310: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12902927/HIVE-18310.2.patch {color:red}ERROR:{color} -1 due to no test(s) being added or modified. {color:red}ERROR:{color} -1 due to 18 failed/errored test(s), 11536 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_join25] (batchId=72) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mapjoin_hook] (batchId=12) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mm_conversions] (batchId=75) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppd_join5] (batchId=35) org.apache.hadoop.hive.cli.TestContribCliDriver.testCliDriver[udtf_output_on_close] (batchId=241) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucketsortoptimize_insert_2] (batchId=152) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[hybridgrace_hashjoin_2] (batchId=157) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata] (batchId=165) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid] (batchId=169) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid_fast] (batchId=160) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] (batchId=160) org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[bucketizedhiveinputformat] (batchId=178) org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[authorization_part] (batchId=93) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[auto_sortmerge_join_10] (batchId=138) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[bucketsortoptimize_insert_7] (batchId=128) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[ppd_join5] (batchId=120) org.apache.hadoop.hive.ql.parse.TestReplicationScenarios.testConstraints (batchId=226) org.apache.hive.jdbc.TestRestrictedList.org.apache.hive.jdbc.TestRestrictedList (batchId=236) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/8331/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/8331/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-8331/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 18 tests failed {noformat} This message is automatically generated. 
ATTACHMENT ID: 12902927 - PreCommit-HIVE-Build > Test 'vector_reduce_groupby_duplicate_cols.q' is misspelled in > testconfiguration.properties > --- > > Key: HIVE-18310 > URL: https://issues.apache.org/jira/browse/HIVE-18310 > Project: Hive > Issue Type: Bug >Reporter: Andrew Sherman >Assignee: Andrew Sherman >Priority: Minor > Attachments: HIVE-18310.1.patch, HIVE-18310.2.patch > > > The new test vector_reduce_groupby_duplicate_cols.q was introduced in > [HIVE-18258] but is misspelled in testconfiguration.properties: > {noformat} > - vector_reduce_grpupby_duplicate_cols.q,\ > + vector_reduce_groupby_duplicate_cols.q,\ > {noformat} > I noticed this because TestDanglingQOuts.checkDanglingQOut failed -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HIVE-18265) desc formatted/extended or show create table can not fully display the result when field or table comment contains tab character
[ https://issues.apache.org/jira/browse/HIVE-18265?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hui Huang updated HIVE-18265: - Affects Version/s: 1.2.1 > desc formatted/extended or show create table can not fully display the result > when field or table comment contains tab character > > > Key: HIVE-18265 > URL: https://issues.apache.org/jira/browse/HIVE-18265 > Project: Hive > Issue Type: Bug > Components: CLI >Affects Versions: 1.2.1, 3.0.0 >Reporter: Hui Huang >Assignee: Hui Huang > Fix For: 3.0.0 > > Attachments: HIVE-18265.1.patch, HIVE-18265.patch > > > Here are some examples: > create table test_comment (id1 string comment 'full_\tname1', id2 string > comment 'full_\tname2', id3 string comment 'full_\tname3') stored as textfile; > When executing `show create table test_comment`, we see the following > content in the console: > {quote} > createtab_stmt > CREATE TABLE `test_comment`( > `id1` string COMMENT 'full_ > `id2` string COMMENT 'full_ > `id3` string COMMENT 'full_ > ROW FORMAT SERDE > 'org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe' > STORED AS INPUTFORMAT > 'org.apache.hadoop.mapred.TextInputFormat' > OUTPUTFORMAT > 'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat' > LOCATION > 'hdfs://xxx/user/huanghui/warehouse/huanghuitest.db/test_comment' > TBLPROPERTIES ( > 'transient_lastDdlTime'='1513095570') > {quote} > The output of `desc formatted test_comment` is similar: > {quote} > col_name data_type comment > \# col_name data_type comment > id1 string full_ > id2 string full_ > id3 string full_ > \# Detailed Table Information > (ignore)... > {quote} > When executing `desc extended test_comment`, the problem is more obvious: > {quote} > col_name data_type comment > id1 string full_ > id2 string full_ > id3 string full_ > Detailed Table Information Table(tableName:test_comment, > dbName:huanghuitest, owner:huanghui, createTime:1513095570, lastAccessTime:0, > retention:0, sd:StorageDescriptor(cols:[FieldSchema(name:id1, type:string, > comment:full_name1), FieldSchema(name:id2, type:string, comment:full_ > {quote} > *the rest of the content is lost*. > The content is not really lost; it just cannot be displayed properly, because > Hive stores the result in a LazyStruct, and LazyStruct uses '\t' as the field > separator: > {code:java} > // LazyStruct.java#parse() > // Go through all bytes in the byte[] > while (fieldByteEnd <= structByteEnd) { > if (fieldByteEnd == structByteEnd || bytes[fieldByteEnd] == separator) { > // Reached the end of a field? > if (lastColumnTakesRest && fieldId == fields.length - 1) { > fieldByteEnd = structByteEnd; > } > startPosition[fieldId] = fieldByteBegin; > fieldId++; > if (fieldId == fields.length || fieldByteEnd == structByteEnd) { > // All fields have been parsed, or bytes have been parsed. > // We need to set the startPosition of fields.length to ensure we > // can use the same formula to calculate the length of each field. > // For missing fields, their starting positions will all be the > same, > // which will make their lengths to be -1 and uncheckedGetField will > // return these fields as NULLs. > for (int i = fieldId; i <= fields.length; i++) { > startPosition[i] = fieldByteEnd + 1; > } > break; > } > fieldByteBegin = fieldByteEnd + 1; > fieldByteEnd++; > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029)
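A tiny standalone illustration of the failure mode described above (plain Java, not Hive code): once a comment containing '\t' is serialized into a tab-delimited row, a LazyStruct-style splitter sees extra field boundaries, so the text after the tab lands in the wrong (or a nonexistent) field.
{code:java}
public class TabSeparatorDemo {
  public static void main(String[] args) {
    // One logical row: column name, type, comment -- but the comment has a tab in it.
    String row = "id1" + "\t" + "string" + "\t" + "full_\tname1";
    String[] fields = row.split("\t");
    // Prints 4, not 3: the comment was cut at the tab, so "name1" becomes a
    // fourth field and a 3-column reader shows the comment as just "full_".
    System.out.println(fields.length);
    for (String f : fields) {
      System.out.println(f);
    }
  }
}
{code}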
[jira] [Commented] (HIVE-18304) datediff() UDF returns a wrong result when dealing with a (date, string) input
[ https://issues.apache.org/jira/browse/HIVE-18304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16297832#comment-16297832 ] Rui Li commented on HIVE-18304: --- I can't reproduce the issue on my side - the two queries return the same result, and my laptop is in UTC+8. Maybe it's fixed by HIVE-15338? [~hengyu.dai], which Hive version are you using? [~xuefuz], the timezone stuff I worked on is about the timestamptz type, so it's not related here. > datediff() UDF returns a wrong result when dealing with a (date, string) input > -- > > Key: HIVE-18304 > URL: https://issues.apache.org/jira/browse/HIVE-18304 > Project: Hive > Issue Type: Bug > Components: UDF >Reporter: Hengyu Dai >Assignee: Hengyu Dai >Priority: Minor > Attachments: 0001.patch > > > For a date type argument, datediff() uses DateConverter to convert the input to a > java Date object; for example, '2017-12-18' becomes 2017-12-18T00:00:00.000+0800. > For a string type argument, datediff() uses TextConverter to convert the string to > a date; for '2012-01-01' we get 2012-01-01T08:00:00.000+0800. > As a result, datediff() returns a number less than the real date diff. > We should use TextConverter to deal with the date input too. > To reproduce: > {code:java} > select datediff(cast('2017-12-18' as date), '2012-01-01'); --2177 > select datediff('2017-12-18', '2012-01-01'); --2178 > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HIVE-18304) datediff() UDF returns a wrong result when dealing with a (date, string) input
[ https://issues.apache.org/jira/browse/HIVE-18304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16297824#comment-16297824 ] Hengyu Dai commented on HIVE-18304: --- The SimpleDateFormat.parse(String source) method converts a String (interpreted as UTC) to a java.util.Date (rendered in the current JVM timezone). This can introduce a time deviation when the JVM timezone is not UTC; my environment is GMT+8, so 8 hours are added compared to the UTC time. For a date type argument, on the other hand, the default JVM timezone is used. The uploaded patch treats the String type and the Date type the same way to remove the deviation. > datediff() UDF returns a wrong result when dealing with a (date, string) input > -- > > Key: HIVE-18304 > URL: https://issues.apache.org/jira/browse/HIVE-18304 > Project: Hive > Issue Type: Bug > Components: UDF >Reporter: Hengyu Dai >Assignee: Hengyu Dai >Priority: Minor > Attachments: 0001.patch > > > For a date type argument, datediff() uses DateConverter to convert the input to a > java Date object; for example, '2017-12-18' becomes 2017-12-18T00:00:00.000+0800. > For a string type argument, datediff() uses TextConverter to convert the string to > a date; for '2012-01-01' we get 2012-01-01T08:00:00.000+0800. > As a result, datediff() returns a number less than the real date diff. > We should use TextConverter to deal with the date input too. > To reproduce: > {code:java} > select datediff(cast('2017-12-18' as date), '2012-01-01'); --2177 > select datediff('2017-12-18', '2012-01-01'); --2178 > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029)
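To make the 8-hour skew concrete, here is a self-contained sketch of the arithmetic (plain JDK code, not Hive's converters), assuming a JVM running in GMT+8: one side of the diff is parsed against UTC, the other against local time, and the 16-hour remainder truncates away one day.
{code:java}
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.TimeZone;

public class DateDiffSkewDemo {
  public static void main(String[] args) throws Exception {
    TimeZone.setDefault(TimeZone.getTimeZone("GMT+8"));

    // String path: the text is parsed against UTC, so the instant is
    // 2012-01-01T00:00Z, i.e. 2012-01-01T08:00 in local GMT+8 time.
    SimpleDateFormat utcFmt = new SimpleDateFormat("yyyy-MM-dd");
    utcFmt.setTimeZone(TimeZone.getTimeZone("UTC"));
    Date stringSide = utcFmt.parse("2012-01-01");

    // Date path: local midnight, i.e. 2017-12-18T00:00+08:00.
    Date dateSide = java.sql.Date.valueOf("2017-12-18");

    // The gap is 2177 days and 16 hours; integer division truncates to 2177,
    // one day short of the true 2178.
    long days = (dateSide.getTime() - stringSide.getTime()) / 86400000L;
    System.out.println(days);
  }
}
{code}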
[jira] [Commented] (HIVE-18310) Test 'vector_reduce_groupby_duplicate_cols.q' is misspelled in testconfiguration.properties
[ https://issues.apache.org/jira/browse/HIVE-18310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16297818#comment-16297818 ] Hive QA commented on HIVE-18310: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 23s{color} | {color:blue} Maven dependency ordering for branch {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 24s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 9s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 2m 21s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus/dev-support/hive-personality.sh | | git revision | master / 4ec47a6 | | modules | C: ql itests U: . | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-8331/yetus.txt | | Powered by | Apache Yetus http://yetus.apache.org | This message was automatically generated. > Test 'vector_reduce_groupby_duplicate_cols.q' is misspelled in > testconfiguration.properties > --- > > Key: HIVE-18310 > URL: https://issues.apache.org/jira/browse/HIVE-18310 > Project: Hive > Issue Type: Bug >Reporter: Andrew Sherman >Assignee: Andrew Sherman >Priority: Minor > Attachments: HIVE-18310.1.patch, HIVE-18310.2.patch > > > The new test vector_reduce_groupby_duplicate_cols.q was introduced in > [HIVE-18258] but is misspelled in testconfiguration.properties: > {noformat} > - vector_reduce_grpupby_duplicate_cols.q,\ > + vector_reduce_groupby_duplicate_cols.q,\ > {noformat} > I noticed this because TestDanglingQOuts.checkDanglingQOut failed -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HIVE-18316) HiveEndPoint should only work with full acid tables
[ https://issues.apache.org/jira/browse/HIVE-18316?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16297815#comment-16297815 ] Hive QA commented on HIVE-18316: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12902928/HIVE-18316.01.patch {color:red}ERROR:{color} -1 due to no test(s) being added or modified. {color:red}ERROR:{color} -1 due to 688 failed/errored test(s), 7030 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_custom_key2] (batchId=238) org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_custom_key] (batchId=238) org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_index] (batchId=238) org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_joins] (batchId=238) org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_predicate_pushdown] (batchId=238) org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_queries] (batchId=238) org.apache.hadoop.hive.cli.TestAccumuloCliDriver.testCliDriver[accumulo_single_sourced_multi_insert] (batchId=238) org.apache.hadoop.hive.cli.TestBeeLineDriver.org.apache.hadoop.hive.cli.TestBeeLineDriver (batchId=246) org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[buckets] (batchId=249) org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[ctas_blobstore_to_blobstore] (batchId=249) org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[ctas_blobstore_to_hdfs] (batchId=249) org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[ctas_hdfs_to_blobstore] (batchId=249) org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[having] (batchId=249) org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[insert_blobstore_to_blobstore] (batchId=249) org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[insert_empty_into_blobstore] (batchId=249) org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[insert_into_dynamic_partitions] (batchId=249) org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[insert_into_table] (batchId=249) org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[insert_overwrite_directory] (batchId=249) org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[insert_overwrite_dynamic_partitions] (batchId=249) org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[insert_overwrite_dynamic_partitions_merge_move] (batchId=249) org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[insert_overwrite_dynamic_partitions_merge_only] (batchId=249) org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[insert_overwrite_dynamic_partitions_move_only] (batchId=249) org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[insert_overwrite_table] (batchId=249) org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[join2] (batchId=249) org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[join] (batchId=249) org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[map_join] (batchId=249) org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[map_join_on_filter] (batchId=249) org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[multiple_agg] (batchId=249) org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[nested_outer_join] (batchId=249) org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[orc_buckets] 
(batchId=249) org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[orc_format_nonpart] (batchId=249) org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[orc_format_part] (batchId=249) org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[orc_nonstd_partitions_loc] (batchId=249) org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[ptf_general_queries] (batchId=249) org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[ptf_matchpath] (batchId=249) org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[ptf_orcfile] (batchId=249) org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[ptf_persistence] (batchId=249) org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[ptf_rcfile] (batchId=249) org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[ptf_seqfile] (batchId=249) org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[rcfile_buckets] (batchId=249) org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[rcfile_format_nonpart] (batchId=249) org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[rcfile_format_part] (batchId=249) org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[rcfile_nonstd_partitions_loc] (batchId=249) org.apache.hadoop.hive.cli.TestBlobstoreCliDriver.testCliDriver[write_final_output_blobstore
[jira] [Commented] (HIVE-18316) HiveEndPoint should only work with full acid tables
[ https://issues.apache.org/jira/browse/HIVE-18316?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16297804#comment-16297804 ] Hive QA commented on HIVE-18316: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 1s{color} | {color:blue} Findbugs executables are not available. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 25s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 5m 27s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 21s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 46s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 4s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 20s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 52s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 18s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 18s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 32s{color} | {color:red} ql: The patch generated 2 new + 117 unchanged - 0 fixed = 119 total (was 117) {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 10s{color} | {color:red} hcatalog/streaming: The patch generated 1 new + 124 unchanged - 3 fixed = 125 total (was 127) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 5s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 12s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 15m 55s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus/dev-support/hive-personality.sh | | git revision | master / 4ec47a6 | | Default Java | 1.8.0_111 | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-8330/yetus/diff-checkstyle-ql.txt | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-8330/yetus/diff-checkstyle-hcatalog_streaming.txt | | modules | C: ql hcatalog/streaming U: . | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-8330/yetus.txt | | Powered by | Apache Yetus http://yetus.apache.org | This message was automatically generated. > HiveEndPoint should only work with full acid tables > --- > > Key: HIVE-18316 > URL: https://issues.apache.org/jira/browse/HIVE-18316 > Project: Hive > Issue Type: Bug > Components: HCatalog, Transactions >Affects Versions: 3.0.0 >Reporter: Eugene Koifman >Assignee: Eugene Koifman > Attachments: HIVE-18316.01.patch > > > Now that we have full acid and 1/4 acid, the check needs to be updated to > check for full acid tables. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HIVE-18306) Fix spark smb tests
[ https://issues.apache.org/jira/browse/HIVE-18306?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16297797#comment-16297797 ] Jason Dere commented on HIVE-18306: --- +1 pending tests. > Fix spark smb tests > --- > > Key: HIVE-18306 > URL: https://issues.apache.org/jira/browse/HIVE-18306 > Project: Hive > Issue Type: Bug >Reporter: Zoltan Haindrich >Assignee: Deepak Jaiswal > Attachments: HIVE-18306.1.patch, HIVE-18306.2.patch > > > It seems to me that > {{TestSparkCliDriver#testCliDriver\[auto_sortmerge_join_10\]}} and > {{TestSparkCliDriver#testCliDriver\[bucketsortoptimize_insert_7\]}} are > failing since HIVE-18208 went in. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HIVE-18294) add switch to make acid table the default
[ https://issues.apache.org/jira/browse/HIVE-18294?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eugene Koifman updated HIVE-18294: -- Description: it would be convenient for testing to have a switch that enables the behavior where all suitable tables (currently ORC + not sorted) are automatically created with transactional=true, i.e. full acid. (was: it would be convenient for testing to have a switch that enables the behavior where all suitable table tables (currently ORC + not sorted) are automatically reacted with transactional=true.) > add switch to make acid table the default > - > > Key: HIVE-18294 > URL: https://issues.apache.org/jira/browse/HIVE-18294 > Project: Hive > Issue Type: Sub-task > Components: Transactions >Affects Versions: 3.0.0 >Reporter: Eugene Koifman >Assignee: Eugene Koifman > Attachments: HIVE-18294.01.patch, HIVE-18294.03.patch, > HIVE-18294.04.patch > > > it would be convenient for testing to have a switch that enables the behavior > where all suitable tables (currently ORC + not sorted) are > automatically created with transactional=true, i.e. full acid. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HIVE-18294) add switch to make acid table the default
[ https://issues.apache.org/jira/browse/HIVE-18294?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16297795#comment-16297795 ] Eugene Koifman commented on HIVE-18294: --- mvn test -Dtest=TestSparkPerfCliDriver -Dqfile=query39.q runs fine locally; no related failures. [~alangates], could you review please? > add switch to make acid table the default > - > > Key: HIVE-18294 > URL: https://issues.apache.org/jira/browse/HIVE-18294 > Project: Hive > Issue Type: Sub-task > Components: Transactions >Affects Versions: 3.0.0 >Reporter: Eugene Koifman >Assignee: Eugene Koifman > Attachments: HIVE-18294.01.patch, HIVE-18294.03.patch, > HIVE-18294.04.patch > > > it would be convenient for testing to have a switch that enables the behavior > where all suitable tables (currently ORC + not sorted) are > automatically created with transactional=true. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HIVE-18316) HiveEndPoint should only work with full acid tables
[ https://issues.apache.org/jira/browse/HIVE-18316?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16297776#comment-16297776 ] Eugene Koifman commented on HIVE-18316: --- [~alangates], could you review please? > HiveEndPoint should only work with full acid tables > --- > > Key: HIVE-18316 > URL: https://issues.apache.org/jira/browse/HIVE-18316 > Project: Hive > Issue Type: Bug > Components: HCatalog, Transactions >Affects Versions: 3.0.0 >Reporter: Eugene Koifman >Assignee: Eugene Koifman > Attachments: HIVE-18316.01.patch > > > Now that we have full acid and 1/4 acid, the check needs to be updated to > check for full acid tables. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HIVE-18315) update tests use non-acid tables
[ https://issues.apache.org/jira/browse/HIVE-18315?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16297774#comment-16297774 ] Hive QA commented on HIVE-18315: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12902921/HIVE-18315.01.patch {color:green}SUCCESS:{color} +1 due to 12 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 18 failed/errored test(s), 11531 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_join25] (batchId=72) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mapjoin_hook] (batchId=12) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppd_join5] (batchId=35) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucket_map_join_tez1] (batchId=170) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucketsortoptimize_insert_2] (batchId=152) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[hybridgrace_hashjoin_2] (batchId=157) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata] (batchId=165) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid] (batchId=169) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid_fast] (batchId=160) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] (batchId=160) org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[authorization_part] (batchId=93) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[auto_sortmerge_join_10] (batchId=138) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[bucketsortoptimize_insert_7] (batchId=128) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[ppd_join5] (batchId=120) org.apache.hadoop.hive.cli.control.TestDanglingQOuts.checkDanglingQOut (batchId=209) org.apache.hadoop.hive.ql.TestTxnNoBuckets.testDefault (batchId=278) org.apache.hadoop.hive.ql.TestTxnNoBucketsVectorized.testDefault (batchId=278) org.apache.hadoop.hive.ql.parse.TestReplicationScenarios.testConstraints (batchId=226) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/8329/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/8329/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-8329/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 18 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12902921 - PreCommit-HIVE-Build > update tests use non-acid tables > > > Key: HIVE-18315 > URL: https://issues.apache.org/jira/browse/HIVE-18315 > Project: Hive > Issue Type: Sub-task > Components: Transactions >Affects Versions: 3.0.0 >Reporter: Eugene Koifman >Assignee: Eugene Koifman > Attachments: HIVE-18315.01.patch > > > some tests like TestTxnLoadData need to create non-acid tables so that > non-acid to acid conversion can be tested, so they need explicit > tblproperties('transactional'='false'). > HCat doesn't support acid, so the tests need to use non-acid tables. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HIVE-18190) Consider looking at ORC file schema rather than using _metadata_acid file
[ https://issues.apache.org/jira/browse/HIVE-18190?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eugene Koifman updated HIVE-18190: -- Status: Patch Available (was: Open) > Consider looking at ORC file schema rather than using _metadata_acid file > - > > Key: HIVE-18190 > URL: https://issues.apache.org/jira/browse/HIVE-18190 > Project: Hive > Issue Type: Sub-task > Components: Transactions >Affects Versions: 3.0.0 >Reporter: Eugene Koifman >Assignee: Eugene Koifman > Attachments: HIVE-18190.01.patch > > > See if it's possible to just look at the schema of the file in base_ or > delta_ to see if it has Acid metadata columns. If not, it's an 'original' > file and needs ROW_IDs generated. > see more discussion at https://reviews.apache.org/r/64131/ -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HIVE-18190) Consider looking at ORC file schema rather than using _metadata_acid file
[ https://issues.apache.org/jira/browse/HIVE-18190?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eugene Koifman updated HIVE-18190: -- Attachment: HIVE-18190.01.patch > Consider looking at ORC file schema rather than using _metadata_acid file > - > > Key: HIVE-18190 > URL: https://issues.apache.org/jira/browse/HIVE-18190 > Project: Hive > Issue Type: Sub-task > Components: Transactions >Affects Versions: 3.0.0 >Reporter: Eugene Koifman >Assignee: Eugene Koifman > Attachments: HIVE-18190.01.patch > > > See if it's possible to just look at the schema of the file in base_ or > delta_ to see if it has Acid metadata columns. If not, it's an 'original' > file and needs ROW_IDs generated. > see more discussion at https://reviews.apache.org/r/64131/ -- This message was sent by Atlassian JIRA (v6.4.14#64029)
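As a rough illustration of the approach the issue above describes, one can open the file and inspect its top-level schema for acid metadata columns using the public ORC reader API. This is a sketch, not the patch's actual code; the column list reflects the standard acid row layout, and the helper name is made up:
{code:java}
import java.util.Arrays;
import java.util.List;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.orc.OrcFile;
import org.apache.orc.Reader;
import org.apache.orc.TypeDescription;

public class AcidSchemaProbe {
  private static final List<String> ACID_COLUMNS = Arrays.asList(
      "operation", "originalTransaction", "bucket", "rowId", "currentTransaction", "row");

  /** Returns true when the ORC file already carries acid metadata columns. */
  static boolean hasAcidSchema(Path file, Configuration conf) throws Exception {
    Reader reader = OrcFile.createReader(file, OrcFile.readerOptions(conf));
    TypeDescription schema = reader.getSchema();
    // An 'original' file exposes the raw row schema; an acid file wraps the row
    // in a struct whose fields are the metadata columns listed above.
    return schema.getFieldNames().equals(ACID_COLUMNS);
  }
}
{code}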
[jira] [Updated] (HIVE-18248) Clean up parameters
[ https://issues.apache.org/jira/browse/HIVE-18248?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sahil Takiar updated HIVE-18248: Resolution: Fixed Target Version/s: 3.0.0 Status: Resolved (was: Patch Available) Pushed to master. > Clean up parameters > --- > > Key: HIVE-18248 > URL: https://issues.apache.org/jira/browse/HIVE-18248 > Project: Hive > Issue Type: Bug >Reporter: Janaki Lahorani >Assignee: Janaki Lahorani > Fix For: 3.0.0 > > Attachments: HIVE-18248.1.patch, HIVE-18248.2.patch, > HIVE-18248.3.patch > > > Clean up of parameters that need not change at run time. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HIVE-17684) HoS memory issues with MapJoinMemoryExhaustionHandler
[ https://issues.apache.org/jira/browse/HIVE-17684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16297752#comment-16297752 ] Sahil Takiar commented on HIVE-17684: --- Yeah, I had a feeling this would happen. Upgrading to Hadoop 3.0.0 from Hadoop 3.0.0-beta1 probably needs to be done in a separate JIRA, and may require some work. I've filed HIVE-18319 to do this. [~aihuaxu] I see a lot of failures due to: {code} Caused by: java.io.FileNotFoundException: File /home/hiveptest/.../itests/hive-unit/$%7Btest.tmp.dir%7D/hadoop-tmp/mapred/local/localRunner/hiveptest/jobcache/... {code} Looks like the substitution for {{test.tmp.dir}} isn't working. However, [~mi...@cloudera.com] all the failures for {{TestSparkCliDriver}} look related. > HoS memory issues with MapJoinMemoryExhaustionHandler > - > > Key: HIVE-17684 > URL: https://issues.apache.org/jira/browse/HIVE-17684 > Project: Hive > Issue Type: Bug > Components: Spark >Reporter: Sahil Takiar >Assignee: Misha Dmitriev > Attachments: HIVE-17684.01.patch, HIVE-17684.02.patch > > > We have seen a number of memory issues due to the {{HashSinkOperator}}'s use of > the {{MapJoinMemoryExhaustionHandler}}. This handler is meant to detect > scenarios where the small table is taking too much space in memory, in which > case a {{MapJoinMemoryExhaustionError}} is thrown. > The configs to control this logic are: > {{hive.mapjoin.localtask.max.memory.usage}} (default 0.90) > {{hive.mapjoin.followby.gby.localtask.max.memory.usage}} (default 0.55) > The handler works by using the {{MemoryMXBean}} and uses the following logic > to estimate how much memory the {{HashMap}} is consuming: > {{MemoryMXBean#getHeapMemoryUsage().getUsed() / > MemoryMXBean#getHeapMemoryUsage().getMax()}} > The issue is that {{MemoryMXBean#getHeapMemoryUsage().getUsed()}} can be > inaccurate. The value returned by this method includes all reachable and > unreachable memory on the heap, so there may be a bunch of garbage data, and > the JVM just hasn't taken the time to reclaim it all. This can lead to > intermittent failures of this check even though a simple GC would have > reclaimed enough space for the process to continue working. > We should re-think the usage of {{MapJoinMemoryExhaustionHandler}} for HoS. > In Hive-on-MR this probably made sense to use because every Hive task was run > in a dedicated container, so a Hive Task could assume it created most of the > data on the heap. However, in Hive-on-Spark there can be multiple Hive Tasks > running in a single executor, each doing different things. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
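For reference, the check described above boils down to something like the following (a simplified sketch, not the actual MapJoinMemoryExhaustionHandler source; the 0.90 threshold mirrors the default of hive.mapjoin.localtask.max.memory.usage):
{code:java}
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

public class MemoryExhaustionSketch {
  public static void main(String[] args) {
    MemoryMXBean memoryMXBean = ManagementFactory.getMemoryMXBean();
    MemoryUsage heap = memoryMXBean.getHeapMemoryUsage();
    // getUsed() counts unreachable-but-uncollected objects too, so this ratio
    // can look critical even though a GC would free plenty of space.
    double percentage = (double) heap.getUsed() / heap.getMax();
    if (percentage > 0.90) {
      throw new Error("hash table loading exceeded memory limit: " + percentage);
    }
    System.out.printf("heap usage ratio: %.2f%n", percentage);
  }
}
{code}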
[jira] [Updated] (HIVE-18159) Vectorization: Support Map type in MapWork
[ https://issues.apache.org/jira/browse/HIVE-18159?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ferdinand Xu updated HIVE-18159: Resolution: Fixed Fix Version/s: 3.0.0 Status: Resolved (was: Patch Available) Pushed to master. Thanks [~colin_mjj] for the contribution. > Vectorization: Support Map type in MapWork > -- > > Key: HIVE-18159 > URL: https://issues.apache.org/jira/browse/HIVE-18159 > Project: Hive > Issue Type: Improvement >Reporter: Colin Ma >Assignee: Colin Ma > Fix For: 3.0.0 > > Attachments: HIVE-18159.001.patch, HIVE-18159.002.patch > > > Support for complex types in vectorization was finished in HIVE-16589, but the Map > type is still not supported in MapWork. This ticket targets supporting it in > MapWork when vectorization is enabled. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HIVE-18315) update tests use non-acid tables
[ https://issues.apache.org/jira/browse/HIVE-18315?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16297738#comment-16297738 ] Hive QA commented on HIVE-18315: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 1s{color} | {color:blue} Findbugs executables are not available. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 26s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 5m 21s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 57s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 59s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 25s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 20s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 17s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 50s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 50s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 2s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 27s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 12s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 18m 49s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus/dev-support/hive-personality.sh | | git revision | master / 14df3b0 | | Default Java | 1.8.0_111 | | modules | C: ql hcatalog/core hcatalog/hcatalog-pig-adapter hcatalog/streaming U: . | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-8329/yetus.txt | | Powered by | Apache Yetus http://yetus.apache.org | This message was automatically generated. 
> update tests use non-acid tables > > > Key: HIVE-18315 > URL: https://issues.apache.org/jira/browse/HIVE-18315 > Project: Hive > Issue Type: Sub-task > Components: Transactions >Affects Versions: 3.0.0 >Reporter: Eugene Koifman >Assignee: Eugene Koifman > Attachments: HIVE-18315.01.patch > > > some tests like TestTxnLoadData need to create non-acid tables so that > non-acid to acid conversion can be tested, so they need explicit > tblproperties('transactional'='false'). > HCat doesn't support acid, so the tests need to use non-acid tables. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HIVE-18294) add switch to make acid table the default
[ https://issues.apache.org/jira/browse/HIVE-18294?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16297731#comment-16297731 ] Eugene Koifman commented on HIVE-18294: --- auto_join25 has the same failure in https://builds.apache.org/job/PreCommit-HIVE-Build/8320/testReport/org.apache.hadoop.hive.cli/TestCliDriver/testCliDriver_auto_join25_/ and llap_smb in https://builds.apache.org/job/PreCommit-HIVE-Build/8320/testReport/org.apache.hadoop.hive.cli/TestMiniLlapCliDriver/testCliDriver_llap_smb_/ > add switch to make acid table the default > - > > Key: HIVE-18294 > URL: https://issues.apache.org/jira/browse/HIVE-18294 > Project: Hive > Issue Type: Sub-task > Components: Transactions >Affects Versions: 3.0.0 >Reporter: Eugene Koifman >Assignee: Eugene Koifman > Attachments: HIVE-18294.01.patch, HIVE-18294.03.patch, > HIVE-18294.04.patch > > > it would be convenient for testing to have a switch that enables the behavior > where all suitable tables (currently ORC + not sorted) are > automatically created with transactional=true. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HIVE-17684) HoS memory issues with MapJoinMemoryExhaustionHandler
[ https://issues.apache.org/jira/browse/HIVE-17684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16297728#comment-16297728 ] Misha Dmitriev commented on HIVE-17684: --- I've fixed some checkstyle warnings (several others, e.g. about indentation, seem strange, so I'd rather ignore them). I've looked at the test failures and I am not sure how to debug this. Note that 3 previous runs of the same Jenkins build have 14..16 test failures each, so I suspect something is already broken here. In my case there are a lot more test failures, but I suspect that most of them are irrelevant given that my change is quite small. I wonder if my update of the hadoop dependency could have such an effect? > HoS memory issues with MapJoinMemoryExhaustionHandler > - > > Key: HIVE-17684 > URL: https://issues.apache.org/jira/browse/HIVE-17684 > Project: Hive > Issue Type: Bug > Components: Spark >Reporter: Sahil Takiar >Assignee: Misha Dmitriev > Attachments: HIVE-17684.01.patch, HIVE-17684.02.patch > > > We have seen a number of memory issues due to the {{HashSinkOperator}}'s use of > the {{MapJoinMemoryExhaustionHandler}}. This handler is meant to detect > scenarios where the small table is taking too much space in memory, in which > case a {{MapJoinMemoryExhaustionError}} is thrown. > The configs to control this logic are: > {{hive.mapjoin.localtask.max.memory.usage}} (default 0.90) > {{hive.mapjoin.followby.gby.localtask.max.memory.usage}} (default 0.55) > The handler works by using the {{MemoryMXBean}} and the following logic > to estimate how much memory the {{HashMap}} is consuming: > {{MemoryMXBean#getHeapMemoryUsage().getUsed() / > MemoryMXBean#getHeapMemoryUsage().getMax()}} > The issue is that {{MemoryMXBean#getHeapMemoryUsage().getUsed()}} can be > inaccurate. The value returned by this method includes all reachable and > unreachable memory on the heap, so there may be a bunch of garbage data, and > the JVM just hasn't taken the time to reclaim it all. This can lead to > intermittent failures of this check even though a simple GC would have > reclaimed enough space for the process to continue working. > We should re-think the usage of {{MapJoinMemoryExhaustionHandler}} for HoS. > In Hive-on-MR this probably made sense to use because every Hive task was run > in a dedicated container, so a Hive Task could assume it created most of the > data on the heap. However, in Hive-on-Spark there can be multiple Hive Tasks > running in a single executor, each doing different things. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
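For illustration, a minimal self-contained sketch of the heap-usage check described above (the class name and error type are placeholders, not Hive's actual implementation; it assumes the JVM was started with an explicit -Xmx so getMax() is defined):
{code:java}
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

public class HeapUsageCheckDemo {
  // placeholder for hive.mapjoin.localtask.max.memory.usage (default 0.90)
  private static final double MAX_MEMORY_USAGE = 0.90;

  public static void main(String[] args) {
    MemoryMXBean memoryMXBean = ManagementFactory.getMemoryMXBean();
    MemoryUsage heap = memoryMXBean.getHeapMemoryUsage();
    // getUsed() also counts unreachable-but-uncollected objects, which is
    // exactly why this ratio can spike even though a GC would free the space
    double ratio = (double) heap.getUsed() / heap.getMax();
    if (ratio > MAX_MEMORY_USAGE) {
      // stands in for MapJoinMemoryExhaustionError
      throw new Error(String.format("memory usage %.2f exceeds limit %.2f",
          ratio, MAX_MEMORY_USAGE));
    }
    System.out.printf("heap usage ratio: %.2f%n", ratio);
  }
}
{code}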
[jira] [Commented] (HIVE-17684) HoS memory issues with MapJoinMemoryExhaustionHandler
[ https://issues.apache.org/jira/browse/HIVE-17684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16297718#comment-16297718 ] Hive QA commented on HIVE-17684: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12902915/HIVE-17684.02.patch {color:red}ERROR:{color} -1 due to no test(s) being added or modified. {color:red}ERROR:{color} -1 due to 267 failed/errored test(s), 10832 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestBeeLineDriver.org.apache.hadoop.hive.cli.TestBeeLineDriver (batchId=246) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[acid_mapjoin] (batchId=10) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[autoColumnStats_1] (batchId=22) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[autoColumnStats_2] (batchId=83) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[autoColumnStats_9] (batchId=36) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_join0] (batchId=87) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_join10] (batchId=35) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_join11] (batchId=9) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_join12] (batchId=24) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_join13] (batchId=80) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_join14] (batchId=14) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_join15] (batchId=15) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_join17] (batchId=81) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_join19_inclause] (batchId=17) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_join1] (batchId=77) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_join20] (batchId=88) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_join21] (batchId=80) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_join22] (batchId=56) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_join23] (batchId=18) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_join24] (batchId=74) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_join25] (batchId=72) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_join26] (batchId=13) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_join27] (batchId=89) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_join29] (batchId=54) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_join31] (batchId=44) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_join33] (batchId=12) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_join3] (batchId=81) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_join4] (batchId=70) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_join5] (batchId=73) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_join8] (batchId=85) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_join9] (batchId=75) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_join_stats2] (batchId=86) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_join_stats] (batchId=48) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_join_without_localtask] (batchId=1) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_sortmerge_join_10] (batchId=72) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_sortmerge_join_11] (batchId=85) 
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_sortmerge_join_12] (batchId=33) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_sortmerge_join_14] (batchId=12) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_sortmerge_join_1] (batchId=45) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_sortmerge_join_2] (batchId=48) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_sortmerge_join_3] (batchId=2) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_sortmerge_join_4] (batchId=62) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_sortmerge_join_5] (batchId=87) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_sortmerge_join_7] (batchId=89) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[bucket_map_join_1] (batchId=66) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[bucket_map_join_2] (batchId=57) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[bucket_map_join_spark1] (batchId=67) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[bucket_map_join_spark2] (batchId=3) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[bucket_map_join_spark3] (batchId=44) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[bucket_map_join_spark4] (batchId=1) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[bucketcontext_1] (batchId=32) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriv
[jira] [Updated] (HIVE-18283) Better error message and error code for HoS exceptions
[ https://issues.apache.org/jira/browse/HIVE-18283?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chao Sun updated HIVE-18283: Resolution: Fixed Fix Version/s: 3.0.0 Status: Resolved (was: Patch Available) Committed to master branch. Thanks [~xuefuz] and [~asherman] for the review! > Better error message and error code for HoS exceptions > -- > > Key: HIVE-18283 > URL: https://issues.apache.org/jira/browse/HIVE-18283 > Project: Hive > Issue Type: Improvement > Components: Spark >Reporter: Chao Sun >Assignee: Chao Sun > Fix For: 3.0.0 > > Attachments: HIVE-18283.0.patch, HIVE-18283.1.patch, > HIVE-18283.2.patch, HIVE-18283.3.patch > > > Right now HoS only uses a few error codes. For the majority of the errors, > users will see an error code 1 followed by a lengthy stacktrace. This is not > ideal since: > 1. It is often hard to find the root cause - sometimes it is hidden deeply > inside the stacktrace. > 2. After identifying the root cause, it is not easy to find a fix. Often users > have to copy & paste the error message and google it. > 3. It is not clear whether the error is transient or not, depending on which > users may want to retry the query. > To improve the above, this JIRA proposes to assign error codes & canonical > error messages for different HoS errors. We can take advantage of the > existing {{ErrorMsg}} class. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
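As a rough illustration of the proposal (the enum, codes, and regex below are invented for this sketch; Hive's real canonical messages live in the existing {{ErrorMsg}} class), mapping known failure patterns to stable error codes could look like:
{code:java}
import java.util.regex.Pattern;

public class HosErrorMapperDemo {
  // hypothetical canonical errors; the retryable flag tells users whether a retry may help
  enum CanonicalError {
    SPARK_CREATE_CLIENT_TIMEOUT(30040, "Timed out creating Spark client", true),
    GENERIC(1, "Unknown Hive on Spark error", false);

    final int code;
    final String message;
    final boolean retryable;

    CanonicalError(int code, String message, boolean retryable) {
      this.code = code;
      this.message = message;
      this.retryable = retryable;
    }
  }

  private static final Pattern CLIENT_TIMEOUT =
      Pattern.compile("(?i)timed? ?out.*spark client");

  // walk the cause chain so a root cause buried deep in a long stacktrace is found
  static CanonicalError classify(Throwable t) {
    for (Throwable cause = t; cause != null; cause = cause.getCause()) {
      if (CLIENT_TIMEOUT.matcher(String.valueOf(cause.getMessage())).find()) {
        return CanonicalError.SPARK_CREATE_CLIENT_TIMEOUT;
      }
    }
    return CanonicalError.GENERIC;
  }

  public static void main(String[] args) {
    Throwable t = new RuntimeException(
        new RuntimeException("Timed out waiting for Spark client"));
    CanonicalError e = classify(t);
    System.out.println(e.code + ": " + e.message + " (retryable=" + e.retryable + ")");
  }
}
{code}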
[jira] [Commented] (HIVE-18318) LLAP record reader should check interrupt even when not blocking
[ https://issues.apache.org/jira/browse/HIVE-18318?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16297702#comment-16297702 ] Gopal V commented on HIVE-18318: LGTM - +1 AFAIK, the operator issue was logged before - HIVE-15889 > LLAP record reader should check interrupt even when not blocking > > > Key: HIVE-18318 > URL: https://issues.apache.org/jira/browse/HIVE-18318 > Project: Hive > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin > Attachments: HIVE-18318.patch > > > Hive operators don't check interrupts, and may not do blocking operations. > LLAP record reader only blocks if IO is slower than processing; so, if IO is > fast enough, it will never block (at least not interruptibly, the sync > w/IO on the object does not check interrupts), and thus never catch > interrupts. > So, the task would be impossible to terminate. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
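A minimal sketch of the kind of fix the description asks for (a hypothetical reader, not the actual LLAP code): check the interrupt flag on every {{next()}} call, so the fast path that never parks on a blocking wait still observes termination requests.
{code:java}
import java.util.ArrayDeque;
import java.util.Queue;

public class InterruptAwareReaderDemo {
  private final Queue<Object> ready = new ArrayDeque<>();

  public Object next() throws InterruptedException {
    // without this, a reader whose IO always stays ahead of processing
    // never blocks and so never notices it was interrupted
    if (Thread.interrupted()) {
      throw new InterruptedException("task was asked to terminate");
    }
    Object row = ready.poll();
    if (row == null) {
      // slow-IO path: wait() is interruptible, so this path was already safe
      synchronized (this) {
        wait(100);
      }
      row = ready.poll();
    }
    return row;
  }

  public static void main(String[] args) throws Exception {
    InterruptAwareReaderDemo reader = new InterruptAwareReaderDemo();
    Thread.currentThread().interrupt(); // simulate a kill request
    try {
      reader.next();
    } catch (InterruptedException e) {
      System.out.println("terminated promptly: " + e.getMessage());
    }
  }
}
{code}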
[jira] [Commented] (HIVE-17684) HoS memory issues with MapJoinMemoryExhaustionHandler
[ https://issues.apache.org/jira/browse/HIVE-17684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16297697#comment-16297697 ] Hive QA commented on HIVE-17684: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 1s{color} | {color:blue} Findbugs executables are not available. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 23s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 5m 29s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 8s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 1s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 6m 30s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 36s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 34s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 33s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 33s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 37s{color} | {color:red} ql: The patch generated 12 new + 73 unchanged - 1 fixed = 85 total (was 74) {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 28s{color} | {color:red} root: The patch generated 12 new + 73 unchanged - 1 fixed = 85 total (was 74) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 5m 57s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 11s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 41m 52s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc xml compile findbugs checkstyle | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus/dev-support/hive-personality.sh | | git revision | master / 00212e0 | | Default Java | 1.8.0_111 | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-8328/yetus/diff-checkstyle-ql.txt | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-8328/yetus/diff-checkstyle-root.txt | | modules | C: ql . U: . | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-8328/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. > HoS memory issues with MapJoinMemoryExhaustionHandler > - > > Key: HIVE-17684 > URL: https://issues.apache.org/jira/browse/HIVE-17684 > Project: Hive > Issue Type: Bug > Components: Spark >Reporter: Sahil Takiar >Assignee: Misha Dmitriev > Attachments: HIVE-17684.01.patch, HIVE-17684.02.patch > > > We have seen a number of memory issues due the {{HashSinkOperator}} use of > the {{MapJoinMemoryExhaustionHandler}}. This handler is meant to detect > scenarios where the small table is taking too much space in memory, in which > case a {{MapJoinMemoryExhaustionError}} is thrown. > The configs to control this logic are: > {{hive.mapjoin.localtask.max.memory.usage}} (default 0.90) > {{hive.mapjoin.followby.gby.localtask.max.memory.usage}} (default 0.55) > The handler works by using the {{MemoryMXBean}} and uses the following logic > to estimate how muc
[jira] [Comment Edited] (HIVE-18304) datediff() UDF returns a wrong result when dealing with a (date, string) input
[ https://issues.apache.org/jira/browse/HIVE-18304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16297688#comment-16297688 ] Xuefu Zhang edited comment on HIVE-18304 at 12/20/17 12:20 AM: --- [~lirui] Can you comment on this issue? Changes here might need to be compatible with the timezone stuff you worked on previously. Thanks. was (Author: xuefuz): [~lirui] Can you comment on this issue? This might be related to the timezone stuff you worked on previously. Thanks. > datediff() UDF returns a wrong result when dealing with a (date, string) input > -- > > Key: HIVE-18304 > URL: https://issues.apache.org/jira/browse/HIVE-18304 > Project: Hive > Issue Type: Bug > Components: UDF >Reporter: Hengyu Dai >Assignee: Hengyu Dai >Priority: Minor > Attachments: 0001.patch > > > for a date type argument, datediff() uses DateConverter to convert input to a > java Date object, > for example, a '2017-12-18' will get 2017-12-18T00:00:00.000+0800 > for a string type argument, datediff() uses TextConverter to convert a string to > date, > for '2012-01-01' we will get 2012-01-01T08:00:00.000+0800 > now, datediff() will return a number less than the real date diff > we should use TextConverter to deal with date input too. > reproduce: > {code:java} > select datediff(cast('2017-12-18' as date), '2012-01-01'); --2177 > select datediff('2017-12-18', '2012-01-01'); --2178 > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029)
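To make the arithmetic concrete, a self-contained sketch (plain java.time code, not the UDF internals) of how the 8-hour gap between the two converted values truncates the day count from 2178 to 2177:
{code:java}
import java.time.Duration;
import java.time.ZoneId;
import java.time.ZonedDateTime;

public class DateDiffTruncationDemo {
  public static void main(String[] args) {
    ZoneId tz = ZoneId.of("+08:00");
    // date argument: midnight local time, as produced by DateConverter above
    ZonedDateTime end = ZonedDateTime.of(2017, 12, 18, 0, 0, 0, 0, tz);
    // string argument: 08:00 local time, as produced by TextConverter above
    ZonedDateTime start = ZonedDateTime.of(2012, 1, 1, 8, 0, 0, 0, tz);
    long millisPerDay = 24L * 60 * 60 * 1000;
    // 2178 days minus 8 hours truncates to 2177 under integer division
    long days = Duration.between(start, end).toMillis() / millisPerDay;
    System.out.println(days); // prints 2177, one short of the calendar diff 2178
  }
}
{code}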
[jira] [Commented] (HIVE-18304) datediff() UDF returns a wrong result when dealing with a (date, string) input
[ https://issues.apache.org/jira/browse/HIVE-18304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16297688#comment-16297688 ] Xuefu Zhang commented on HIVE-18304: [~lirui] Can you comment on this issue? This might be related to the timezone stuff you worked on previously. Thanks. > datediff() UDF returns a wrong result when dealing with a (date, string) input > -- > > Key: HIVE-18304 > URL: https://issues.apache.org/jira/browse/HIVE-18304 > Project: Hive > Issue Type: Bug > Components: UDF >Reporter: Hengyu Dai >Assignee: Hengyu Dai >Priority: Minor > Attachments: 0001.patch > > > for a date type argument, datediff() uses DateConverter to convert input to a > java Date object, > for example, a '2017-12-18' will get 2017-12-18T00:00:00.000+0800 > for a string type argument, datediff() uses TextConverter to convert a string to > date, > for '2012-01-01' we will get 2012-01-01T08:00:00.000+0800 > now, datediff() will return a number less than the real date diff > we should use TextConverter to deal with date input too. > reproduce: > {code:java} > select datediff(cast('2017-12-18' as date), '2012-01-01'); --2177 > select datediff('2017-12-18', '2012-01-01'); --2178 > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HIVE-18293) Hive is failing to compact tables contained within a folder that is not owned by identity running HiveMetaStore
[ https://issues.apache.org/jira/browse/HIVE-18293?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16297682#comment-16297682 ] Johannes Alberti commented on HIVE-18293: - Thanks [~ekoifman], patch attached. > Hive is failing to compact tables contained within a folder that is not owned > by identity running HiveMetaStore > --- > > Key: HIVE-18293 > URL: https://issues.apache.org/jira/browse/HIVE-18293 > Project: Hive > Issue Type: Bug > Components: Transactions >Affects Versions: 2.1.1 > Environment: Centos6.5/Hadoop2.7.4/Java7 >Reporter: Johannes Alberti >Assignee: Johannes Alberti >Priority: Critical > Attachments: HIVE-18293.patch > > > ACID tables are not getting compacted properly due to an > AccessControlException, this only occurs for tables contained in the > non-default database. > The root cause for the issue is the re-use of an already created > DistributedFileSystem instance within a new DoAs context. I will attach a > patch for the same. > Stack below (anonymized) > {noformat} > compactor.Worker: Caught an exception in the main loop of compactor worker > [[hostname]]-34, org.apache.hadoop.security.AccessControlException: > Permission denied: user=hive, access=EXECUTE, > inode="/hive/non-default.db":nothive:othergroup:drwxrwx--- > at > org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:275) > at > org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:215) > at > org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:199) > at > org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1752) > at > org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getFileInfo(FSDirStatAndListingOp.java:100) > at > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:3820) > at > org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getFileInfo(NameNodeRpcServer.java:1012) > at > org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:855) > at > org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) > at > org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616) > at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982) > at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2217) > at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2213) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:415) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1767) > at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2211) > at sun.reflect.GeneratedConstructorAccessor83.newInstance(Unknown > Source) > at > sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) > at java.lang.reflect.Constructor.newInstance(Constructor.java:525) > at > org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106) > at > org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73) > at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:2110) > at > org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1305) > at > 
org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1301) > at > org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) > at > org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1301) > at > org.apache.hadoop.hive.ql.txn.compactor.CompactorThread$1.run(CompactorThread.java:172) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:415) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1767) > at > org.apache.hadoop.hive.ql.txn.compactor.CompactorThread.findUserToRunAs(CompactorThread.java:169) > at org.apache.hadoop.hive.ql.txn.compactor.Worker.run(Worker.java:151) > Caused by: > org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): > Permission denied: user=
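Regarding the root cause described above, a minimal sketch of the approach (a hypothetical helper built on the public Hadoop client API, not the actual patch): create a fresh {{FileSystem}} inside the {{doAs}} block with {{FileSystem.newInstance}} instead of reusing the cached instance bound to the metastore's own identity.
{code:java}
import java.security.PrivilegedExceptionAction;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.security.UserGroupInformation;

public class DoAsFileStatusDemo {
  // hypothetical helper: stat a path as the impersonated user rather than as
  // the identity running the metastore process
  static FileStatus statAs(UserGroupInformation ugi, final Configuration conf,
      final Path path) throws Exception {
    return ugi.doAs(new PrivilegedExceptionAction<FileStatus>() {
      @Override
      public FileStatus run() throws Exception {
        // FileSystem.get() would return a cached instance created under the
        // original UGI, defeating the doAs; newInstance() binds a fresh one
        // to the current context
        FileSystem fs = FileSystem.newInstance(path.toUri(), conf);
        try {
          return fs.getFileStatus(path);
        } finally {
          fs.close(); // not in the shared cache, so closing it is safe
        }
      }
    });
  }
}
{code}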
[jira] [Updated] (HIVE-18293) Hive is failing to compact tables contained within a folder that is not owned by identity running HiveMetaStore
[ https://issues.apache.org/jira/browse/HIVE-18293?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Johannes Alberti updated HIVE-18293: Attachment: HIVE-18293.patch > Hive is failing to compact tables contained within a folder that is not owned > by identity running HiveMetaStore > --- > > Key: HIVE-18293 > URL: https://issues.apache.org/jira/browse/HIVE-18293 > Project: Hive > Issue Type: Bug > Components: Transactions >Affects Versions: 2.1.1 > Environment: Centos6.5/Hadoop2.7.4/Java7 >Reporter: Johannes Alberti >Assignee: Johannes Alberti >Priority: Critical > Attachments: HIVE-18293.patch > > > ACID tables are not getting compacted properly due to an > AccessControlException, this only occurs for tables contained in the > non-default database. > The root cause for the issue is the re-use of an already created > DistributedFileSystem instance within a new DoAs context. I will attach a > patch for the same. > Stack below (anonymized) > {noformat} > compactor.Worker: Caught an exception in the main loop of compactor worker > [[hostname]]-34, org.apache.hadoop.security.AccessControlException: > Permission denied: user=hive, access=EXECUTE, > inode="/hive/non-default.db":nothive:othergroup:drwxrwx--- > at > org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:275) > at > org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:215) > at > org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:199) > at > org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1752) > at > org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getFileInfo(FSDirStatAndListingOp.java:100) > at > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:3820) > at > org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getFileInfo(NameNodeRpcServer.java:1012) > at > org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:855) > at > org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) > at > org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616) > at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982) > at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2217) > at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2213) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:415) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1767) > at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2211) > at sun.reflect.GeneratedConstructorAccessor83.newInstance(Unknown > Source) > at > sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) > at java.lang.reflect.Constructor.newInstance(Constructor.java:525) > at > org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106) > at > org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73) > at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:2110) > at > org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1305) > at > 
org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1301) > at > org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) > at > org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1301) > at > org.apache.hadoop.hive.ql.txn.compactor.CompactorThread$1.run(CompactorThread.java:172) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:415) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1767) > at > org.apache.hadoop.hive.ql.txn.compactor.CompactorThread.findUserToRunAs(CompactorThread.java:169) > at org.apache.hadoop.hive.ql.txn.compactor.Worker.run(Worker.java:151) > Caused by: > org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): > Permission denied: user=hive, access=EXECUTE, > inode="/hive/non-default.db":nothive
[jira] [Comment Edited] (HIVE-18304) datediff() UDF returns a wrong result when dealing with a (date, string) input
[ https://issues.apache.org/jira/browse/HIVE-18304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16297676#comment-16297676 ] Gopal V edited comment on HIVE-18304 at 12/20/17 12:14 AM: --- [~hengyu.dai]: please post the timezone(s) in which this gives an error? The reason Timestamp and Date disagree is often due to timestamp adjustments. {code} for '2012-01-01' we will get 2012-01-01T08:00:00.000+0800 {code} Looks like it does +8 for one? was (Author: gopalv): [~hengyu.dai]: please post the timezone in which this gives an error? The reason Timestamp and Date disagrees is often due to timestamp adjustments. > datediff() UDF returns a wrong result when dealing with a (date, string) input > -- > > Key: HIVE-18304 > URL: https://issues.apache.org/jira/browse/HIVE-18304 > Project: Hive > Issue Type: Bug > Components: UDF >Reporter: Hengyu Dai >Assignee: Hengyu Dai >Priority: Minor > Attachments: 0001.patch > > > for a date type argument, datediff() uses DateConverter to convert input to a > java Date object, > for example, a '2017-12-18' will get 2017-12-18T00:00:00.000+0800 > for a string type argument, datediff() uses TextConverter to convert a string to > date, > for '2012-01-01' we will get 2012-01-01T08:00:00.000+0800 > now, datediff() will return a number less than the real date diff > we should use TextConverter to deal with date input too. > reproduce: > {code:java} > select datediff(cast('2017-12-18' as date), '2012-01-01'); --2177 > select datediff('2017-12-18', '2012-01-01'); --2178 > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HIVE-18304) datediff() UDF returns a wrong result when dealing with a (date, string) input
[ https://issues.apache.org/jira/browse/HIVE-18304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16297676#comment-16297676 ] Gopal V commented on HIVE-18304: [~hengyu.dai]: please post the timezone in which this gives an error? The reason Timestamp and Date disagree is often due to timestamp adjustments. > datediff() UDF returns a wrong result when dealing with a (date, string) input > -- > > Key: HIVE-18304 > URL: https://issues.apache.org/jira/browse/HIVE-18304 > Project: Hive > Issue Type: Bug > Components: UDF >Reporter: Hengyu Dai >Assignee: Hengyu Dai >Priority: Minor > Attachments: 0001.patch > > > for a date type argument, datediff() uses DateConverter to convert input to a > java Date object, > for example, a '2017-12-18' will get 2017-12-18T00:00:00.000+0800 > for a string type argument, datediff() uses TextConverter to convert a string to > date, > for '2012-01-01' we will get 2012-01-01T08:00:00.000+0800 > now, datediff() will return a number less than the real date diff > we should use TextConverter to deal with date input too. > reproduce: > {code:java} > select datediff(cast('2017-12-18' as date), '2012-01-01'); --2177 > select datediff('2017-12-18', '2012-01-01'); --2178 > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HIVE-18293) Hive is failing to compact tables contained within a folder that is not owned by identity running HiveMetaStore
[ https://issues.apache.org/jira/browse/HIVE-18293?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Johannes Alberti updated HIVE-18293: Attachment: (was: 0001-create-a-new-FileSystem-instance-to-fix-ugi-context.patch) > Hive is failing to compact tables contained within a folder that is not owned > by identity running HiveMetaStore > --- > > Key: HIVE-18293 > URL: https://issues.apache.org/jira/browse/HIVE-18293 > Project: Hive > Issue Type: Bug > Components: Transactions >Affects Versions: 2.1.1 > Environment: Centos6.5/Hadoop2.7.4/Java7 >Reporter: Johannes Alberti >Assignee: Johannes Alberti >Priority: Critical > > ACID tables are not getting compacted properly due to an > AccessControlException, this only occurs for tables contained in the > non-default database. > The root cause for the issue is the re-use of an already created > DistributedFileSystem instance within a new DoAs context. I will attach a > patch for the same. > Stack below (anonymized) > {noformat} > compactor.Worker: Caught an exception in the main loop of compactor worker > [[hostname]]-34, org.apache.hadoop.security.AccessControlException: > Permission denied: user=hive, access=EXECUTE, > inode="/hive/non-default.db":nothive:othergroup:drwxrwx--- > at > org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:275) > at > org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:215) > at > org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:199) > at > org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1752) > at > org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getFileInfo(FSDirStatAndListingOp.java:100) > at > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:3820) > at > org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getFileInfo(NameNodeRpcServer.java:1012) > at > org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:855) > at > org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) > at > org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616) > at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982) > at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2217) > at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2213) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:415) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1767) > at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2211) > at sun.reflect.GeneratedConstructorAccessor83.newInstance(Unknown > Source) > at > sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) > at java.lang.reflect.Constructor.newInstance(Constructor.java:525) > at > org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106) > at > org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73) > at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:2110) > at > org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1305) > at > 
org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1301) > at > org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) > at > org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1301) > at > org.apache.hadoop.hive.ql.txn.compactor.CompactorThread$1.run(CompactorThread.java:172) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:415) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1767) > at > org.apache.hadoop.hive.ql.txn.compactor.CompactorThread.findUserToRunAs(CompactorThread.java:169) > at org.apache.hadoop.hive.ql.txn.compactor.Worker.run(Worker.java:151) > Caused by: > org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): > Permission denied: user=hive, access=EXECUTE, > inode="/hive/non-def
[jira] [Updated] (HIVE-18293) Hive is failing to compact tables contained within a folder that is not owned by identity running HiveMetaStore
[ https://issues.apache.org/jira/browse/HIVE-18293?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Johannes Alberti updated HIVE-18293: Attachment: 0001-create-a-new-FileSystem-instance-to-fix-ugi-context.patch > Hive is failing to compact tables contained within a folder that is not owned > by identity running HiveMetaStore > --- > > Key: HIVE-18293 > URL: https://issues.apache.org/jira/browse/HIVE-18293 > Project: Hive > Issue Type: Bug > Components: Transactions >Affects Versions: 2.1.1 > Environment: Centos6.5/Hadoop2.7.4/Java7 >Reporter: Johannes Alberti >Assignee: Johannes Alberti >Priority: Critical > Attachments: > 0001-create-a-new-FileSystem-instance-to-fix-ugi-context.patch > > > ACID tables are not getting compacted properly due to an > AccessControlException, this only occurs for tables contained in the > non-default database. > The root cause for the issue is the re-use of an already created > DistributedFileSystem instance within a new DoAs context. I will attach a > patch for the same. > Stack below (anonymized) > {noformat} > compactor.Worker: Caught an exception in the main loop of compactor worker > [[hostname]]-34, org.apache.hadoop.security.AccessControlException: > Permission denied: user=hive, access=EXECUTE, > inode="/hive/non-default.db":nothive:othergroup:drwxrwx--- > at > org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:275) > at > org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:215) > at > org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:199) > at > org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1752) > at > org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getFileInfo(FSDirStatAndListingOp.java:100) > at > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:3820) > at > org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getFileInfo(NameNodeRpcServer.java:1012) > at > org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:855) > at > org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) > at > org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616) > at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982) > at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2217) > at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2213) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:415) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1767) > at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2211) > at sun.reflect.GeneratedConstructorAccessor83.newInstance(Unknown > Source) > at > sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) > at java.lang.reflect.Constructor.newInstance(Constructor.java:525) > at > org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106) > at > org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73) > at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:2110) > at > 
org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1305) > at > org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1301) > at > org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) > at > org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1301) > at > org.apache.hadoop.hive.ql.txn.compactor.CompactorThread$1.run(CompactorThread.java:172) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:415) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1767) > at > org.apache.hadoop.hive.ql.txn.compactor.CompactorThread.findUserToRunAs(CompactorThread.java:169) > at org.apache.hadoop.hive.ql.txn.compactor.Worker.run(Worker.java:151) > Caused by: > org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlExcep
[jira] [Commented] (HIVE-18294) add switch to make acid table the default
[ https://issues.apache.org/jira/browse/HIVE-18294?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16297641#comment-16297641 ] Hive QA commented on HIVE-18294: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12902910/HIVE-18294.04.patch {color:red}ERROR:{color} -1 due to no test(s) being added or modified. {color:red}ERROR:{color} -1 due to 16 failed/errored test(s), 11528 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_join25] (batchId=72) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppd_join5] (batchId=35) org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver[llap_smb] (batchId=151) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucketsortoptimize_insert_2] (batchId=152) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[hybridgrace_hashjoin_2] (batchId=157) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata] (batchId=165) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid] (batchId=169) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid_fast] (batchId=160) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] (batchId=160) org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[authorization_part] (batchId=93) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[auto_sortmerge_join_10] (batchId=138) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[bucketsortoptimize_insert_7] (batchId=128) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[ppd_join5] (batchId=120) org.apache.hadoop.hive.cli.TestSparkPerfCliDriver.testCliDriver[query39] (batchId=248) org.apache.hadoop.hive.cli.control.TestDanglingQOuts.checkDanglingQOut (batchId=209) org.apache.hadoop.hive.ql.parse.TestReplicationScenarios.testConstraints (batchId=226) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/8327/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/8327/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-8327/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 16 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12902910 - PreCommit-HIVE-Build > add switch to make acid table the default > - > > Key: HIVE-18294 > URL: https://issues.apache.org/jira/browse/HIVE-18294 > Project: Hive > Issue Type: Sub-task > Components: Transactions >Affects Versions: 3.0.0 >Reporter: Eugene Koifman >Assignee: Eugene Koifman > Attachments: HIVE-18294.01.patch, HIVE-18294.03.patch, > HIVE-18294.04.patch > > > it would be convenient for testing to have a switch that enables the behavior > where all suitable table tables (currently ORC + not sorted) are > automatically reacted with transactional=true. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HIVE-18317) Improve error messages in TransactionValidationListerner
[ https://issues.apache.org/jira/browse/HIVE-18317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16297565#comment-16297565 ] Jason Dere commented on HIVE-18317: --- +1 > Improve error messages in TransactionValidationListerner > > > Key: HIVE-18317 > URL: https://issues.apache.org/jira/browse/HIVE-18317 > Project: Hive > Issue Type: Sub-task > Components: Metastore, Transactions >Reporter: Eugene Koifman >Assignee: Eugene Koifman > Attachments: HIVE-18317.01.patch, HIVE-18317.02.patch > > -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HIVE-18294) add switch to make acid table the default
[ https://issues.apache.org/jira/browse/HIVE-18294?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16297551#comment-16297551 ] Hive QA commented on HIVE-18294: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Findbugs executables are not available. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 27s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 5m 50s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 51s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 12s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 1s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 21s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 9s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 46s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 46s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 16s{color} | {color:red} standalone-metastore: The patch generated 1 new + 209 unchanged - 0 fixed = 210 total (was 209) {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 39s{color} | {color:red} ql: The patch generated 2 new + 1089 unchanged - 0 fixed = 1091 total (was 1089) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 56s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 12s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 20m 26s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus/dev-support/hive-personality.sh | | git revision | master / 00212e0 | | Default Java | 1.8.0_111 | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-8327/yetus/diff-checkstyle-standalone-metastore.txt | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-8327/yetus/diff-checkstyle-ql.txt | | modules | C: common standalone-metastore ql U: . | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-8327/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. > add switch to make acid table the default > - > > Key: HIVE-18294 > URL: https://issues.apache.org/jira/browse/HIVE-18294 > Project: Hive > Issue Type: Sub-task > Components: Transactions >Affects Versions: 3.0.0 >Reporter: Eugene Koifman >Assignee: Eugene Koifman > Attachments: HIVE-18294.01.patch, HIVE-18294.03.patch, > HIVE-18294.04.patch > > > it would be convenient for testing to have a switch that enables the behavior > where all suitable table tables (currently ORC + not sorted) are > automatically reacted with transactional=true. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HIVE-18221) test acid default
[ https://issues.apache.org/jira/browse/HIVE-18221?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16297543#comment-16297543 ] Eugene Koifman commented on HIVE-18221: --- DbTxnManager
{noformat}
@Override
void setHiveConf(HiveConf conf) {
  super.setHiveConf(conf);
  if (!conf.getBoolVar(HiveConf.ConfVars.HIVE_SUPPORT_CONCURRENCY)) {
    //todo: hack for now - many (esp hcat) tests explicitly set concurrency to false so then
    //since DbTxnManager is now default, this throws...
    //throw new RuntimeException(ErrorMsg.DBTXNMGR_REQUIRES_CONCURRENCY.getMsg());
  }
}
{noformat}
> test acid default > - > > Key: HIVE-18221 > URL: https://issues.apache.org/jira/browse/HIVE-18221 > Project: Hive > Issue Type: Test > Components: Transactions >Affects Versions: 3.0.0 >Reporter: Eugene Koifman >Assignee: Eugene Koifman > Attachments: HIVE-18221.01.patch, HIVE-18221.02.patch, > HIVE-18221.03.patch, HIVE-18221.04.patch, HIVE-18221.07.patch, > HIVE-18221.08.patch, HIVE-18221.09.patch, HIVE-18221.10.patch, > HIVE-18221.11.patch, HIVE-18221.12.patch, HIVE-18221.13.patch, > HIVE-18221.14.patch, HIVE-18221.16.patch > > -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HIVE-18317) Improve error messages in TransactionValidationListerner
[ https://issues.apache.org/jira/browse/HIVE-18317?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eugene Koifman updated HIVE-18317: -- Attachment: HIVE-18317.02.patch > Improve error messages in TransactionValidationListerner > > > Key: HIVE-18317 > URL: https://issues.apache.org/jira/browse/HIVE-18317 > Project: Hive > Issue Type: Sub-task > Components: Metastore, Transactions >Reporter: Eugene Koifman >Assignee: Eugene Koifman > Attachments: HIVE-18317.01.patch, HIVE-18317.02.patch > > -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HIVE-18317) Improve error messages in TransactionValidationListerner
[ https://issues.apache.org/jira/browse/HIVE-18317?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eugene Koifman updated HIVE-18317: -- Status: Patch Available (was: Open) > Improve error messages in TransactionValidationListerner > > > Key: HIVE-18317 > URL: https://issues.apache.org/jira/browse/HIVE-18317 > Project: Hive > Issue Type: Sub-task > Components: Metastore, Transactions >Reporter: Eugene Koifman >Assignee: Eugene Koifman > Attachments: HIVE-18317.01.patch, HIVE-18317.02.patch > > -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HIVE-18317) Improve error messages in TransactionValidationListerner
[ https://issues.apache.org/jira/browse/HIVE-18317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16297540#comment-16297540 ] Eugene Koifman commented on HIVE-18317: --- [~jdere] could you review please > Improve error messages in TransactionValidationListerner > > > Key: HIVE-18317 > URL: https://issues.apache.org/jira/browse/HIVE-18317 > Project: Hive > Issue Type: Sub-task > Components: Metastore, Transactions >Reporter: Eugene Koifman >Assignee: Eugene Koifman > Attachments: HIVE-18317.01.patch, HIVE-18317.02.patch > > -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HIVE-18317) Improve error messages in TransactionValidationListerner
[ https://issues.apache.org/jira/browse/HIVE-18317?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eugene Koifman updated HIVE-18317: -- Status: Open (was: Patch Available) > Improve error messages in TransactionValidationListerner > > > Key: HIVE-18317 > URL: https://issues.apache.org/jira/browse/HIVE-18317 > Project: Hive > Issue Type: Sub-task > Components: Metastore, Transactions >Reporter: Eugene Koifman >Assignee: Eugene Koifman > Attachments: HIVE-18317.01.patch > > -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HIVE-18311) Enable smb_mapjoin_8.q for cli driver
[ https://issues.apache.org/jira/browse/HIVE-18311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16297532#comment-16297532 ] Hive QA commented on HIVE-18311: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12902902/HIVE-18311.1.patch {color:red}ERROR:{color} -1 due to no test(s) being added or modified. {color:red}ERROR:{color} -1 due to 14 failed/errored test(s), 11529 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppd_join5] (batchId=35) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucketsortoptimize_insert_2] (batchId=152) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[hybridgrace_hashjoin_2] (batchId=157) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata] (batchId=165) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid] (batchId=169) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid_fast] (batchId=160) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] (batchId=160) org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[bucketizedhiveinputformat] (batchId=178) org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[authorization_part] (batchId=93) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[auto_sortmerge_join_10] (batchId=138) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[bucketsortoptimize_insert_7] (batchId=128) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[ppd_join5] (batchId=120) org.apache.hadoop.hive.cli.control.TestDanglingQOuts.checkDanglingQOut (batchId=209) org.apache.hadoop.hive.ql.parse.TestReplicationScenarios.testConstraints (batchId=226) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/8326/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/8326/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-8326/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 14 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12902902 - PreCommit-HIVE-Build > Enable smb_mapjoin_8.q for cli driver > - > > Key: HIVE-18311 > URL: https://issues.apache.org/jira/browse/HIVE-18311 > Project: Hive > Issue Type: Bug >Reporter: Janaki Lahorani >Assignee: Janaki Lahorani > Fix For: 3.0.0 > > Attachments: HIVE-18311.1.patch > > -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HIVE-18031) Support replication for Alter Database operation.
[ https://issues.apache.org/jira/browse/HIVE-18031?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sankar Hariappan updated HIVE-18031: Status: Patch Available (was: Open) > Support replication for Alter Database operation. > - > > Key: HIVE-18031 > URL: https://issues.apache.org/jira/browse/HIVE-18031 > Project: Hive > Issue Type: Sub-task > Components: HiveServer2, repl >Affects Versions: 3.0.0 >Reporter: Sankar Hariappan >Assignee: Sankar Hariappan > Labels: DR, pull-request-available, replication > Fix For: 3.0.0 > > Attachments: HIVE-18031.01.patch > > > Currently alter database operations to alter the database properties or > description are not generating any events due to which it is not getting > replicated. > Need to add an event for this. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HIVE-18031) Support replication for Alter Database operation.
[ https://issues.apache.org/jira/browse/HIVE-18031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16297515#comment-16297515 ] ASF GitHub Bot commented on HIVE-18031: --- GitHub user sankarh opened a pull request: https://github.com/apache/hive/pull/280 HIVE-18031: Support replication for Alter Database operation You can merge this pull request into a Git repository by running: $ git pull https://github.com/sankarh/hive HIVE-18031 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/hive/pull/280.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #280 commit d6d0902047474a03e922a2897f70c2d22001ba12 Author: Sankar Hariappan Date: 2017-11-22T09:03:35Z HIVE-18031: Support replication for Alter Database operation > Support replication for Alter Database operation. > - > > Key: HIVE-18031 > URL: https://issues.apache.org/jira/browse/HIVE-18031 > Project: Hive > Issue Type: Sub-task > Components: HiveServer2, repl >Affects Versions: 3.0.0 >Reporter: Sankar Hariappan >Assignee: Sankar Hariappan > Labels: DR, pull-request-available, replication > Fix For: 3.0.0 > > Attachments: HIVE-18031.01.patch > > > Currently alter database operations to alter the database properties or > description are not generating any events due to which it is not getting > replicated. > Need to add an event for this. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HIVE-18031) Support replication for Alter Database operation.
[ https://issues.apache.org/jira/browse/HIVE-18031?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated HIVE-18031: -- Labels: DR pull-request-available replication (was: DR replication) > Support replication for Alter Database operation. > - > > Key: HIVE-18031 > URL: https://issues.apache.org/jira/browse/HIVE-18031 > Project: Hive > Issue Type: Sub-task > Components: HiveServer2, repl >Affects Versions: 3.0.0 >Reporter: Sankar Hariappan >Assignee: Sankar Hariappan > Labels: DR, pull-request-available, replication > Fix For: 3.0.0 > > Attachments: HIVE-18031.01.patch > > > Currently, alter database operations that change the database properties or > description do not generate any events, so the changes are not replicated. > Need to add an event for this. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HIVE-18031) Support replication for Alter Database operation.
[ https://issues.apache.org/jira/browse/HIVE-18031?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sankar Hariappan updated HIVE-18031: Attachment: HIVE-18031.01.patch Attached 01.patch - Support ALTER DATABASE event for replicating DB properties and owner info (USER|ROLE). - SET LOCATION is not replicated, but it still generates an event, which is a no-op at the target. - Changed bootstrap load to replicate owner info from the source as well. > Support replication for Alter Database operation. > - > > Key: HIVE-18031 > URL: https://issues.apache.org/jira/browse/HIVE-18031 > Project: Hive > Issue Type: Sub-task > Components: HiveServer2, repl >Affects Versions: 3.0.0 >Reporter: Sankar Hariappan >Assignee: Sankar Hariappan > Labels: DR, replication > Fix For: 3.0.0 > > Attachments: HIVE-18031.01.patch > > > Currently, alter database operations that change the database properties or > description do not generate any events, so the changes are not replicated. > Need to add an event for this. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
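For context, the operations covered by the new event look like the following minimal HiveQL sketch. The database name, property key, and path are hypothetical; the replication behavior noted in the comments is the behavior described in the patch summary above:

{code:sql}
-- Replicated: property and owner changes now generate an ALTER_DATABASE event
ALTER DATABASE repl_db SET DBPROPERTIES ('repl.source'='primary');
ALTER DATABASE repl_db SET OWNER USER new_owner;

-- Also generates an event, but per the patch it is applied as a no-op on the target
ALTER DATABASE repl_db SET LOCATION 'hdfs://nn:8020/warehouse/repl_db.db';
{code}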
[jira] [Work stopped] (HIVE-18031) Support replication for Alter Database operation.
[ https://issues.apache.org/jira/browse/HIVE-18031?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Work on HIVE-18031 stopped by Sankar Hariappan. --- > Support replication for Alter Database operation. > - > > Key: HIVE-18031 > URL: https://issues.apache.org/jira/browse/HIVE-18031 > Project: Hive > Issue Type: Sub-task > Components: HiveServer2, repl >Affects Versions: 3.0.0 >Reporter: Sankar Hariappan >Assignee: Sankar Hariappan > Labels: DR, replication > Fix For: 3.0.0 > > > Currently, alter database operations that change the database properties or > description do not generate any events, so the changes are not replicated. > Need to add an event for this. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HIVE-18318) LLAP record reader should check interrupt even when not blocking
[ https://issues.apache.org/jira/browse/HIVE-18318?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-18318: Attachment: HIVE-18318.patch > LLAP record reader should check interrupt even when not blocking > > > Key: HIVE-18318 > URL: https://issues.apache.org/jira/browse/HIVE-18318 > Project: Hive > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin > Attachments: HIVE-18318.patch > > > Hive operators don't check interrupts, and may not perform any blocking operations. > The LLAP record reader only blocks if IO is slower than processing; so, if IO is > fast enough, it will never block (at least not interruptibly; the sync > w/IO on the object does not check interrupts), and thus never catch > interrupts. > So, the task would be impossible to terminate. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
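The fix implied by the summary — polling the interrupt flag on the hot path rather than relying on an interruptible blocking wait — can be sketched in isolation. The following is hypothetical illustration code, not the actual LlapRecordReader:

{code:java}
import java.io.IOException;
import java.util.Iterator;

// Hypothetical sketch: a consumer loop that never blocks because IO stays
// ahead of processing, so it must poll the interrupt flag explicitly or a
// Thread.interrupt() on the task thread is never observed.
public class InterruptAwareReader {
  public void drain(Iterator<Object> batches) throws IOException {
    while (batches.hasNext()) {
      if (Thread.currentThread().isInterrupted()) {
        throw new IOException("Task interrupted; terminating read loop");
      }
      process(batches.next()); // returns immediately when data is already buffered
    }
  }

  private void process(Object batch) {
    // per-batch work would go here
  }
}
{code}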
[jira] [Updated] (HIVE-18318) LLAP record reader should check interrupt even when not blocking
[ https://issues.apache.org/jira/browse/HIVE-18318?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-18318: Attachment: (was: HIVE-18318.patch) > LLAP record reader should check interrupt even when not blocking > > > Key: HIVE-18318 > URL: https://issues.apache.org/jira/browse/HIVE-18318 > Project: Hive > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin > Attachments: HIVE-18318.patch > > > Hive operators don't check interrupts, and may not perform any blocking operations. > The LLAP record reader only blocks if IO is slower than processing; so, if IO is > fast enough, it will never block (at least not interruptibly; the sync > w/IO on the object does not check interrupts), and thus never catch > interrupts. > So, the task would be impossible to terminate. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HIVE-18318) LLAP record reader should check interrupt even when not blocking
[ https://issues.apache.org/jira/browse/HIVE-18318?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-18318: Status: Patch Available (was: Open) > LLAP record reader should check interrupt even when not blocking > > > Key: HIVE-18318 > URL: https://issues.apache.org/jira/browse/HIVE-18318 > Project: Hive > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin > Attachments: HIVE-18318.patch > > > Hive operators don't check interrupts, and may not perform any blocking operations. > The LLAP record reader only blocks if IO is slower than processing; so, if IO is > fast enough, it will never block (at least not interruptibly; the sync > w/IO on the object does not check interrupts), and thus never catch > interrupts. > So, the task would be impossible to terminate. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HIVE-18318) LLAP record reader should check interrupt even when not blocking
[ https://issues.apache.org/jira/browse/HIVE-18318?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-18318: Attachment: HIVE-18318.patch [~gopalv] [~rajesh.balamohan] can you please take a look? It's a small patch. I'm also adding some minor logging for an unrelated issue. > LLAP record reader should check interrupt even when not blocking > > > Key: HIVE-18318 > URL: https://issues.apache.org/jira/browse/HIVE-18318 > Project: Hive > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin > Attachments: HIVE-18318.patch > > > Hive operators don't check interrupts, and may not perform any blocking operations. > The LLAP record reader only blocks if IO is slower than processing; so, if IO is > fast enough, it will never block (at least not interruptibly; the sync > w/IO on the object does not check interrupts), and thus never catch > interrupts. > So, the task would be impossible to terminate. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HIVE-18318) LLAP record reader should check interrupt even when not blocking
[ https://issues.apache.org/jira/browse/HIVE-18318?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin updated HIVE-18318: Summary: LLAP record reader should check interrupt even when not blocking (was: LLAP record reader should check interrupt) > LLAP record reader should check interrupt even when not blocking > > > Key: HIVE-18318 > URL: https://issues.apache.org/jira/browse/HIVE-18318 > Project: Hive > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin > > Hive operators don't check interrupts, and may not perform any blocking operations. > The LLAP record reader only blocks if IO is slower than processing; so, if IO is > fast enough, it will never block (at least not interruptibly; the sync > w/IO on the object does not check interrupts), and thus never catch > interrupts. > So, the task would be impossible to terminate. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Assigned] (HIVE-18318) LLAP record reader should check interrupt
[ https://issues.apache.org/jira/browse/HIVE-18318?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergey Shelukhin reassigned HIVE-18318: --- > LLAP record reader should check interrupt > - > > Key: HIVE-18318 > URL: https://issues.apache.org/jira/browse/HIVE-18318 > Project: Hive > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin > > Hive operators don't check interrupts, and may not perform any blocking operations. > The LLAP record reader only blocks if IO is slower than processing; so, if IO is > fast enough, it will never block (at least not interruptibly; the sync > w/IO on the object does not check interrupts), and thus never catch > interrupts. > So, the task would be impossible to terminate. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HIVE-18317) Improve error messages in TransactionValidationListerner
[ https://issues.apache.org/jira/browse/HIVE-18317?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eugene Koifman updated HIVE-18317: -- Status: Patch Available (was: Open) > Improve error messages in TransactionValidationListerner > > > Key: HIVE-18317 > URL: https://issues.apache.org/jira/browse/HIVE-18317 > Project: Hive > Issue Type: Sub-task > Components: Metastore, Transactions >Reporter: Eugene Koifman >Assignee: Eugene Koifman > Attachments: HIVE-18317.01.patch > > -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HIVE-18317) Improve error messages in TransactionValidationListerner
[ https://issues.apache.org/jira/browse/HIVE-18317?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eugene Koifman updated HIVE-18317: -- Attachment: HIVE-18317.01.patch > Improve error messages in TransactionValidationListerner > > > Key: HIVE-18317 > URL: https://issues.apache.org/jira/browse/HIVE-18317 > Project: Hive > Issue Type: Sub-task > Components: Metastore, Transactions >Reporter: Eugene Koifman >Assignee: Eugene Koifman > Attachments: HIVE-18317.01.patch > > -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Assigned] (HIVE-18317) Improve error messages in TransactionValidationListerner
[ https://issues.apache.org/jira/browse/HIVE-18317?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eugene Koifman reassigned HIVE-18317: - > Improve error messages in TransactionValidationListerner > > > Key: HIVE-18317 > URL: https://issues.apache.org/jira/browse/HIVE-18317 > Project: Hive > Issue Type: Sub-task > Components: Metastore, Transactions >Reporter: Eugene Koifman >Assignee: Eugene Koifman > -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HIVE-18306) Fix spark smb tests
[ https://issues.apache.org/jira/browse/HIVE-18306?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Deepak Jaiswal updated HIVE-18306: -- Attachment: HIVE-18306.2.patch Forgot to update the patch to remove the spark result file. > Fix spark smb tests > --- > > Key: HIVE-18306 > URL: https://issues.apache.org/jira/browse/HIVE-18306 > Project: Hive > Issue Type: Bug >Reporter: Zoltan Haindrich >Assignee: Deepak Jaiswal > Attachments: HIVE-18306.1.patch, HIVE-18306.2.patch > > > Seems to me that > {{TestSparkCliDriver#testCliDriver\[auto_sortmerge_join_10\]}} and > {{TestSparkCliDriver#testCliDriver\[bucketsortoptimize_insert_7\]}} are > failing since HIVE-18208 went in. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HIVE-18265) desc formatted/extended or show create table can not fully display the result when field or table comment contains tab character
[ https://issues.apache.org/jira/browse/HIVE-18265?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16297441#comment-16297441 ] Andrew Sherman commented on HIVE-18265: --- Change looks good to me. I am not a Hive committer. Are there any committers who can take a look at this? > desc formatted/extended or show create table can not fully display the result > when field or table comment contains tab character > > > Key: HIVE-18265 > URL: https://issues.apache.org/jira/browse/HIVE-18265 > Project: Hive > Issue Type: Bug > Components: CLI >Affects Versions: 3.0.0 >Reporter: Hui Huang >Assignee: Hui Huang > Fix For: 3.0.0 > > Attachments: HIVE-18265.1.patch, HIVE-18265.patch > > > Here are some examples: > create table test_comment (id1 string comment 'full_\tname1', id2 string > comment 'full_\tname2', id3 string comment 'full_\tname3') stored as textfile; > When we execute `show create table test_comment`, we can see the following > content in the console, > {quote} > createtab_stmt > CREATE TABLE `test_comment`( > `id1` string COMMENT 'full_ > `id2` string COMMENT 'full_ > `id3` string COMMENT 'full_ > ROW FORMAT SERDE > 'org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe' > STORED AS INPUTFORMAT > 'org.apache.hadoop.mapred.TextInputFormat' > OUTPUTFORMAT > 'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat' > LOCATION > 'hdfs://xxx/user/huanghui/warehouse/huanghuitest.db/test_comment' > TBLPROPERTIES ( > 'transient_lastDdlTime'='1513095570') > {quote} > The output of `desc formatted test_comment` is similar, > {quote} > col_name data_type comment > \# col_name data_type comment > id1 string full_ > id2 string full_ > id3 string full_ > \# Detailed Table Information > (ignore)... > {quote} > When we execute `desc extended test_comment`, the problem is more obvious, > {quote} > col_name data_type comment > id1 string full_ > id2 string full_ > id3 string full_ > Detailed Table InformationTable(tableName:test_comment, > dbName:huanghuitest, owner:huanghui, createTime:1513095570, lastAccessTime:0, > retention:0, sd:StorageDescriptor(cols:[FieldSchema(name:id1, type:string, > comment:full_name1), FieldSchema(name:id2, type:string, comment:full_ > {quote} > *the rest of the content is lost*. > The content is not really lost; it just cannot be displayed normally, because > Hive stores the result in a LazyStruct, and LazyStruct uses '\t' as the field > separator: > {code:java} > // LazyStruct.java#parse() > // Go through all bytes in the byte[] > while (fieldByteEnd <= structByteEnd) { > if (fieldByteEnd == structByteEnd || bytes[fieldByteEnd] == separator) { > // Reached the end of a field? > if (lastColumnTakesRest && fieldId == fields.length - 1) { > fieldByteEnd = structByteEnd; > } > startPosition[fieldId] = fieldByteBegin; > fieldId++; > if (fieldId == fields.length || fieldByteEnd == structByteEnd) { > // All fields have been parsed, or bytes have been parsed. > // We need to set the startPosition of fields.length to ensure we > // can use the same formula to calculate the length of each field. > // For missing fields, their starting positions will all be the > same, > // which will make their lengths to be -1 and uncheckedGetField will > // return these fields as NULLs. > for (int i = fieldId; i <= fields.length; i++) { > startPosition[i] = fieldByteEnd + 1; > } > break; > } > fieldByteBegin = fieldByteEnd + 1; > fieldByteEnd++; > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029)
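The failure mode generalizes beyond Hive: any code that joins fields with '\t' and later splits on '\t' will truncate a field that itself contains a tab. A self-contained illustration (hypothetical demo code, not from the Hive source tree):

{code:java}
public class TabSeparatorDemo {
  public static void main(String[] args) {
    String comment = "full_\tname1"; // a column comment containing a tab
    String row = "id1" + "\t" + "string" + "\t" + comment;

    // Splitting on the separator cuts the comment at its embedded tab,
    // which mirrors what LazyStruct.parse() does to the display row.
    String[] fields = row.split("\t");
    System.out.println(fields[2]); // prints "full_" -- the rest appears lost
  }
}
{code}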
[jira] [Updated] (HIVE-18306) Fix spark smb tests
[ https://issues.apache.org/jira/browse/HIVE-18306?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Deepak Jaiswal updated HIVE-18306: -- Attachment: HIVE-18306.1.patch Updated the result for bucketsortoptimize_insert_7. Removed the spark result file for auto_sortmerge_join_10, as the test contains a union, which is not supported in Spark SMB. > Fix spark smb tests > --- > > Key: HIVE-18306 > URL: https://issues.apache.org/jira/browse/HIVE-18306 > Project: Hive > Issue Type: Bug >Reporter: Zoltan Haindrich >Assignee: Deepak Jaiswal > Attachments: HIVE-18306.1.patch > > > Seems to me that > {{TestSparkCliDriver#testCliDriver\[auto_sortmerge_join_10\]}} and > {{TestSparkCliDriver#testCliDriver\[bucketsortoptimize_insert_7\]}} are > failing since HIVE-18208 went in. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HIVE-18311) Enable smb_mapjoin_8.q for cli driver
[ https://issues.apache.org/jira/browse/HIVE-18311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16297438#comment-16297438 ] Hive QA commented on HIVE-18311: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 26s{color} | {color:blue} Maven dependency ordering for branch {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 18s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 9s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 2m 13s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus/dev-support/hive-personality.sh | | git revision | master / 00212e0 | | modules | C: ql itests U: . | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-8326/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. > Enable smb_mapjoin_8.q for cli driver > - > > Key: HIVE-18311 > URL: https://issues.apache.org/jira/browse/HIVE-18311 > Project: Hive > Issue Type: Bug >Reporter: Janaki Lahorani >Assignee: Janaki Lahorani > Fix For: 3.0.0 > > Attachments: HIVE-18311.1.patch > > -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HIVE-18306) Fix spark smb tests
[ https://issues.apache.org/jira/browse/HIVE-18306?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Deepak Jaiswal updated HIVE-18306: -- Status: Open (was: Patch Available) > Fix spark smb tests > --- > > Key: HIVE-18306 > URL: https://issues.apache.org/jira/browse/HIVE-18306 > Project: Hive > Issue Type: Bug >Reporter: Zoltan Haindrich >Assignee: Deepak Jaiswal > > Seems to me that > {{TestSparkCliDriver#testCliDriver\[auto_sortmerge_join_10\]}} and > {{TestSparkCliDriver#testCliDriver\[bucketsortoptimize_insert_7\]}} are > failing since HIVE-18208 went in. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HIVE-18306) Fix spark smb tests
[ https://issues.apache.org/jira/browse/HIVE-18306?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Deepak Jaiswal updated HIVE-18306: -- Status: Patch Available (was: Open) > Fix spark smb tests > --- > > Key: HIVE-18306 > URL: https://issues.apache.org/jira/browse/HIVE-18306 > Project: Hive > Issue Type: Bug >Reporter: Zoltan Haindrich >Assignee: Deepak Jaiswal > > Seems to me that > {{TestSparkCliDriver#testCliDriver\[auto_sortmerge_join_10\]}} and > {{TestSparkCliDriver#testCliDriver\[bucketsortoptimize_insert_7\]}} are > failing since HIVE-18208 went in. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HIVE-18248) Clean up parameters
[ https://issues.apache.org/jira/browse/HIVE-18248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16297425#comment-16297425 ] Hive QA commented on HIVE-18248: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12902898/HIVE-18248.3.patch {color:green}SUCCESS:{color} +1 due to 4 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 15 failed/errored test(s), 11532 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[fp_literal_arithmetic] (batchId=68) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mapjoin_hook] (batchId=12) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppd_join5] (batchId=35) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucketsortoptimize_insert_2] (batchId=152) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[hybridgrace_hashjoin_2] (batchId=157) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata] (batchId=165) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid] (batchId=169) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid_fast] (batchId=160) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] (batchId=160) org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[authorization_part] (batchId=93) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[auto_sortmerge_join_10] (batchId=138) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[bucketsortoptimize_insert_7] (batchId=128) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[ppd_join5] (batchId=120) org.apache.hadoop.hive.cli.control.TestDanglingQOuts.checkDanglingQOut (batchId=209) org.apache.hadoop.hive.ql.parse.TestReplicationScenarios.testConstraints (batchId=226) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/8325/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/8325/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-8325/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 15 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12902898 - PreCommit-HIVE-Build > Clean up parameters > --- > > Key: HIVE-18248 > URL: https://issues.apache.org/jira/browse/HIVE-18248 > Project: Hive > Issue Type: Bug >Reporter: Janaki Lahorani >Assignee: Janaki Lahorani > Fix For: 3.0.0 > > Attachments: HIVE-18248.1.patch, HIVE-18248.2.patch, > HIVE-18248.3.patch > > > Clean up of parameters that need not change at run time. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HIVE-18316) HiveEndPoint should only work with full acid tables
[ https://issues.apache.org/jira/browse/HIVE-18316?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eugene Koifman updated HIVE-18316: -- Attachment: HIVE-18316.01.patch > HiveEndPoint should only work with full acid tables > --- > > Key: HIVE-18316 > URL: https://issues.apache.org/jira/browse/HIVE-18316 > Project: Hive > Issue Type: Bug > Components: HCatalog, Transactions >Affects Versions: 3.0.0 >Reporter: Eugene Koifman >Assignee: Eugene Koifman > Attachments: HIVE-18316.01.patch > > > Now that we have both full acid and 1/4 acid, the check needs to be updated to > test for full acid tables. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
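A sketch of the kind of check the summary calls for, assuming the standard table parameters ('transactional'='true' marks an acid table, and 'transactional_properties'='insert_only' marks the insert-only, "1/4 acid" variant); this is illustrative code, not the attached patch:

{code:java}
import java.util.Map;

// Hypothetical sketch: a table qualifies as full acid only if it is
// transactional and not flagged as insert-only ("1/4 acid").
public class AcidTableCheck {
  public static boolean isFullAcid(Map<String, String> tableParams) {
    boolean transactional =
        "true".equalsIgnoreCase(tableParams.get("transactional"));
    boolean insertOnly =
        "insert_only".equalsIgnoreCase(tableParams.get("transactional_properties"));
    return transactional && !insertOnly;
  }
}
{code}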
[jira] [Updated] (HIVE-18316) HiveEndPoint should only work with full acid tables
[ https://issues.apache.org/jira/browse/HIVE-18316?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eugene Koifman updated HIVE-18316: -- Status: Patch Available (was: Open) > HiveEndPoint should only work with full acid tables > --- > > Key: HIVE-18316 > URL: https://issues.apache.org/jira/browse/HIVE-18316 > Project: Hive > Issue Type: Bug > Components: HCatalog, Transactions >Affects Versions: 3.0.0 >Reporter: Eugene Koifman >Assignee: Eugene Koifman > Attachments: HIVE-18316.01.patch > > > Now that we have both full acid and 1/4 acid, the check needs to be updated to > test for full acid tables. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HIVE-18306) Fix spark smb tests
[ https://issues.apache.org/jira/browse/HIVE-18306?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Deepak Jaiswal updated HIVE-18306: -- Status: Patch Available (was: In Progress) > Fix spark smb tests > --- > > Key: HIVE-18306 > URL: https://issues.apache.org/jira/browse/HIVE-18306 > Project: Hive > Issue Type: Bug >Reporter: Zoltan Haindrich >Assignee: Deepak Jaiswal > > Seems to me that > {{TestSparkCliDriver#testCliDriver\[auto_sortmerge_join_10\]}} and > {{TestSparkCliDriver#testCliDriver\[bucketsortoptimize_insert_7\]}} are > failing since HIVE-18208 went in. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HIVE-18248) Clean up parameters
[ https://issues.apache.org/jira/browse/HIVE-18248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16297402#comment-16297402 ] Hive QA commented on HIVE-18248: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Findbugs executables are not available. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 23s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 5m 36s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 13s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 38s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 6m 35s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 20s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 30s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 7s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 7s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 43s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 13s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 6m 36s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 12s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 46m 40s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile xml | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus/dev-support/hive-personality.sh | | git revision | master / 00212e0 | | Default Java | 1.8.0_111 | | modules | C: common ql . itests/hive-unit U: . | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-8325/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. 
> Clean up parameters > --- > > Key: HIVE-18248 > URL: https://issues.apache.org/jira/browse/HIVE-18248 > Project: Hive > Issue Type: Bug >Reporter: Janaki Lahorani >Assignee: Janaki Lahorani > Fix For: 3.0.0 > > Attachments: HIVE-18248.1.patch, HIVE-18248.2.patch, > HIVE-18248.3.patch > > > Clean up of parameters that need not change at run time. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HIVE-18310) Test 'vector_reduce_groupby_duplicate_cols.q' is misspelled in testconfiguration.properties
[ https://issues.apache.org/jira/browse/HIVE-18310?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Sherman updated HIVE-18310: -- Attachment: HIVE-18310.2.patch Add change to src/test/results/clientpositive/llap/vector_reduce_groupby_duplicate_cols.q.out > Test 'vector_reduce_groupby_duplicate_cols.q' is misspelled in > testconfiguration.properties > --- > > Key: HIVE-18310 > URL: https://issues.apache.org/jira/browse/HIVE-18310 > Project: Hive > Issue Type: Bug >Reporter: Andrew Sherman >Assignee: Andrew Sherman >Priority: Minor > Attachments: HIVE-18310.1.patch, HIVE-18310.2.patch > > > The new test vector_reduce_groupby_duplicate_cols.q was introduced in > [HIVE-18258] but is misspelled in testconfiguration.properties: > {noformat} > - vector_reduce_grpupby_duplicate_cols.q,\ > + vector_reduce_groupby_duplicate_cols.q,\ > {noformat} > I noticed this because TestDanglingQOuts.checkDanglingQOut failed. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HIVE-18310) Test 'vector_reduce_groupby_duplicate_cols.q' is misspelled in testconfiguration.properties
[ https://issues.apache.org/jira/browse/HIVE-18310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16297398#comment-16297398 ] Matt McCline commented on HIVE-18310: - Thanks for spotting this. The difference looks ok to me. > Test 'vector_reduce_groupby_duplicate_cols.q' is misspelled in > testconfiguration.properties > --- > > Key: HIVE-18310 > URL: https://issues.apache.org/jira/browse/HIVE-18310 > Project: Hive > Issue Type: Bug >Reporter: Andrew Sherman >Assignee: Andrew Sherman >Priority: Minor > Attachments: HIVE-18310.1.patch > > > The new test vector_reduce_groupby_duplicate_cols.q was introduced in > [HIVE-18258] but is misspelled in testconfiguration.properties: > {noformat} > - vector_reduce_grpupby_duplicate_cols.q,\ > + vector_reduce_groupby_duplicate_cols.q,\ > {noformat} > I noticed this because TestDanglingQOuts.checkDanglingQOut failed. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Assigned] (HIVE-18316) HiveEndPoint should only work with full acid tables
[ https://issues.apache.org/jira/browse/HIVE-18316?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eugene Koifman reassigned HIVE-18316: - > HiveEndPoint should only work with full acid tables > --- > > Key: HIVE-18316 > URL: https://issues.apache.org/jira/browse/HIVE-18316 > Project: Hive > Issue Type: Bug > Components: HCatalog, Transactions >Affects Versions: 3.0.0 >Reporter: Eugene Koifman >Assignee: Eugene Koifman > > Now that we have both full acid and 1/4 acid, the check needs to be updated to > test for full acid tables. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Work started] (HIVE-18306) Fix spark smb tests
[ https://issues.apache.org/jira/browse/HIVE-18306?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Work on HIVE-18306 started by Deepak Jaiswal. - > Fix spark smb tests > --- > > Key: HIVE-18306 > URL: https://issues.apache.org/jira/browse/HIVE-18306 > Project: Hive > Issue Type: Bug >Reporter: Zoltan Haindrich >Assignee: Deepak Jaiswal > > Seems to me that > {{TestSparkCliDriver#testCliDriver\[auto_sortmerge_join_10\]}} and > {{TestSparkCliDriver#testCliDriver\[bucketsortoptimize_insert_7\]}} are > failing since HIVE-18208 went in. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Assigned] (HIVE-18306) Fix spark smb tests
[ https://issues.apache.org/jira/browse/HIVE-18306?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Deepak Jaiswal reassigned HIVE-18306: - Assignee: Deepak Jaiswal > Fix spark smb tests > --- > > Key: HIVE-18306 > URL: https://issues.apache.org/jira/browse/HIVE-18306 > Project: Hive > Issue Type: Bug >Reporter: Zoltan Haindrich >Assignee: Deepak Jaiswal > > Seems to me that > {{TestSparkCliDriver#testCliDriver\[auto_sortmerge_join_10\]}} and > {{TestSparkCliDriver#testCliDriver\[bucketsortoptimize_insert_7\]}} are > failing since HIVE-18208 went in. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HIVE-18297) Add builder for metastore Thrift classes missed in the first pass
[ https://issues.apache.org/jira/browse/HIVE-18297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16297378#comment-16297378 ] Peter Vary commented on HIVE-18297: --- I will be on PTO till January, and the tests will provide plenty to concentrate on. So if we do not want to hold back other efforts, we should split it. Just wanted to avoid duplicate work, since the FunctionBuilder is ready :) Thanks, Peter > Add builder for metastore Thrift classes missed in the first pass > - > > Key: HIVE-18297 > URL: https://issues.apache.org/jira/browse/HIVE-18297 > Project: Hive > Issue Type: Task > Components: Standalone Metastore >Affects Versions: 3.0.0 >Reporter: Alan Gates >Assignee: Alan Gates > > The first pass of adding builders for the metastore Thrift classes missed > Function, statistics, and WM* objects. Builders for these should be added. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HIVE-18297) Add builder for metastore Thrift classes missed in the first pass
[ https://issues.apache.org/jira/browse/HIVE-18297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16297363#comment-16297363 ] Alan Gates commented on HIVE-18297: --- Sure, if you want to take this JIRA over, go for it. Or you can break the functions out into a separate JIRA, post your patch there, and we'll use this one just to track the WM* and stats builders. > Add builder for metastore Thrift classes missed in the first pass > - > > Key: HIVE-18297 > URL: https://issues.apache.org/jira/browse/HIVE-18297 > Project: Hive > Issue Type: Task > Components: Standalone Metastore >Affects Versions: 3.0.0 >Reporter: Alan Gates >Assignee: Alan Gates > > The first pass of adding builders for the metastore Thrift classes missed > Function, statistics, and WM* objects. Builders for these should be added. -- This message was sent by Atlassian JIRA (v6.4.14#64029)