Re: Review Request 71218: HIVE-4605 Hive job fails while closing reducer output - Unable to rename
---
This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/71218/
---

(Updated Aug. 5, 2019, 6:44 p.m.)

Review request for hive.

Changes
---

Added not null verification for finalPaths[idx].

Repository: hive-git

Description
---

Rebase the origin commit for master branch

Diffs (updated)
---

- ql/src/java/org/apache/hadoop/hive/ql/exec/FileSinkOperator.java 9ad4e71482

Diff: https://reviews.apache.org/r/71218/diff/2/

Changes: https://reviews.apache.org/r/71218/diff/1-2/

Testing
---

Thanks,

Oleksiy Sayankin
[jira] [Created] (HIVE-22084) Implement exchange partitions related methods on SessionHiveMetastoreClient
Laszlo Pinter created HIVE-22084:
---

Summary: Implement exchange partitions related methods on SessionHiveMetastoreClient
Key: HIVE-22084
URL: https://issues.apache.org/jira/browse/HIVE-22084
Project: Hive
Issue Type: Sub-task
Components: Hive
Reporter: Laszlo Pinter
Assignee: Laszlo Pinter
Fix For: 4.0.0

IMetaStoreClient exposes the following methods related to exchanging partitions:
{code:java}
Partition exchange_partition(Map partitionSpecs, String sourceDb, String sourceTable, String destdb, String destTableName);
Partition exchange_partition(Map partitionSpecs, String sourceCat, String sourceDb, String sourceTable, String destCat, String destdb, String destTableName);
List exchange_partitions(Map partitionSpecs, String sourceDb, String sourceTable, String destdb, String destTableName);
List exchange_partitions(Map partitionSpecs, String sourceCat, String sourceDb, String sourceTable, String destCat, String destdb, String destTableName);
{code}
In order to support partitions on temporary tables, these methods must be implemented.

-- This message was sent by Atlassian JIRA (v7.6.14#76016)
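The idea behind the sub-task, sketched very roughly: a session-level client keeps temporary tables in session state and must move matching partitions between them itself instead of calling the remote metastore. The class below is a hypothetical, heavily simplified stand-in (the types, method names, and matching logic are illustrative assumptions, not the real Hive API):

{code:java}
import java.util.*;

// Hypothetical sketch: a session-level client that exchanges partitions
// between in-session temporary tables. Real code would delegate to the
// remote metastore when either table is not temporary.
public class SessionClientSketch {
    // Stand-in for a partition: the table it belongs to plus its spec values.
    record Partition(String table, Map<String, String> spec) {}

    private final Map<String, List<Partition>> tempTables = new HashMap<>();

    void createTempTable(String name) { tempTables.put(name, new ArrayList<>()); }

    void addPartition(String table, Map<String, String> spec) {
        tempTables.get(table).add(new Partition(table, spec));
    }

    // Move every source partition whose spec contains all entries of
    // partitionSpecs into the destination table, returning the moved set.
    List<Partition> exchangePartitions(Map<String, String> partitionSpecs,
                                       String sourceTable, String destTable) {
        List<Partition> moved = new ArrayList<>();
        tempTables.get(sourceTable).removeIf(p -> {
            if (p.spec().entrySet().containsAll(partitionSpecs.entrySet())) {
                Partition np = new Partition(destTable, p.spec());
                tempTables.get(destTable).add(np);
                moved.add(np);
                return true; // drop from the source table
            }
            return false;
        });
        return moved;
    }

    public static void main(String[] args) {
        SessionClientSketch c = new SessionClientSketch();
        c.createTempTable("src");
        c.createTempTable("dst");
        c.addPartition("src", Map.of("ds", "2019-08-05"));
        List<Partition> moved =
            c.exchangePartitions(Map.of("ds", "2019-08-05"), "src", "dst");
        System.out.println(moved.size()); // prints 1
    }
}
{code}

The real implementation would of course have to honor the catalog/db arguments of the IMetaStoreClient signatures above and fall through to the remote client for non-temporary tables.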
[jira] [Created] (HIVE-22083) Values of tag order cannot be null, so it can be "byte" instead of "Byte"
Ivan Suller created HIVE-22083:
---

Summary: Values of tag order cannot be null, so it can be "byte" instead of "Byte"
Key: HIVE-22083
URL: https://issues.apache.org/jira/browse/HIVE-22083
Project: Hive
Issue Type: Improvement
Components: Hive
Reporter: Ivan Suller

Values of tag order cannot be null, so it can be "byte" instead of "Byte". Switching between Byte and byte is "cheap" - the Byte objects are cached by the JVM - but it still costs a bit more memory and CPU usage.

-- This message was sent by Atlassian JIRA (v7.6.14#76016)
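A small stand-alone demonstration of the point being made (class name is illustrative): all 256 Byte values are cached by the JVM, so boxing never allocates, but each use still goes through a Byte.valueOf() call and an object reference, while a primitive byte field costs one byte and nothing else:

{code:java}
public class ByteBoxingDemo {
    public static void main(String[] args) {
        // Boxed bytes come from the JVM-wide cache, so the two
        // references are identical objects.
        Byte a = Byte.valueOf((byte) 5);
        Byte b = Byte.valueOf((byte) 5);
        System.out.println(a == b); // prints true: cached instance

        // A primitive avoids the object (and the valueOf call) entirely;
        // comparing it to a Byte auto-unboxes the Byte.
        byte c = 5;
        System.out.println(c == a); // prints true
    }
}
{code}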
[jira] [Created] (HIVE-22082) SQL Error [2] [08S01]: Error while processing statement: FAILED: Execution Error
冯伟 created HIVE-22082:
---

Summary: SQL Error [2] [08S01]: Error while processing statement: FAILED: Execution Error
Key: HIVE-22082
URL: https://issues.apache.org/jira/browse/HIVE-22082
Project: Hive
Issue Type: Bug
Components: Hive, hpl/sql, SQL
Affects Versions: 3.1.0
Environment: The failing environment is Hive 3.1.0.3.1.0.0-78; the environment where it works is Hive 1.2.1000.
Reporter: 冯伟

The SQL statement runs without problems on the old Hive 1.2.1 version, but fails on version 3.1. SQL statement:
{code:java}
select id from ti_ins.instinct_result_info lateral view explode(split(concat_ws(',',rule_triggered_1,rule_triggered_2),',')) num as trigger LIMIT 1
{code}
{code:java}
SQL Error [2] [08S01]: Error while processing statement: FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.tez.TezTask. Vertex failed, vertexName=Map 1, vertexId=vertex_1563265369069_0167_15_00, diagnostics=[Task failed, taskId=task_1563265369069_0167_15_00_03, diagnostics=[TaskAttempt 0 failed, info=[Error: Error while running task ( failure ) : attempt_1563265369069_0167_15_00_03_0:java.lang.RuntimeException: java.lang.RuntimeException: java.io.IOException: java.lang.IllegalArgumentException: bucketId out of range: -1 at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:296) at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:250) at org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:374) at org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:73) at org.apache.tez.runtime.task.TaskRunner2Callable$1.run(TaskRunner2Callable.java:61) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730) at org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:61) at org.apache.tez.runtime.task.TaskRunner2Callable.callInternal(TaskRunner2Callable.java:37) at
org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36) at com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:108) at com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:41) at com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:77) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) at java.lang.Thread.run(Thread.java:748) Caused by: java.lang.RuntimeException: java.io.IOException: java.lang.IllegalArgumentException: bucketId out of range: -1 at org.apache.hadoop.mapred.split.TezGroupedSplitsInputFormat$TezGroupedSplitsRecordReader.initNextRecordReader(TezGroupedSplitsInputFormat.java:206) at org.apache.hadoop.mapred.split.TezGroupedSplitsInputFormat$TezGroupedSplitsRecordReader.(TezGroupedSplitsInputFormat.java:145) at org.apache.hadoop.mapred.split.TezGroupedSplitsInputFormat.getRecordReader(TezGroupedSplitsInputFormat.java:111) at org.apache.tez.mapreduce.lib.MRReaderMapred.setupOldRecordReader(MRReaderMapred.java:157) at org.apache.tez.mapreduce.lib.MRReaderMapred.setSplit(MRReaderMapred.java:83) at org.apache.tez.mapreduce.input.MRInput.initFromEventInternal(MRInput.java:703) at org.apache.tez.mapreduce.input.MRInput.initFromEvent(MRInput.java:662) at org.apache.tez.mapreduce.input.MRInputLegacy.checkAndAwaitRecordReaderInitialization(MRInputLegacy.java:150) at org.apache.tez.mapreduce.input.MRInputLegacy.init(MRInputLegacy.java:114) at org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.getMRInput(MapRecordProcessor.java:532) at org.apache.hadoop.hive.ql.exec.tez.MapRecordProcessor.init(MapRecordProcessor.java:178) at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:266) ... 
16 more Caused by: java.io.IOException: java.lang.IllegalArgumentException: bucketId out of range: -1 at org.apache.hadoop.hive.io.HiveIOExceptionHandlerChain.handleRecordReaderCreationException(HiveIOExceptionHandlerChain.java:97) at org.apache.hadoop.hive.io.HiveIOExceptionHandlerUtil.handleRecordReaderCreationException(HiveIOExceptionHandlerUtil.java:57) at org.apache.hadoop.hive.ql.io.HiveInputFormat.getRecordReader(HiveInputFormat.java:421) at org.apache.hadoop.mapred.split.TezGroupedSplitsInputFormat$TezGroupedSplitsRecordReader.initNextRecordReader(TezGroupedSplitsInputFormat.java:203) ... 27 more{code}

-- This message was sent by Atlassian JIRA (v7.6.14#76016)