[jira] [Commented] (HIVE-14615) Temp table leaves behind insert command
[ https://issues.apache.org/jira/browse/HIVE-14615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16305043#comment-16305043 ] Madhudeep Petwal commented on HIVE-14615: - Temp tables are created in the genValuesTempTable() function. I was thinking of dropping the table with the dropTableOrPartitions() function after the query inserts successfully. By the way, I have not coded this yet. > Temp table leaves behind insert command > --- > > Key: HIVE-14615 > URL: https://issues.apache.org/jira/browse/HIVE-14615 > Project: Hive > Issue Type: Bug > Components: Query Processor >Reporter: Chaoyu Tang >Assignee: Madhudeep Petwal > > {code} > create table test (key int, value string); > insert into test values (1, 'val1'); > show tables; > test > values__tmp__table__1 > {code} > the temp table values__tmp__table__1 results from insert into ... values > and persists until the session is closed. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HIVE-18346) Beeline could not launch because of size of history
[ https://issues.apache.org/jira/browse/HIVE-18346?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] JQu updated HIVE-18346: --- Summary: Beeline could not launch because of size of history (was: Beeline version 1.2.1) > Beeline could not launch because of size of history > --- > > Key: HIVE-18346 > URL: https://issues.apache.org/jira/browse/HIVE-18346 > Project: Hive > Issue Type: Bug > Components: Beeline >Affects Versions: 1.2.1 >Reporter: JQu >Priority: Minor > > Beeline version 1.2.1 fails to launch when ${user.home}/.beeline/history > grows larger than 39 MB; it reports java.lang.OutOfMemoryError.
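The report above suggests a simple guard that a wrapper script could apply before launching Beeline. The sketch below is hypothetical stdlib Java, not Hive code: the class name, method names, and the 10 MB threshold are assumptions made for illustration; only the ${user.home}/.beeline/history path and the OutOfMemoryError symptom come from the report.

```java
import java.io.File;

public class BeelineHistoryCheck {
    // Hypothetical pre-launch guard (not Hive code): Beeline 1.2.1 loads the
    // whole history file into memory on startup, so an oversized
    // ${user.home}/.beeline/history ends in java.lang.OutOfMemoryError.
    // The 10 MB threshold below is an arbitrary example, not a Hive constant.
    static final long MAX_HISTORY_BYTES = 10L * 1024 * 1024;

    static boolean historyTooLarge(long sizeBytes) {
        return sizeBytes > MAX_HISTORY_BYTES;
    }

    static boolean historyTooLarge(File history) {
        // A missing history file is fine; Beeline simply starts a new one.
        return history.exists() && historyTooLarge(history.length());
    }

    static File defaultHistoryFile() {
        return new File(System.getProperty("user.home"), ".beeline/history");
    }
}
```

A wrapper could call `historyTooLarge(defaultHistoryFile())` and truncate or rotate the file before starting Beeline; the 39 MB figure from the report would trip this check.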
[jira] [Commented] (HIVE-18336) add Safe Mode
[ https://issues.apache.org/jira/browse/HIVE-18336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16304752#comment-16304752 ] Andrew Sherman commented on HIVE-18336: --- [~ekoifman] Can you explain a bit more what this jira means? Thanks > add Safe Mode > --- > > Key: HIVE-18336 > URL: https://issues.apache.org/jira/browse/HIVE-18336 > Project: Hive > Issue Type: Bug > Components: Transactions >Reporter: Eugene Koifman >Assignee: Eugene Koifman >
[jira] [Commented] (HIVE-18343) Remove LinkedList from ColumnStatsSemanticAnalyzer.java
[ https://issues.apache.org/jira/browse/HIVE-18343?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16304713#comment-16304713 ] BELUGA BEHR commented on HIVE-18343: {code} ./ql/src/java/org/apache/hadoop/hive/ql/parse/ColumnStatsSemanticAnalyzer.java:99: List colName = new ArrayList(numCols);: warning: 'block' child have incorrect indentation level 8, expected level should be 6. ./ql/src/java/org/apache/hadoop/hive/ql/parse/ColumnStatsSemanticAnalyzer.java:99: List colName = new ArrayList(numCols);: warning: 'member def type' have incorrect indentation level 8, expected level should be 6. {code} > Remove LinkedList from ColumnStatsSemanticAnalyzer.java > --- > > Key: HIVE-18343 > URL: https://issues.apache.org/jira/browse/HIVE-18343 > Project: Hive > Issue Type: Improvement > Components: HiveServer2 >Affects Versions: 3.0.0 >Reporter: BELUGA BEHR >Assignee: BELUGA BEHR >Priority: Trivial > Attachments: HIVE-18343.1.patch > > > Remove {{LinkedList}} in favor of {{ArrayList}} for class > {{org.apache.hadoop.hive.ql.parse.ColumnStatsSemanticAnalyzer}}. > {quote} > The size, isEmpty, get, set, iterator, and listIterator operations run in > constant time. The add operation runs in amortized constant time, that is, > adding n elements requires O(n) time. All of the other operations run in > linear time (roughly speaking). *The constant factor is low compared to that > for the LinkedList implementation.* > {quote}
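The checkstyle output and the quoted Javadoc above describe the intended change. A minimal sketch of its shape, with a hypothetical method and class name (only the `colName` variable, the `numCols`-style pre-sizing, and the LinkedList-to-ArrayList swap come from the ticket):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class ColumnNames {
    // Sketch of the change HIVE-18343 proposes: since the number of columns
    // is known up front, an ArrayList pre-sized to that count avoids
    // LinkedList's per-node allocation and pointer chasing. get() and
    // iteration then run with the lower constant factor the quoted
    // Javadoc mentions, and no resizing occurs while adding.
    static List<String> collectColumnNames(String[] cols) {
        List<String> colName = new ArrayList<>(cols.length); // was: new LinkedList<>()
        colName.addAll(Arrays.asList(cols));
        return colName;
    }
}
```

The trade-off only favors LinkedList for frequent insertions or removals in the middle of the list, which a collect-then-iterate usage like this never does.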
[jira] [Commented] (HIVE-16480) ORC file with empty array and array fails to read
[ https://issues.apache.org/jira/browse/HIVE-16480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16304711#comment-16304711 ] Owen O'Malley commented on HIVE-16480: -- This patch applies to branch-2.1 and branch-2.2. In branch-2.3 and above, Hive uses the ORC project artifacts, so we'll need to release from ORC. Once the patch goes in, we should start that process. > ORC file with empty array and array fails to read > > > Key: HIVE-16480 > URL: https://issues.apache.org/jira/browse/HIVE-16480 > Project: Hive > Issue Type: Bug >Affects Versions: 2.1.1, 2.2.0 >Reporter: David Capwell >Assignee: Owen O'Malley > Labels: pull-request-available > > We have a schema that has an array in it. We were unable to read this > file, and digging into ORC it seems that the issue appears when the array is empty. > Here is the stack trace > {code:title=EmptyList.log|borderStyle=solid} > ERROR 2017-04-19 09:29:17,075 [main] [EmptyList] [line 56] Failed to work > with type float > java.io.IOException: Error reading file: > /var/folders/t8/t5x1031d7mn17f6xpwnkkv_4gn/T/1492619355819-0/file-float.orc > at > org.apache.orc.impl.RecordReaderImpl.nextBatch(RecordReaderImpl.java:1052) > ~[hive-orc-2.1.1.jar:2.1.1] > at > org.apache.hadoop.hive.ql.io.orc.RecordReaderImpl.nextBatch(RecordReaderImpl.java:135) > ~[hive-exec-2.1.1.jar:2.1.1] > at EmptyList.emptyList(EmptyList.java:49) ~[test-classes/:na] > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > ~[na:1.8.0_121] > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > ~[na:1.8.0_121] > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > ~[na:1.8.0_121] > at java.lang.reflect.Method.invoke(Method.java:498) ~[na:1.8.0_121] > at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50) > [junit-4.12.jar:4.12] > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > 
[junit-4.12.jar:4.12] > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47) > [junit-4.12.jar:4.12] > at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > [junit-4.12.jar:4.12] > at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325) > [junit-4.12.jar:4.12] > at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78) > [junit-4.12.jar:4.12] > at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57) > [junit-4.12.jar:4.12] > at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) > [junit-4.12.jar:4.12] > at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) > [junit-4.12.jar:4.12] > at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) > [junit-4.12.jar:4.12] > at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) > [junit-4.12.jar:4.12] > at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) > [junit-4.12.jar:4.12] > at org.junit.runners.ParentRunner.run(ParentRunner.java:363) > [junit-4.12.jar:4.12] > at org.junit.runner.JUnitCore.run(JUnitCore.java:137) [junit-4.12.jar:4.12] > at > com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:68) > [junit-rt.jar:na] > at > com.intellij.rt.execution.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:51) > [junit-rt.jar:na] > at > com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:237) > [junit-rt.jar:na] > at com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:70) > [junit-rt.jar:na] > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > ~[na:1.8.0_121] > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > ~[na:1.8.0_121] > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > ~[na:1.8.0_121] > at 
java.lang.reflect.Method.invoke(Method.java:498) ~[na:1.8.0_121] > at com.intellij.rt.execution.application.AppMain.main(AppMain.java:147) > [idea_rt.jar:na] > Caused by: java.io.EOFException: Read past EOF for compressed stream Stream > for column 1 kind DATA position: 0 length: 0 range: 0 offset: 0 limit: 0 > at > org.apache.orc.impl.SerializationUtils.readFully(SerializationUtils.java:118) > ~[hive-orc-2.1.1.jar:2.1.1] > at > org.apache.orc.impl.SerializationUtils.readFloat(SerializationUtils.java:78) > ~[hive-orc-2.1.1.jar:2.1.1] > at > org.apache.orc.impl.TreeReaderFactory$FloatTreeReader.nextVector(TreeReaderFactory.java:619) > ~[hive-orc-2.1.1.jar:2.1.1] > at >
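The root cause at the bottom of the trace ("Read past EOF ... length: 0") is SerializationUtils.readFloat asking for four bytes from a zero-length DATA stream. A plain-JDK analogue of that failure mode (not ORC code; the class and method names below are invented for illustration) is:

```java
import java.io.ByteArrayInputStream;
import java.io.DataInputStream;
import java.io.EOFException;
import java.io.IOException;

public class EmptyStreamRead {
    // Mirrors the failure mode in the stack trace: a float reader handed a
    // zero-length DATA stream hits EOF on the very first read. The fix in
    // ORC-285 is on the reader side, so that an empty stream backing an
    // empty array is tolerated instead of raising EOFException.
    static boolean hitsEof() {
        byte[] empty = new byte[0]; // "length: 0" in the ORC error message
        try (DataInputStream in = new DataInputStream(new ByteArrayInputStream(empty))) {
            in.readFloat(); // needs 4 bytes, has 0
            return false;
        } catch (EOFException e) {
            return true; // "Read past EOF", as in the trace
        } catch (IOException e) {
            return false;
        }
    }
}
```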
[jira] [Updated] (HIVE-16480) ORC file with empty array and array fails to read
[ https://issues.apache.org/jira/browse/HIVE-16480?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Owen O'Malley updated HIVE-16480: - Affects Version/s: 2.2.0
[jira] [Updated] (HIVE-16480) ORC file with empty array and array fails to read
[ https://issues.apache.org/jira/browse/HIVE-16480?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Owen O'Malley updated HIVE-16480: - Status: Patch Available (was: Open)
[jira] [Updated] (HIVE-16480) ORC file with empty array and array fails to read
[ https://issues.apache.org/jira/browse/HIVE-16480?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ASF GitHub Bot updated HIVE-16480: -- Labels: pull-request-available (was: )
[jira] [Commented] (HIVE-16480) ORC file with empty array and array fails to read
[ https://issues.apache.org/jira/browse/HIVE-16480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16304707#comment-16304707 ] ASF GitHub Bot commented on HIVE-16480: --- GitHub user omalley opened a pull request: https://github.com/apache/hive/pull/285 HIVE-16480 (ORC-285) Empty vector batches of floats or doubles gets EOFException. Signed-off-by: Owen O'Malley. You can merge this pull request into a Git repository by running: $ git pull https://github.com/omalley/hive hive-16480-2.1 Alternatively you can review and apply these changes as the patch at: https://github.com/apache/hive/pull/285.patch To close this pull request, make a commit to your master/trunk branch with (at least) the following in the commit message: This closes #285 commit 43d7fe2f0fc9baeb311814da1f7a65cfd546145b Author: Owen O'Malley Date: 2017-12-27T17:45:25Z HIVE-16480 (ORC-285) Empty vector batches of floats or doubles gets EOFException. Signed-off-by: Owen O'Malley
[jira] [Commented] (HIVE-14615) Temp table leaves behind insert command
[ https://issues.apache.org/jira/browse/HIVE-14615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16304695#comment-16304695 ] Andrew Sherman commented on HIVE-14615: --- Hi [~minions] I actually have two different fixes for this all coded up. I have been distracted for a while and have not submitted them. Did you have an idea of how to fix this? > Temp table leaves behind insert command > --- > > Key: HIVE-14615 > URL: https://issues.apache.org/jira/browse/HIVE-14615 > Project: Hive > Issue Type: Bug > Components: Query Processor >Reporter: Chaoyu Tang >Assignee: Madhudeep Petwal > > {code} > create table test (key int, value string); > insert into test values (1, 'val1'); > show tables; > test > values__tmp__table__1 > {code} > the temp table values__tmp__table__1 results from insert into ... values > and persists until the session is closed.
[jira] [Commented] (HIVE-16480) ORC file with empty array and array fails to read
[ https://issues.apache.org/jira/browse/HIVE-16480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16304684#comment-16304684 ] Owen O'Malley commented on HIVE-16480: -- We can use this jira to backport the fix from ORC-285.
[jira] [Assigned] (HIVE-16480) ORC file with empty array and array fails to read
[ https://issues.apache.org/jira/browse/HIVE-16480?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Owen O'Malley reassigned HIVE-16480: Assignee: Owen O'Malley > ORC file with empty array and array fails to read > > > Key: HIVE-16480 > URL: https://issues.apache.org/jira/browse/HIVE-16480 > Project: Hive > Issue Type: Bug >Affects Versions: 2.1.1 >Reporter: David Capwell >Assignee: Owen O'Malley > > We have a schema that has a array in it. We were unable to read this > file and digging into ORC it seems that the issue is when the array is empty. > Here is the stack trace > {code:title=EmptyList.log|borderStyle=solid} > ERROR 2017-04-19 09:29:17,075 [main] [EmptyList] [line 56] Failed to work > with type float > java.io.IOException: Error reading file: > /var/folders/t8/t5x1031d7mn17f6xpwnkkv_4gn/T/1492619355819-0/file-float.orc > at > org.apache.orc.impl.RecordReaderImpl.nextBatch(RecordReaderImpl.java:1052) > ~[hive-orc-2.1.1.jar:2.1.1] > at > org.apache.hadoop.hive.ql.io.orc.RecordReaderImpl.nextBatch(RecordReaderImpl.java:135) > ~[hive-exec-2.1.1.jar:2.1.1] > at EmptyList.emptyList(EmptyList.java:49) ~[test-classes/:na] > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > ~[na:1.8.0_121] > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > ~[na:1.8.0_121] > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > ~[na:1.8.0_121] > at java.lang.reflect.Method.invoke(Method.java:498) ~[na:1.8.0_121] > at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50) > [junit-4.12.jar:4.12] > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > [junit-4.12.jar:4.12] > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47) > [junit-4.12.jar:4.12] > at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > [junit-4.12.jar:4.12] > at 
org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325) > [junit-4.12.jar:4.12] > at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78) > [junit-4.12.jar:4.12] > at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57) > [junit-4.12.jar:4.12] > at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290) > [junit-4.12.jar:4.12] > at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71) > [junit-4.12.jar:4.12] > at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288) > [junit-4.12.jar:4.12] > at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58) > [junit-4.12.jar:4.12] > at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268) > [junit-4.12.jar:4.12] > at org.junit.runners.ParentRunner.run(ParentRunner.java:363) > [junit-4.12.jar:4.12] > at org.junit.runner.JUnitCore.run(JUnitCore.java:137) [junit-4.12.jar:4.12] > at > com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:68) > [junit-rt.jar:na] > at > com.intellij.rt.execution.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:51) > [junit-rt.jar:na] > at > com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:237) > [junit-rt.jar:na] > at com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:70) > [junit-rt.jar:na] > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > ~[na:1.8.0_121] > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > ~[na:1.8.0_121] > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > ~[na:1.8.0_121] > at java.lang.reflect.Method.invoke(Method.java:498) ~[na:1.8.0_121] > at com.intellij.rt.execution.application.AppMain.main(AppMain.java:147) > [idea_rt.jar:na] > Caused by: java.io.EOFException: Read past EOF for compressed stream Stream > for column 1 kind DATA position: 0 length: 0 range: 0 
offset: 0 limit: 0 > at > org.apache.orc.impl.SerializationUtils.readFully(SerializationUtils.java:118) > ~[hive-orc-2.1.1.jar:2.1.1] > at > org.apache.orc.impl.SerializationUtils.readFloat(SerializationUtils.java:78) > ~[hive-orc-2.1.1.jar:2.1.1] > at > org.apache.orc.impl.TreeReaderFactory$FloatTreeReader.nextVector(TreeReaderFactory.java:619) > ~[hive-orc-2.1.1.jar:2.1.1] > at > org.apache.orc.impl.TreeReaderFactory$ListTreeReader.nextVector(TreeReaderFactory.java:1902) > ~[hive-orc-2.1.1.jar:2.1.1] > at > org.apache.orc.impl.TreeReaderFactory$TreeReader.nextBatch(TreeReaderFactory.java:154) > ~[hive-orc-2.1.1.jar:2.1.1] > at >
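The mechanism behind the EOFException can be illustrated without ORC itself. The sketch below is a toy model, not the actual ORC reader code: the parent list reader asks the child reader for some total number of element values, and when the child's DATA stream is empty, any positive request reads past EOF. The fix backported from ORC-285 amounts to not issuing the read when the element count is zero. The class and method names here are invented for the demo.

```java
import java.io.ByteArrayInputStream;
import java.io.DataInputStream;
import java.io.IOException;

public class EmptyListDemo {
    // Toy stand-in for a child tree reader pulling float values from a
    // column's DATA stream. With an empty stream, any read > 0 values
    // throws EOFException -- the failure mode in the stack trace above.
    static float[] readChildFloats(DataInputStream data, int totalChildren) throws IOException {
        // Guard mirroring the fix: an empty list has no child values to read.
        if (totalChildren == 0) {
            return new float[0];
        }
        float[] out = new float[totalChildren];
        for (int i = 0; i < totalChildren; i++) {
            out[i] = data.readFloat();  // throws java.io.EOFException on an empty stream
        }
        return out;
    }

    public static void main(String[] args) throws IOException {
        DataInputStream empty = new DataInputStream(new ByteArrayInputStream(new byte[0]));
        // With the guard, an empty array round-trips fine:
        System.out.println(readChildFloats(empty, 0).length);
        // Without the guard (a positive request against an empty stream), the read fails:
        try {
            readChildFloats(empty, 1);
        } catch (java.io.EOFException e) {
            System.out.println("read past EOF, as in the reported stack trace");
        }
    }
}
```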
[jira] [Commented] (HIVE-14759) GenericUDF.getFuncName breaks with UDF Classnames less than 10 characters
[ https://issues.apache.org/jira/browse/HIVE-14759?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16304654#comment-16304654 ] Hive QA commented on HIVE-14759: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12903813/HIVE-14759.2.patch {color:red}ERROR:{color} -1 due to no test(s) being added or modified. {color:red}ERROR:{color} -1 due to 22 failed/errored test(s), 11542 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_join25] (batchId=72) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[mapjoin_hook] (batchId=12) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppd_join5] (batchId=35) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucketsortoptimize_insert_2] (batchId=151) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[cbo_rp_lineage2] (batchId=152) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[hybridgrace_hashjoin_2] (batchId=156) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata] (batchId=164) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[lineage2] (batchId=163) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[lineage3] (batchId=160) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid] (batchId=168) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid_fast] (batchId=159) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] (batchId=159) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_reduce_groupby_duplicate_cols] (batchId=158) org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[authorization_part] (batchId=93) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[ppd_join5] (batchId=120) 
org.apache.hadoop.hive.metastore.TestEmbeddedHiveMetaStore.testTransactionalValidation (batchId=213) org.apache.hadoop.hive.ql.TestAcidOnTez.testMapJoinOnTez (batchId=222) org.apache.hadoop.hive.ql.io.TestDruidRecordWriter.testWrite (batchId=253) org.apache.hadoop.hive.ql.parse.TestReplicationScenarios.testConstraints (batchId=225) org.apache.hive.jdbc.TestSSL.testConnectionMismatch (batchId=231) org.apache.hive.jdbc.TestSSL.testConnectionWrongCertCN (batchId=231) org.apache.hive.jdbc.TestSSL.testMetastoreConnectionWrongCertCN (batchId=231) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/8386/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/8386/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-8386/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 22 tests failed {noformat} This message is automatically generated. 
ATTACHMENT ID: 12903813 - PreCommit-HIVE-Build > GenericUDF.getFuncName breaks with UDF Classnames less than 10 characters > - > > Key: HIVE-14759 > URL: https://issues.apache.org/jira/browse/HIVE-14759 > Project: Hive > Issue Type: Bug > Components: UDF >Affects Versions: 2.1.0 >Reporter: Clemens Valiente >Assignee: Clemens Valiente >Priority: Trivial > Attachments: HIVE-14759.1.patch, HIVE-14759.2.patch, HIVE-14759.patch > > Original Estimate: 1h > Remaining Estimate: 1h > > {code} > return getClass().getSimpleName().substring(10).toLowerCase(); > {code} > causes > {code} > java.lang.StringIndexOutOfBoundsException: String index out of range: -2 > at java.lang.String.substring(String.java:1875) > at > org.apache.hadoop.hive.ql.udf.generic.GenericUDF.getFuncName(GenericUDF.java:258) > {code} > if the Classname of my UDF is less than 10 characters. > this was probably to remove "GenericUDF" from the classname but causes issues > if the class doesn't start with it. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
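The substring bug in the issue description is easy to demonstrate in isolation. The sketch below is not the Hive patch itself; the UDF class names are invented, and the guarded version is one plausible shape of a fix: strip the "GenericUDF" prefix only when the simple class name actually starts with it.

```java
public class FuncNameDemo {
    // Mirrors the failing line from GenericUDF.getFuncName(): it blindly
    // assumes the simple class name is at least 10 characters long.
    static String brittleFuncName(Class<?> cls) {
        return cls.getSimpleName().substring(10).toLowerCase();
    }

    // Guarded alternative: only strip the prefix when it is present.
    static String safeFuncName(Class<?> cls) {
        String name = cls.getSimpleName();
        String prefix = "GenericUDF";
        if (name.startsWith(prefix)) {
            name = name.substring(prefix.length());
        }
        return name.toLowerCase();
    }

    // Hypothetical UDF classes for the demo.
    static class GenericUDFUpper {}  // 15 chars: the brittle version happens to work
    static class MyUdf {}           // 5 chars: the brittle version throws

    public static void main(String[] args) {
        System.out.println(safeFuncName(GenericUDFUpper.class));  // upper
        System.out.println(safeFuncName(MyUdf.class));            // myudf
        try {
            brittleFuncName(MyUdf.class);
        } catch (StringIndexOutOfBoundsException e) {
            // "MyUdf".substring(10) is out of range, as in the reported trace
            System.out.println("brittle version failed: " + e.getMessage());
        }
    }
}
```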
[jira] [Commented] (HIVE-14759) GenericUDF.getFuncName breaks with UDF Classnames less than 10 characters
[ https://issues.apache.org/jira/browse/HIVE-14759?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16304596#comment-16304596 ] Hive QA commented on HIVE-14759: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Findbugs executables are not available. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 41s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 0s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 32s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 53s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 15s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 2s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 2s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 32s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 55s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 12s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 13m 17s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus/dev-support/hive-personality.sh | | git revision | master / 035eca3 | | Default Java | 1.8.0_111 | | modules | C: ql U: ql | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-8386/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. > GenericUDF.getFuncName breaks with UDF Classnames less than 10 characters > - > > Key: HIVE-14759 > URL: https://issues.apache.org/jira/browse/HIVE-14759 > Project: Hive > Issue Type: Bug > Components: UDF >Affects Versions: 2.1.0 >Reporter: Clemens Valiente >Assignee: Clemens Valiente >Priority: Trivial > Attachments: HIVE-14759.1.patch, HIVE-14759.2.patch, HIVE-14759.patch > > Original Estimate: 1h > Remaining Estimate: 1h > > {code} > return getClass().getSimpleName().substring(10).toLowerCase(); > {code} > causes > {code} > java.lang.StringIndexOutOfBoundsException: String index out of range: -2 > at java.lang.String.substring(String.java:1875) > at > org.apache.hadoop.hive.ql.udf.generic.GenericUDF.getFuncName(GenericUDF.java:258) > {code} > if the Classname of my UDF is less than 10 characters. > this was probably to remove "GenericUDF" from the classname but causes issues > if the class doesn't start with it. 

[jira] [Commented] (HIVE-14759) GenericUDF.getFuncName breaks with UDF Classnames less than 10 characters
[ https://issues.apache.org/jira/browse/HIVE-14759?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16304587#comment-16304587 ] Hive QA commented on HIVE-14759: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12903805/HIVE-14759.1.patch {color:red}ERROR:{color} -1 due to no test(s) being added or modified. {color:red}ERROR:{color} -1 due to 24 failed/errored test(s), 11542 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[auto_join25] (batchId=72) org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver[ppd_join5] (batchId=35) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucket_map_join_tez1] (batchId=169) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[bucketsortoptimize_insert_2] (batchId=151) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[cbo_rp_lineage2] (batchId=152) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[hybridgrace_hashjoin_2] (batchId=156) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[insert_values_orig_table_use_metadata] (batchId=164) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[lineage2] (batchId=163) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[lineage3] (batchId=160) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid] (batchId=168) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[llap_acid_fast] (batchId=159) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[sysdb] (batchId=159) org.apache.hadoop.hive.cli.TestMiniLlapLocalCliDriver.testCliDriver[vector_reduce_groupby_duplicate_cols] (batchId=158) org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver[bucketizedhiveinputformat] (batchId=177) org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[authorization_part] (batchId=93) 
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testCliDriver[stats_aggregator_error_1] (batchId=93) org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver[ppd_join5] (batchId=120) org.apache.hadoop.hive.metastore.TestEmbeddedHiveMetaStore.testTransactionalValidation (batchId=213) org.apache.hadoop.hive.ql.TestAcidOnTez.testMapJoinOnTez (batchId=222) org.apache.hadoop.hive.ql.io.TestDruidRecordWriter.testWrite (batchId=253) org.apache.hadoop.hive.ql.parse.TestReplicationScenarios.testConstraints (batchId=225) org.apache.hive.jdbc.TestSSL.testConnectionMismatch (batchId=231) org.apache.hive.jdbc.TestSSL.testConnectionWrongCertCN (batchId=231) org.apache.hive.jdbc.TestSSL.testMetastoreConnectionWrongCertCN (batchId=231) {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/8385/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/8385/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-8385/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.YetusPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 24 tests failed {noformat} This message is automatically generated. 
ATTACHMENT ID: 12903805 - PreCommit-HIVE-Build > GenericUDF.getFuncName breaks with UDF Classnames less than 10 characters > - > > Key: HIVE-14759 > URL: https://issues.apache.org/jira/browse/HIVE-14759 > Project: Hive > Issue Type: Bug > Components: UDF >Affects Versions: 2.1.0 >Reporter: Clemens Valiente >Assignee: Clemens Valiente >Priority: Trivial > Attachments: HIVE-14759.1.patch, HIVE-14759.2.patch, HIVE-14759.patch > > Original Estimate: 1h > Remaining Estimate: 1h > > {code} > return getClass().getSimpleName().substring(10).toLowerCase(); > {code} > causes > {code} > java.lang.StringIndexOutOfBoundsException: String index out of range: -2 > at java.lang.String.substring(String.java:1875) > at > org.apache.hadoop.hive.ql.udf.generic.GenericUDF.getFuncName(GenericUDF.java:258) > {code} > if the Classname of my UDF is less than 10 characters. > this was probably to remove "GenericUDF" from the classname but causes issues > if the class doesn't start with it. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HIVE-14759) GenericUDF.getFuncName breaks with UDF Classnames less than 10 characters
[ https://issues.apache.org/jira/browse/HIVE-14759?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Clemens Valiente updated HIVE-14759: Attachment: HIVE-14759.2.patch fix checkstyle > GenericUDF.getFuncName breaks with UDF Classnames less than 10 characters > - > > Key: HIVE-14759 > URL: https://issues.apache.org/jira/browse/HIVE-14759 > Project: Hive > Issue Type: Bug > Components: UDF >Affects Versions: 2.1.0 >Reporter: Clemens Valiente >Assignee: Clemens Valiente >Priority: Trivial > Attachments: HIVE-14759.1.patch, HIVE-14759.2.patch, HIVE-14759.patch > > Original Estimate: 1h > Remaining Estimate: 1h > > {code} > return getClass().getSimpleName().substring(10).toLowerCase(); > {code} > causes > {code} > java.lang.StringIndexOutOfBoundsException: String index out of range: -2 > at java.lang.String.substring(String.java:1875) > at > org.apache.hadoop.hive.ql.udf.generic.GenericUDF.getFuncName(GenericUDF.java:258) > {code} > if the Classname of my UDF is less than 10 characters. > this was probably to remove "GenericUDF" from the classname but causes issues > if the class doesn't start with it. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HIVE-14759) GenericUDF.getFuncName breaks with UDF Classnames less than 10 characters
[ https://issues.apache.org/jira/browse/HIVE-14759?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16304568#comment-16304568 ] Hive QA commented on HIVE-14759: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || || || || || {color:brown} Prechecks {color} || | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Findbugs executables are not available. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} master Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 55s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 57s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 31s{color} | {color:green} master passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 52s{color} | {color:green} master passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 17s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 57s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 57s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 30s{color} | {color:red} ql: The patch generated 1 new + 10 unchanged - 0 fixed = 11 total (was 10) {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 51s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 12s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 13m 15s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Optional Tests | asflicense javac javadoc findbugs checkstyle compile | | uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux | | Build tool | maven | | Personality | /data/hiveptest/working/yetus/dev-support/hive-personality.sh | | git revision | master / 035eca3 | | Default Java | 1.8.0_111 | | checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-8385/yetus/diff-checkstyle-ql.txt | | modules | C: ql U: ql | | Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-8385/yetus.txt | | Powered by | Apache Yetushttp://yetus.apache.org | This message was automatically generated. > GenericUDF.getFuncName breaks with UDF Classnames less than 10 characters > - > > Key: HIVE-14759 > URL: https://issues.apache.org/jira/browse/HIVE-14759 > Project: Hive > Issue Type: Bug > Components: UDF >Affects Versions: 2.1.0 >Reporter: Clemens Valiente >Assignee: Clemens Valiente >Priority: Trivial > Attachments: HIVE-14759.1.patch, HIVE-14759.patch > > Original Estimate: 1h > Remaining Estimate: 1h > > {code} > return getClass().getSimpleName().substring(10).toLowerCase(); > {code} > causes > {code} > java.lang.StringIndexOutOfBoundsException: String index out of range: -2 > at java.lang.String.substring(String.java:1875) > at > org.apache.hadoop.hive.ql.udf.generic.GenericUDF.getFuncName(GenericUDF.java:258) > {code} > if the Classname of my UDF is less than 10 characters. 
> this was probably to remove "GenericUDF" from the classname but causes issues > if the class doesn't start with it. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HIVE-14759) GenericUDF.getFuncName breaks with UDF Classnames less than 10 characters
[ https://issues.apache.org/jira/browse/HIVE-14759?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Clemens Valiente updated HIVE-14759: Attachment: HIVE-14759.1.patch > GenericUDF.getFuncName breaks with UDF Classnames less than 10 characters > - > > Key: HIVE-14759 > URL: https://issues.apache.org/jira/browse/HIVE-14759 > Project: Hive > Issue Type: Bug > Components: UDF >Affects Versions: 2.1.0 >Reporter: Clemens Valiente >Assignee: Clemens Valiente >Priority: Trivial > Attachments: HIVE-14759.1.patch, HIVE-14759.patch > > Original Estimate: 1h > Remaining Estimate: 1h > > {code} > return getClass().getSimpleName().substring(10).toLowerCase(); > {code} > causes > {code} > java.lang.StringIndexOutOfBoundsException: String index out of range: -2 > at java.lang.String.substring(String.java:1875) > at > org.apache.hadoop.hive.ql.udf.generic.GenericUDF.getFuncName(GenericUDF.java:258) > {code} > if the Classname of my UDF is less than 10 characters. > this was probably to remove "GenericUDF" from the classname but causes issues > if the class doesn't start with it. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HIVE-14759) GenericUDF.getFuncName breaks with UDF Classnames less than 10 characters
[ https://issues.apache.org/jira/browse/HIVE-14759?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16304476#comment-16304476 ] Hive QA commented on HIVE-14759: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12903796/HIVE-14759.patch {color:red}ERROR:{color} -1 due to build exiting with an error Test results: https://builds.apache.org/job/PreCommit-HIVE-Build/8384/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-Build/8384/console Test logs: http://104.198.109.242/logs/PreCommit-HIVE-Build-8384/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Tests exited with: NonZeroExitCodeException Command 'bash /data/hiveptest/working/scratch/source-prep.sh' failed with exit status 1 and output '+ date '+%Y-%m-%d %T.%3N' 2017-12-27 11:35:16.373 + [[ -n /usr/lib/jvm/java-8-openjdk-amd64 ]] + export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64 + JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64 + export PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games + PATH=/usr/lib/jvm/java-8-openjdk-amd64/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games + export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m ' + ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m ' + export 'MAVEN_OPTS=-Xmx1g ' + MAVEN_OPTS='-Xmx1g ' + cd /data/hiveptest/working/ + tee /data/hiveptest/logs/PreCommit-HIVE-Build-8384/source-prep.txt + [[ false == \t\r\u\e ]] + mkdir -p maven ivy + [[ git = \s\v\n ]] + [[ git = \g\i\t ]] + [[ -z master ]] + [[ -d apache-github-source-source ]] + [[ ! -d apache-github-source-source/.git ]] + [[ ! 
-d apache-github-source-source ]] + date '+%Y-%m-%d %T.%3N' 2017-12-27 11:35:16.376 + cd apache-github-source-source + git fetch origin + git reset --hard HEAD HEAD is now at 035eca3 HIVE-18331 : Renew the Kerberos ticket used by Druid Query runner (Slim Bouguerra via Sergey Shelukhin) + git clean -f -d + git checkout master Already on 'master' Your branch is up-to-date with 'origin/master'. + git reset --hard origin/master HEAD is now at 035eca3 HIVE-18331 : Renew the Kerberos ticket used by Druid Query runner (Slim Bouguerra via Sergey Shelukhin) + git merge --ff-only origin/master Already up-to-date. + date '+%Y-%m-%d %T.%3N' 2017-12-27 11:35:21.326 + rm -rf ../yetus + mkdir ../yetus + cp -R . ../yetus + mkdir /data/hiveptest/logs/PreCommit-HIVE-Build-8384/yetus + patchCommandPath=/data/hiveptest/working/scratch/smart-apply-patch.sh + patchFilePath=/data/hiveptest/working/scratch/build.patch + [[ -f /data/hiveptest/working/scratch/build.patch ]] + chmod +x /data/hiveptest/working/scratch/smart-apply-patch.sh + /data/hiveptest/working/scratch/smart-apply-patch.sh /data/hiveptest/working/scratch/build.patch error: patch failed: ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDF.java:267 error: repository lacks the necessary blob to fall back on 3-way merge. error: ql/src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDF.java: patch does not apply error: src/java/org/apache/hadoop/hive/ql/udf/generic/GenericUDF.java: does not exist in index error: java/org/apache/hadoop/hive/ql/udf/generic/GenericUDF.java: does not exist in index The patch does not appear to apply with p0, p1, or p2 + exit 1 ' {noformat} This message is automatically generated. 
ATTACHMENT ID: 12903796 - PreCommit-HIVE-Build > GenericUDF.getFuncName breaks with UDF Classnames less than 10 characters > - > > Key: HIVE-14759 > URL: https://issues.apache.org/jira/browse/HIVE-14759 > Project: Hive > Issue Type: Bug > Components: UDF >Affects Versions: 2.1.0 >Reporter: Clemens Valiente >Assignee: Clemens Valiente >Priority: Trivial > Attachments: HIVE-14759.patch > > Original Estimate: 1h > Remaining Estimate: 1h > > {code} > return getClass().getSimpleName().substring(10).toLowerCase(); > {code} > causes > {code} > java.lang.StringIndexOutOfBoundsException: String index out of range: -2 > at java.lang.String.substring(String.java:1875) > at > org.apache.hadoop.hive.ql.udf.generic.GenericUDF.getFuncName(GenericUDF.java:258) > {code} > if the Classname of my UDF is less than 10 characters. > this was probably to remove "GenericUDF" from the classname but causes issues > if the class doesn't start with it. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
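The "does not appear to apply with p0, p1, or p2" failure above comes down to patch strip levels: a git-style diff prefixes paths with a/ and b/, so it applies with -p1, while a plain diff taken from the repository root applies with -p0. The self-contained sketch below demonstrates the behaviour with an invented file; it is not the precommit job's actual apply script.

```shell
set -e
workdir=$(mktemp -d)
cd "$workdir"
printf 'hello\n' > file.txt
# A git-style patch: paths carry the a/ and b/ prefixes.
cat > demo.patch <<'EOF'
--- a/file.txt
+++ b/file.txt
@@ -1 +1 @@
-hello
+world
EOF
patch -p1 < demo.patch   # -p1 strips the leading a/ and b/ path components
cat file.txt             # now contains "world"
```

A patch generated with paths rooted differently (e.g. `src/java/...` instead of `ql/src/java/...`, as in the error output above) fails at every strip level, which is why regenerating the patch against current master fixes this class of precommit failure.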
[jira] [Updated] (HIVE-14759) GenericUDF.getFuncName breaks with UDF Classnames less than 10 characters
[ https://issues.apache.org/jira/browse/HIVE-14759?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Clemens Valiente updated HIVE-14759: Status: Patch Available (was: Open) > GenericUDF.getFuncName breaks with UDF Classnames less than 10 characters > - > > Key: HIVE-14759 > URL: https://issues.apache.org/jira/browse/HIVE-14759 > Project: Hive > Issue Type: Bug > Components: UDF >Affects Versions: 2.1.0 >Reporter: Clemens Valiente >Assignee: Clemens Valiente >Priority: Trivial > Attachments: HIVE-14759.patch > > Original Estimate: 1h > Remaining Estimate: 1h > > {code} > return getClass().getSimpleName().substring(10).toLowerCase(); > {code} > causes > {code} > java.lang.StringIndexOutOfBoundsException: String index out of range: -2 > at java.lang.String.substring(String.java:1875) > at > org.apache.hadoop.hive.ql.udf.generic.GenericUDF.getFuncName(GenericUDF.java:258) > {code} > if the Classname of my UDF is less than 10 characters. > this was probably to remove "GenericUDF" from the classname but causes issues > if the class doesn't start with it. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Updated] (HIVE-14759) GenericUDF.getFuncName breaks with UDF Classnames less than 10 characters
[ https://issues.apache.org/jira/browse/HIVE-14759?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Clemens Valiente updated HIVE-14759: Attachment: HIVE-14759.patch > GenericUDF.getFuncName breaks with UDF Classnames less than 10 characters > - > > Key: HIVE-14759 > URL: https://issues.apache.org/jira/browse/HIVE-14759 > Project: Hive > Issue Type: Bug > Components: UDF >Affects Versions: 2.1.0 >Reporter: Clemens Valiente >Assignee: Clemens Valiente >Priority: Trivial > Attachments: HIVE-14759.patch > > Original Estimate: 1h > Remaining Estimate: 1h > > {code} > return getClass().getSimpleName().substring(10).toLowerCase(); > {code} > causes > {code} > java.lang.StringIndexOutOfBoundsException: String index out of range: -2 > at java.lang.String.substring(String.java:1875) > at > org.apache.hadoop.hive.ql.udf.generic.GenericUDF.getFuncName(GenericUDF.java:258) > {code} > if the Classname of my UDF is less than 10 characters. > this was probably to remove "GenericUDF" from the classname but causes issues > if the class doesn't start with it. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HIVE-18301) Investigate to enable MapInput cache in Hive on Spark
[ https://issues.apache.org/jira/browse/HIVE-18301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16304383#comment-16304383 ] Rui Li commented on HIVE-18301: --- My understanding is that if the HadoopRDD is cached, the records are not produced by the record reader and the IOContext is not populated. Therefore the information in the IOContext, e.g. the input path, will be unavailable. This may cause problems because some operators need to take certain actions when the input file changes -- {{Operator::cleanUpInputFileChanged}}. So basically my point is that we have to figure out the scenarios where the IOContext is necessary, then decide whether we should disable caching in such cases. > Investigate to enable MapInput cache in Hive on Spark > - > > Key: HIVE-18301 > URL: https://issues.apache.org/jira/browse/HIVE-18301 > Project: Hive > Issue Type: Bug >Reporter: liyunzhang >Assignee: liyunzhang > > An IOContext problem was previously found in MapTran when Spark RDD cache is enabled > (HIVE-8920), > so we disabled RDD cache in MapTran at > [SparkPlanGenerator|https://github.com/kellyzly/hive/blob/master/ql/src/java/org/apache/hadoop/hive/ql/exec/spark/SparkPlanGenerator.java#L202]. 
> The problem is that the IOContext does not seem to be initialized correctly in the Spark yarn > client/cluster mode, which causes an exception like > {code} > Job aborted due to stage failure: Task 93 in stage 0.0 failed 4 times, most > recent failure: Lost task 93.3 in stage 0.0 (TID 616, bdpe48): > java.lang.RuntimeException: Error processing row: > java.lang.NullPointerException > at > org.apache.hadoop.hive.ql.exec.spark.SparkMapRecordHandler.processRow(SparkMapRecordHandler.java:165) > at > org.apache.hadoop.hive.ql.exec.spark.HiveMapFunctionResultList.processNextRecord(HiveMapFunctionResultList.java:48) > at > org.apache.hadoop.hive.ql.exec.spark.HiveMapFunctionResultList.processNextRecord(HiveMapFunctionResultList.java:27) > at > org.apache.hadoop.hive.ql.exec.spark.HiveBaseFunctionResultList.hasNext(HiveBaseFunctionResultList.java:85) > at > scala.collection.convert.Wrappers$JIteratorWrapper.hasNext(Wrappers.scala:42) > at > org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:125) > at > org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:79) > at > org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:47) > at org.apache.spark.scheduler.Task.run(Task.scala:85) > at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) > at java.lang.Thread.run(Thread.java:745) > Caused by: java.lang.NullPointerException > at > org.apache.hadoop.hive.ql.exec.AbstractMapOperator.getNominalPath(AbstractMapOperator.java:101) > at > org.apache.hadoop.hive.ql.exec.MapOperator.cleanUpInputFileChangedOp(MapOperator.java:516) > at > org.apache.hadoop.hive.ql.exec.Operator.cleanUpInputFileChanged(Operator.java:1187) > at > org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:546) > at > 
org.apache.hadoop.hive.ql.exec.spark.SparkMapRecordHandler.processRow(SparkMapRecordHandler.java:152) > ... 12 more > Driver stacktrace: > {code} > in yarn client/cluster mode, sometimes > [ExecMapperContext#currentInputPath|https://github.com/kellyzly/hive/blob/master/ql/src/java/org/apache/hadoop/hive/ql/exec/mr/ExecMapperContext.java#L109] > is null when RDD cache is enabled. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HIVE-18301) Investigate to enable MapInput cache in Hive on Spark
[ https://issues.apache.org/jira/browse/HIVE-18301?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16304359#comment-16304359 ] liyunzhang commented on HIVE-18301: --- [~lirui]: {quote} My understanding is these information will be lost if the HadoopRDD is cached. {quote} Do you mean that the HadoopRDD will not store the Spark plan? If so, Hive actually stores the Spark plan in a file on HDFS and serializes/deserializes it from that file; see hive/ql/src/java/org/apache/hadoop/hive/ql/exec/Utilities.java#getBaseWork. If not, please explain in more detail. My question here is: is there any other reason to disable MapInput#cache besides avoiding multi-insert cases where there is a union operator after {{from}}? {code} from (select * from dec union all select * from dec2) s insert overwrite table dec3 select s.name, sum(s.value) group by s.name insert overwrite table dec4 select s.name, s.value order by s.value; {code} If there is no other reason to disable MapInput#cache, I guess we can enable MapInput cache for HIVE-17486, because HIVE-17486 merges the same single table, and there are few cases like the above ( from (select A union B) ). > Investigate to enable MapInput cache in Hive on Spark > - > > Key: HIVE-18301 > URL: https://issues.apache.org/jira/browse/HIVE-18301 > Project: Hive > Issue Type: Bug >Reporter: liyunzhang >Assignee: liyunzhang > > An IOContext problem was previously found in MapTran when Spark RDD cache is enabled > (HIVE-8920), > so we disabled RDD cache in MapTran at > [SparkPlanGenerator|https://github.com/kellyzly/hive/blob/master/ql/src/java/org/apache/hadoop/hive/ql/exec/spark/SparkPlanGenerator.java#L202]. 
> The problem is that the IOContext does not seem to be initialized correctly in the Spark yarn > client/cluster mode, which causes an exception like > {code} > Job aborted due to stage failure: Task 93 in stage 0.0 failed 4 times, most > recent failure: Lost task 93.3 in stage 0.0 (TID 616, bdpe48): > java.lang.RuntimeException: Error processing row: > java.lang.NullPointerException > at > org.apache.hadoop.hive.ql.exec.spark.SparkMapRecordHandler.processRow(SparkMapRecordHandler.java:165) > at > org.apache.hadoop.hive.ql.exec.spark.HiveMapFunctionResultList.processNextRecord(HiveMapFunctionResultList.java:48) > at > org.apache.hadoop.hive.ql.exec.spark.HiveMapFunctionResultList.processNextRecord(HiveMapFunctionResultList.java:27) > at > org.apache.hadoop.hive.ql.exec.spark.HiveBaseFunctionResultList.hasNext(HiveBaseFunctionResultList.java:85) > at > scala.collection.convert.Wrappers$JIteratorWrapper.hasNext(Wrappers.scala:42) > at > org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:125) > at > org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:79) > at > org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:47) > at org.apache.spark.scheduler.Task.run(Task.scala:85) > at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) > at java.lang.Thread.run(Thread.java:745) > Caused by: java.lang.NullPointerException > at > org.apache.hadoop.hive.ql.exec.AbstractMapOperator.getNominalPath(AbstractMapOperator.java:101) > at > org.apache.hadoop.hive.ql.exec.MapOperator.cleanUpInputFileChangedOp(MapOperator.java:516) > at > org.apache.hadoop.hive.ql.exec.Operator.cleanUpInputFileChanged(Operator.java:1187) > at > org.apache.hadoop.hive.ql.exec.MapOperator.process(MapOperator.java:546) > at > 
org.apache.hadoop.hive.ql.exec.spark.SparkMapRecordHandler.processRow(SparkMapRecordHandler.java:152) > ... 12 more > Driver stacktrace: > {code} > in yarn client/cluster mode, sometimes > [ExecMapperContext#currentInputPath|https://github.com/kellyzly/hive/blob/master/ql/src/java/org/apache/hadoop/hive/ql/exec/mr/ExecMapperContext.java#L109] > is null when RDD cache is enabled. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
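The NullPointerException in the trace above can be modeled in miniature: when a split is served from the RDD cache, the record reader never runs, so the current input path in the IOContext stays null. The sketch below uses hypothetical names (it is not Hive's actual AbstractMapOperator code) to show the failure mode and a defensive guard:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative model of AbstractMapOperator#getNominalPath. The method
// signature and the guard are assumptions that mirror only the failure
// mode discussed above, not the real Hive implementation.
public class NominalPathDemo {
    public static String getNominalPath(String currentInputPath, Map<String, String> pathToAlias) {
        if (currentInputPath == null) {
            // In the real trace this surfaces as a bare NullPointerException:
            // a cached HadoopRDD skips the record reader, leaving the path unset.
            throw new IllegalStateException(
                "IOContext has no input path -- was this split served from the RDD cache?");
        }
        return pathToAlias.getOrDefault(currentInputPath, currentInputPath);
    }

    public static void main(String[] args) {
        Map<String, String> aliases = new HashMap<>();
        aliases.put("/warehouse/t1/part-0", "t1");
        System.out.println(getNominalPath("/warehouse/t1/part-0", aliases)); // prints "t1"
    }
}
```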
[jira] [Issue Comment Deleted] (HIVE-18341) Add repl load support for adding "raw" namespace for TDE with same encryption keys
[ https://issues.apache.org/jira/browse/HIVE-18341?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shwetha G S updated HIVE-18341: --- Comment: was deleted (was: If distcp uses the raw namespace for replication between source and target, and the source and target have different encryption keys, distcp should succeed since it just reads and writes raw bytes. But reads from the target will probably fail or return garbled data) > Add repl load support for adding "raw" namespace for TDE with same encryption > keys > -- > > Key: HIVE-18341 > URL: https://issues.apache.org/jira/browse/HIVE-18341 > Project: Hive > Issue Type: Bug >Affects Versions: 3.0.0 >Reporter: anishek >Assignee: anishek > Fix For: 3.0.0 > > Attachments: HIVE-18341.0.patch > > > https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hdfs/TransparentEncryption.html#Running_as_the_superuser > "a new virtual path prefix, /.reserved/raw/, that gives superusers direct > access to the underlying block data in the filesystem. This allows superusers > to distcp data without needing access to encryption keys, and also > avoids the overhead of decrypting and re-encrypting data." > We need to introduce a new option in the "Repl Load" command that will prefix the > file paths being copied by distcp with this "/.reserved/raw/" namespace. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Commented] (HIVE-18341) Add repl load support for adding "raw" namespace for TDE with same encryption keys
[ https://issues.apache.org/jira/browse/HIVE-18341?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16304332#comment-16304332 ] Shwetha G S commented on HIVE-18341: If distcp uses the raw namespace for replication between source and target, and the source and target have different encryption keys, distcp should succeed since it just reads and writes raw bytes. But reads from the target will probably fail or return garbled data > Add repl load support for adding "raw" namespace for TDE with same encryption > keys > -- > > Key: HIVE-18341 > URL: https://issues.apache.org/jira/browse/HIVE-18341 > Project: Hive > Issue Type: Bug >Affects Versions: 3.0.0 >Reporter: anishek >Assignee: anishek > Fix For: 3.0.0 > > Attachments: HIVE-18341.0.patch > > > https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hdfs/TransparentEncryption.html#Running_as_the_superuser > "a new virtual path prefix, /.reserved/raw/, that gives superusers direct > access to the underlying block data in the filesystem. This allows superusers > to distcp data without needing access to encryption keys, and also > avoids the overhead of decrypting and re-encrypting data." > We need to introduce a new option in the "Repl Load" command that will prefix the > file paths being copied by distcp with this "/.reserved/raw/" namespace. -- This message was sent by Atlassian JIRA (v6.4.14#64029)
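The path rewriting the description asks for amounts to prepending the "/.reserved/raw" prefix to each absolute path before handing it to distcp. A minimal sketch follows; the helper class, method name, and validation are hypothetical and not part of the HIVE-18341 patch:

```java
// Hypothetical helper illustrating the "/.reserved/raw/" rewrite described
// above. Only the prefix itself comes from the Hadoop TDE documentation
// quoted in the issue; everything else is an assumption.
public class RawNamespace {
    private static final String RAW_PREFIX = "/.reserved/raw";

    public static String toRawPath(String absolutePath) {
        if (!absolutePath.startsWith("/")) {
            throw new IllegalArgumentException("expected an absolute path: " + absolutePath);
        }
        // Superusers reading under this prefix get the raw (still-encrypted)
        // bytes, so distcp can copy data without access to encryption keys.
        return RAW_PREFIX + absolutePath;
    }

    public static void main(String[] args) {
        System.out.println(toRawPath("/warehouse/db1/t1"));
        // prints "/.reserved/raw/warehouse/db1/t1"
    }
}
```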