Re: Result of the TPC-DS benchmark on Hive master branch
Hey All!

I think this might be caused by a recent feature addition which became overzealous in some situations and results in an incorrect plan in some cases. I've already fixed the issue, and it will go in as part of one of the follow-ups. You can disable the feature by changing "hive.optimize.shared.work.dppunion".

cheers,
Zoltan

On November 5, 2020 5:15:15 AM GMT+01:00, Mustafa IMAN wrote:
>Hi Sungwoo,
>There is https://issues.apache.org/jira/browse/HIVE-23975 causing a
>regression in runtime. There is a ticket open to fix it (
>https://issues.apache.org/jira/browse/HIVE-24139) which is still in
>progress. You might want to revert 23975 before trying.
>
>[...]
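For anyone trying the workaround above: Hive optimizer flags of this kind are normally toggled per session with SET. A minimal sketch (the assumption that "false" is the value which disables this particular feature is mine, not stated in the thread):

```sql
-- Session-level toggle for the shared-work DPP-union feature
-- (assumption: it defaults to true and 'false' disables it).
SET hive.optimize.shared.work.dppunion=false;
```

The same key could also be set cluster-wide in hive-site.xml if the regression needs to be worked around for all sessions.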
Re: Result of the TPC-DS benchmark on Hive master branch
Hi Sungwoo,

There is https://issues.apache.org/jira/browse/HIVE-23975 causing a regression in runtime. There is a ticket open to fix it (https://issues.apache.org/jira/browse/HIVE-24139) which is still in progress. You might want to revert 23975 before trying.

On Wed, Nov 4, 2020 at 2:55 PM Stamatis Zampetakis wrote:
> [...]
Does Hive need to rely on Hadoop to deploy a remote Metastore?
Hey,

Does Hive need to rely on Hadoop to deploy a remote Metastore?

Thank you,
Re: Result of the TPC-DS benchmark on Hive master branch
Hi Sungwoo,

Personally, I would also be interested to see the results of these experiments if they are available somewhere.

I didn't understand if the queries are failing at runtime or at compile time. Are the above errors the only ones that you're getting?

If you can reproduce the problem with a smaller dataset, then I think the best would be to create unit tests and JIRAs for each query separately.

It may not be worth going through the commits to find those that caused the regression, because it will be time-consuming and you may bump into something that is not trivial to revert.

Best,
Stamatis

On Wed, Nov 4, 2020 at 7:24 PM Sungwoo Park wrote:
> [...]
Result of the TPC-DS benchmark on Hive master branch
Hello,

I have tested a recent commit of the master branch using the TPC-DS benchmark. I used Hive on Tez (not Hive-LLAP). The way I tested is:

1) create a database consisting of external tables from a 100GB TPC-DS text dataset
2) create a database consisting of ORC tables from the previous database
3) compute column statistics
4) run TPC-DS queries and check the results

Previously we tested the commit 5f47808c02816edcd4c323dfa25194536f3f20fd (HIVE-23114: Insert overwrite with dynamic partitioning is not working correctly with direct insert, Fri Apr 10), and all queries ran okay.

This time I used the following commits. I made a few changes to pom.xml of both Hive and Tez, but these changes should not affect the result of running queries.

1) Hive, master, 96aacdc50043fa442c2277b7629812e69241a507 (Tue Nov 3), HIVE-24314: compactor.Cleaner should not set state mark cleaned if it didn't remove any files
2) Tez, 0.10.0, 22fec6c0ecc7ebe6f6f28800935cc6f69794dad5 (Thu Oct 8), CHANGES.txt updated with TEZ-4238

The result is that 14 queries (out of 99 queries) fail, and a query fails during compilation for one of the following two reasons.

1)
org.apache.hive.service.cli.HiveSQLException: Error while compiling statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.tez.TezTask. Edge [Map 12 : org.apache.hadoop.hive.ql.exec.tez.MapTezProcessor] -> [Map 7 : org.apache.hadoop.hive.ql.exec.tez.MapTezProcessor] ({ BROADCAST : org.apache.tez.runtime.library.input.UnorderedKVInput >> PERSISTED >> org.apache.tez.runtime.library.output.UnorderedKVOutput >> NullEdgeManager }) already defined!
  at org.apache.hive.service.cli.operation.Operation.toSQLException(Operation.java:365)
  at org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:241)
  at org.apache.hive.service.cli.operation.SQLOperation.access$500(SQLOperation.java:88)
  at org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork$1.run(SQLOperation.java:325)
  at java.security.AccessController.doPrivileged(Native Method)
  at javax.security.auth.Subject.doAs(Subject.java:422)
  at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1682)
  at org.apache.hive.service.cli.operation.SQLOperation$BackgroundWork.run(SQLOperation.java:343)
  at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
  at java.util.concurrent.FutureTask.run(FutureTask.java:266)
  at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
  at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
  at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.IllegalArgumentException: Edge [Map 12 : org.apache.hadoop.hive.ql.exec.tez.MapTezProcessor] -> [Map 7 : org.apache.hadoop.hive.ql.exec.tez.MapTezProcessor] ({ BROADCAST : org.apache.tez.runtime.library.input.UnorderedKVInput >> PERSISTED >> org.apache.tez.runtime.library.output.UnorderedKVOutput >> NullEdgeManager }) already defined!
  at org.apache.tez.dag.api.DAG.addEdge(DAG.java:297)
  at org.apache.hadoop.hive.ql.exec.tez.TezTask.build(TezTask.java:519)
  at org.apache.hadoop.hive.ql.exec.tez.TezTask.execute(TezTask.java:213)
  at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:213)
  at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:105)
  at org.apache.hadoop.hive.ql.Executor.launchTask(Executor.java:361)
  at org.apache.hadoop.hive.ql.Executor.launchTasks(Executor.java:334)
  at org.apache.hadoop.hive.ql.Executor.runTasks(Executor.java:245)
  at org.apache.hadoop.hive.ql.Executor.execute(Executor.java:108)
  at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:326)
  at org.apache.hadoop.hive.ql.Driver.run(Driver.java:149)
  at org.apache.hadoop.hive.ql.Driver.run(Driver.java:144)
  at org.apache.hadoop.hive.ql.reexec.ReExecDriver.run(ReExecDriver.java:164)
  at org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:228)
  ... 11 more

2)
Caused by: java.lang.NullPointerException
  at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genSelectPlan(SemanticAnalyzer.java:4491)
  at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genSelectPlan(SemanticAnalyzer.java:4474)
  at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPostGroupByBodyPlan(SemanticAnalyzer.java:10940)
  at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genBodyPlan(SemanticAnalyzer.java:10882)
  at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPlan(SemanticAnalyzer.java:11776)
  at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPlan(SemanticAnalyzer.java:11633)
  at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPlan(SemanticAnalyzer.java:11660)
  at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPlan(SemanticAnalyzer.java:11633)
  at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPlan(SemanticAnalyzer.java:11660)
  at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.genPlan(SemanticAnalyzer.java:11646)
  at
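The four test steps listed above can be sketched in HiveQL roughly as follows. This is a sketch of my own, not the actual benchmark scripts: database, table, and path names are hypothetical, the column list is elided, and the real run repeats this for all TPC-DS tables:

```sql
-- 1) external text tables over the generated 100GB dataset
CREATE DATABASE tpcds_text;
CREATE EXTERNAL TABLE tpcds_text.store_sales ( ... )
  ROW FORMAT DELIMITED FIELDS TERMINATED BY '|'
  LOCATION '/tmp/tpcds/store_sales';

-- 2) ORC tables populated from the text database
CREATE DATABASE tpcds_orc;
CREATE TABLE tpcds_orc.store_sales STORED AS ORC
  AS SELECT * FROM tpcds_text.store_sales;

-- 3) column statistics
ANALYZE TABLE tpcds_orc.store_sales COMPUTE STATISTICS FOR COLUMNS;

-- 4) run the 99 TPC-DS queries against tpcds_orc and check the results
```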
[jira] [Created] (HIVE-24359) Hive Compaction hangs because of doAs when worker set to HS2
Chiran Ravani created HIVE-24359:
Summary: Hive Compaction hangs because of doAs when worker set to HS2
Key: HIVE-24359
URL: https://issues.apache.org/jira/browse/HIVE-24359
Project: Hive
Issue Type: Bug
Components: HiveServer2, Transactions
Reporter: Chiran Ravani

When creating a managed table and inserting data using Impala, with the compaction worker set to HiveServer2 in a secured environment (Kerberized cluster), the worker thread hangs indefinitely, expecting the user to provide Kerberos credentials from STDIN.

The problem appears to be that no login context is sent from HS2 to HMS as part of QueryCompactor, and the HS2 JVM has the property javax.security.auth.useSubjectCredsOnly set to false, which causes it to prompt for logins via stdin. However, setting it to true also does not help, as the context does not seem to be passed in any case.

Below is what is observed in the HS2 jstack. The thread is waiting on stdin in "com.sun.security.auth.module.Krb5LoginModule.promptForName":

{code}
"c570-node2.abc.host.com-44_executor" #47 daemon prio=1 os_prio=0 tid=0x01506000 nid=0x1348 runnable [0x7f1beea95000]
   java.lang.Thread.State: RUNNABLE
	at java.io.FileInputStream.readBytes(Native Method)
	at java.io.FileInputStream.read(FileInputStream.java:255)
	at java.io.BufferedInputStream.read1(BufferedInputStream.java:284)
	at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
	- locked <0x9fa38c90> (a java.io.BufferedInputStream)
	at sun.nio.cs.StreamDecoder.readBytes(StreamDecoder.java:284)
	at sun.nio.cs.StreamDecoder.implRead(StreamDecoder.java:326)
	at sun.nio.cs.StreamDecoder.read(StreamDecoder.java:178)
	- locked <0x8c7d5010> (a java.io.InputStreamReader)
	at java.io.InputStreamReader.read(InputStreamReader.java:184)
	at java.io.BufferedReader.fill(BufferedReader.java:161)
	at java.io.BufferedReader.readLine(BufferedReader.java:324)
	- locked <0x8c7d5010> (a java.io.InputStreamReader)
	at java.io.BufferedReader.readLine(BufferedReader.java:389)
	at com.sun.security.auth.callback.TextCallbackHandler.readLine(TextCallbackHandler.java:153)
	at com.sun.security.auth.callback.TextCallbackHandler.handle(TextCallbackHandler.java:120)
	at com.sun.security.auth.module.Krb5LoginModule.promptForName(Krb5LoginModule.java:862)
	at com.sun.security.auth.module.Krb5LoginModule.attemptAuthentication(Krb5LoginModule.java:708)
	at com.sun.security.auth.module.Krb5LoginModule.login(Krb5LoginModule.java:617)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at javax.security.auth.login.LoginContext.invoke(LoginContext.java:755)
	at javax.security.auth.login.LoginContext.access$000(LoginContext.java:195)
	at javax.security.auth.login.LoginContext$4.run(LoginContext.java:682)
	at javax.security.auth.login.LoginContext$4.run(LoginContext.java:680)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.login.LoginContext.invokePriv(LoginContext.java:680)
	at javax.security.auth.login.LoginContext.login(LoginContext.java:587)
	at sun.security.jgss.GSSUtil.login(GSSUtil.java:258)
	at sun.security.jgss.krb5.Krb5Util.getInitialTicket(Krb5Util.java:175)
	at sun.security.jgss.krb5.Krb5InitCredential$1.run(Krb5InitCredential.java:341)
	at sun.security.jgss.krb5.Krb5InitCredential$1.run(Krb5InitCredential.java:337)
	at java.security.AccessController.doPrivileged(Native Method)
	at sun.security.jgss.krb5.Krb5InitCredential.getTgt(Krb5InitCredential.java:336)
	at sun.security.jgss.krb5.Krb5InitCredential.getInstance(Krb5InitCredential.java:146)
	at sun.security.jgss.krb5.Krb5MechFactory.getCredentialElement(Krb5MechFactory.java:122)
	at sun.security.jgss.krb5.Krb5MechFactory.getMechanismContext(Krb5MechFactory.java:189)
	at sun.security.jgss.GSSManagerImpl.getMechanismContext(GSSManagerImpl.java:224)
	at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:212)
	at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:179)
	at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:192)
	at org.apache.thrift.transport.TSaslClientTransport.handleSaslStartMessage(TSaslClientTransport.java:94)
	at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:271)
	at
[jira] [Created] (HIVE-24358) Some tasks should set exception on failures
Zhihua Deng created HIVE-24358:
Summary: Some tasks should set exception on failures
Key: HIVE-24358
URL: https://issues.apache.org/jira/browse/HIVE-24358
Project: Hive
Issue Type: Improvement
Components: HiveServer2
Reporter: Zhihua Deng

Some tasks miss setting the exception on failures. This information is useful both for Beeline users figuring out the problem and for the configured failure hooks.

--
This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Created] (HIVE-24357) Exchange SWO table/algorithm strategy
Zoltan Haindrich created HIVE-24357:
Summary: Exchange SWO table/algorithm strategy
Key: HIVE-24357
URL: https://issues.apache.org/jira/browse/HIVE-24357
Project: Hive
Issue Type: Improvement
Reporter: Zoltan Haindrich

SWO right now runs like:

{code}
for every strategy s:
  for every table t:
    try s for t
{code}

This means an earlier strategy may leave a more entangled operator tree behind, in case it is able to merge for a less prioritized table.

It would probably make more sense to do:

{code}
for every table t:
  for every strategy s:
    try s for t
{code}
[jira] [Created] (HIVE-24356) EXPLAIN with collect_list() throw IllegalArgumentException
Mulan created HIVE-24356:
Summary: EXPLAIN with collect_list() throw IllegalArgumentException
Key: HIVE-24356
URL: https://issues.apache.org/jira/browse/HIVE-24356
Project: Hive
Issue Type: Bug
Affects Versions: 2.3.6
Reporter: Mulan

{quote}
EXPLAIN
with t2 as (
  select array(1,2) as c1
  union all
  select array(2,3) as c1
)
select collect_list(c1) from t2;
{quote}

This fails with:

FAILED: IllegalArgumentException Size requested for unknown type: java.util.Collection

Running the same query without EXPLAIN works fine.
[jira] [Created] (HIVE-24355) Partition doesn't have hashCode/equals
Zoltan Haindrich created HIVE-24355:
Summary: Partition doesn't have hashCode/equals
Key: HIVE-24355
URL: https://issues.apache.org/jira/browse/HIVE-24355
Project: Hive
Issue Type: Bug
Reporter: Zoltan Haindrich
Assignee: Zoltan Haindrich

This might cause some issues. It also prevents the SWO from merging TS operators which have partitions in the "pruned list".
[jira] [Created] (HIVE-24354) ColumnVector should declare abstract convenience methods for getting values
László Bodor created HIVE-24354:
Summary: ColumnVector should declare abstract convenience methods for getting values
Key: HIVE-24354
URL: https://issues.apache.org/jira/browse/HIVE-24354
Project: Hive
Issue Type: Improvement
Reporter: László Bodor