[jira] [Commented] (HIVE-14129) Execute move tasks in parallel
[ https://issues.apache.org/jira/browse/HIVE-14129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15369432#comment-15369432 ]

Hive QA commented on HIVE-14129:

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12816995/HIVE-14129.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/451/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/451/console
Test logs: http://ec2-204-236-174-241.us-west-1.compute.amazonaws.com/logs/PreCommit-HIVE-MASTER-Build-451/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Tests exited with: NonZeroExitCodeException
Command 'bash /data/hive-ptest/working/scratch/source-prep.sh' failed with exit status 1 and output '+ [[ -n /usr/java/jdk1.8.0_25 ]]
+ export JAVA_HOME=/usr/java/jdk1.8.0_25
+ JAVA_HOME=/usr/java/jdk1.8.0_25
+ export PATH=/usr/java/jdk1.8.0_25/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ PATH=/usr/java/jdk1.8.0_25/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
+ export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m '
+ ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m '
+ export 'M2_OPTS=-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost -Dhttp.proxyPort=3128'
+ M2_OPTS='-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost -Dhttp.proxyPort=3128'
+ cd /data/hive-ptest/working/
+ tee /data/hive-ptest/logs/PreCommit-HIVE-MASTER-Build-451/source-prep.txt
+ [[ false == \t\r\u\e ]]
+ mkdir -p maven ivy
+ [[ git = \s\v\n ]]
+ [[ git = \g\i\t ]]
+ [[ -z master ]]
+ [[ -d apache-github-source-source ]]
+ [[ ! -d apache-github-source-source/.git ]]
+ [[ ! -d apache-github-source-source ]]
+ cd apache-github-source-source
+ git fetch origin
+ git reset --hard HEAD
HEAD is now at 82b84ac HIVE-14173: NPE was thrown after enabling directsql in the middle of session (Chaoyu Tang, reviewed by Sergey Shelukhin)
+ git clean -f -d
+ git checkout master
Already on 'master'
Your branch is up-to-date with 'origin/master'.
+ git reset --hard origin/master
HEAD is now at 82b84ac HIVE-14173: NPE was thrown after enabling directsql in the middle of session (Chaoyu Tang, reviewed by Sergey Shelukhin)
+ git merge --ff-only origin/master
Already up-to-date.
+ git gc
+ patchCommandPath=/data/hive-ptest/working/scratch/smart-apply-patch.sh
+ patchFilePath=/data/hive-ptest/working/scratch/build.patch
+ [[ -f /data/hive-ptest/working/scratch/build.patch ]]
+ chmod +x /data/hive-ptest/working/scratch/smart-apply-patch.sh
+ /data/hive-ptest/working/scratch/smart-apply-patch.sh /data/hive-ptest/working/scratch/build.patch
The patch does not appear to apply with p0, p1, or p2
+ exit 1
'
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12816995 - PreCommit-HIVE-MASTER-Build

> Execute move tasks in parallel
> --
>
> Key: HIVE-14129
> URL: https://issues.apache.org/jira/browse/HIVE-14129
> Project: Hive
> Issue Type: Improvement
> Components: Query Processor
> Reporter: Ashutosh Chauhan
> Assignee: Ashutosh Chauhan
> Attachments: HIVE-14129.patch, HIVE-14129.patch
>

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
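The improvement tracked above is to run the plan's move tasks concurrently rather than one after another. A minimal sketch of the idea using a fixed thread pool; the `moveFile` helper and file names are illustrative stand-ins, not Hive's actual MoveTask API:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelMoves {
    // Hypothetical stand-in for a Hive MoveTask: "moves" one file and reports it.
    static String moveFile(String name) {
        return name + ":moved";
    }

    public static void main(String[] args) throws Exception {
        List<String> files = List.of("part-00000", "part-00001", "part-00002");
        ExecutorService pool = Executors.newFixedThreadPool(3);
        List<Future<String>> results = new ArrayList<>();
        for (String f : files) {
            // Each independent move is submitted to the pool instead of run serially.
            results.add(pool.submit(() -> moveFile(f)));
        }
        for (Future<String> r : results) {
            System.out.println(r.get());  // block until every move completes
        }
        pool.shutdown();
    }
}
```

Collecting the futures and calling `get()` on each preserves the serial code's "all moves finished before the query returns" guarantee while letting the moves themselves overlap.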
[jira] [Commented] (HIVE-14175) Fix creating buckets without scheme information
[ https://issues.apache.org/jira/browse/HIVE-14175?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15369431#comment-15369431 ]

Hive QA commented on HIVE-14175:

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12816992/HIVE-14175.patch

{color:red}ERROR:{color} -1 due to build exiting with an error

Test results: https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/450/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/450/console
Test logs: http://ec2-204-236-174-241.us-west-1.compute.amazonaws.com/logs/PreCommit-HIVE-MASTER-Build-450/

Messages:
{noformat}
This message was trimmed, see log for full details

main:
    [mkdir] Created dir: /data/hive-ptest/working/apache-github-source-source/ql/target/generated-sources/java/org/apache/hadoop/hive/ql/exec/vector/expressions/gen
    [mkdir] Created dir: /data/hive-ptest/working/apache-github-source-source/ql/target/generated-sources/java/org/apache/hadoop/hive/ql/exec/vector/expressions/aggregates/gen
    [mkdir] Created dir: /data/hive-ptest/working/apache-github-source-source/ql/target/generated-test-sources/java/org/apache/hadoop/hive/ql/exec/vector/expressions/gen
Generating vector expression code
Generating vector expression test code
[INFO] Executed tasks
[INFO]
[INFO] --- build-helper-maven-plugin:1.8:add-source (add-source) @ hive-exec ---
[INFO] Source directory: /data/hive-ptest/working/apache-github-source-source/ql/src/gen/thrift/gen-javabean added.
[INFO] Source directory: /data/hive-ptest/working/apache-github-source-source/ql/target/generated-sources/java added.
[INFO]
[INFO] --- antlr3-maven-plugin:3.4:antlr (default) @ hive-exec ---
[INFO] ANTLR: Processing source directory /data/hive-ptest/working/apache-github-source-source/ql/src/java
ANTLR Parser Generator Version 3.4
org/apache/hadoop/hive/ql/parse/HiveLexer.g
org/apache/hadoop/hive/ql/parse/HiveParser.g
[INFO]
[INFO] --- maven-remote-resources-plugin:1.5:process (default) @ hive-exec ---
[INFO]
[INFO] --- maven-resources-plugin:2.6:resources (default-resources) @ hive-exec ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] Copying 4 resources
[INFO] Copying 3 resources
[INFO]
[INFO] --- maven-antrun-plugin:1.7:run (define-classpath) @ hive-exec ---
[INFO] Executing tasks

main:
[INFO] Executed tasks
[INFO]
[INFO] --- maven-compiler-plugin:3.1:compile (default-compile) @ hive-exec ---
[INFO] Compiling 2649 source files to /data/hive-ptest/working/apache-github-source-source/ql/target/classes
[WARNING] /data/hive-ptest/working/apache-github-source-source/ql/src/java/org/apache/hadoop/hive/ql/exec/HashTableDummyOperator.java: Some input files use or override a deprecated API.
[WARNING] /data/hive-ptest/working/apache-github-source-source/ql/src/java/org/apache/hadoop/hive/ql/exec/HashTableDummyOperator.java: Recompile with -Xlint:deprecation for details.
[WARNING] /data/hive-ptest/working/apache-github-source-source/ql/src/java/org/apache/hadoop/hive/ql/exec/RowSchema.java: Some input files use unchecked or unsafe operations.
[WARNING] /data/hive-ptest/working/apache-github-source-source/ql/src/java/org/apache/hadoop/hive/ql/exec/RowSchema.java: Recompile with -Xlint:unchecked for details.
[INFO]
[INFO] --- build-helper-maven-plugin:1.8:add-test-source (add-test-sources) @ hive-exec ---
[INFO] Test Source directory: /data/hive-ptest/working/apache-github-source-source/ql/target/generated-test-sources/java added.
[INFO]
[INFO] --- maven-resources-plugin:2.6:testResources (default-testResources) @ hive-exec ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] Copying 5 resources
[INFO] Copying 3 resources
[INFO]
[INFO] --- maven-antrun-plugin:1.7:run (setup-test-dirs) @ hive-exec ---
[INFO] Executing tasks

main:
    [mkdir] Created dir: /data/hive-ptest/working/apache-github-source-source/ql/target/tmp
    [mkdir] Created dir: /data/hive-ptest/working/apache-github-source-source/ql/target/warehouse
    [mkdir] Created dir: /data/hive-ptest/working/apache-github-source-source/ql/target/tmp/conf
     [copy] Copying 15 files to /data/hive-ptest/working/apache-github-source-source/ql/target/tmp/conf
[INFO] Executed tasks
[INFO]
[INFO] --- maven-compiler-plugin:3.1:testCompile (default-testCompile) @ hive-exec ---
[INFO] Compiling 292 source files to /data/hive-ptest/working/apache-github-source-source/ql/target/test-classes
[INFO] -
[ERROR] COMPILATION ERROR :
[INFO] -
[ERROR] /data/hive-ptest/working/apache-github-source-source/ql/src/test/org/apache/hadoop/hive/ql/exec/TestUtilities.java:[189,54] ';' expected
[ERROR]
[jira] [Commented] (HIVE-14196) Disable LLAP IO when complex types are involved
[ https://issues.apache.org/jira/browse/HIVE-14196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15369428#comment-15369428 ]

Hive QA commented on HIVE-14196:

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12816982/HIVE-14196.1.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 7 failed/errored test(s), 10296 tests executed

*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_acid_globallimit
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_list_bucket_dml_12
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_list_bucket_dml_13
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_stats_list_bucket
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_subquery_multiinsert
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_acid_globallimit
org.apache.hadoop.hive.cli.TestMinimrCliDriver.org.apache.hadoop.hive.cli.TestMinimrCliDriver
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/449/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/449/console
Test logs: http://ec2-204-236-174-241.us-west-1.compute.amazonaws.com/logs/PreCommit-HIVE-MASTER-Build-449/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 7 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12816982 - PreCommit-HIVE-MASTER-Build

> Disable LLAP IO when complex types are involved
> ---
>
> Key: HIVE-14196
> URL: https://issues.apache.org/jira/browse/HIVE-14196
> Project: Hive
> Issue Type: Sub-task
> Affects Versions: 2.1.0, 2.2.0
> Reporter: Prasanth Jayachandran
> Assignee: Prasanth Jayachandran
> Attachments: HIVE-14196.1.patch
>
>
> Let's exclude the vector_complex_* tests added for llap, which are currently
> broken and fail in all test runs. We can re-enable them with the HIVE-14089
> patch.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
[jira] [Assigned] (HIVE-14152) datanucleus.autoStartMechanismMode should set to 'Ignored' to allow rolling downgrade
[ https://issues.apache.org/jira/browse/HIVE-14152?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Thejas M Nair reassigned HIVE-14152:

    Assignee: Thejas M Nair  (was: Daniel Dai)

> datanucleus.autoStartMechanismMode should set to 'Ignored' to allow rolling
> downgrade
> --
>
> Key: HIVE-14152
> URL: https://issues.apache.org/jira/browse/HIVE-14152
> Project: Hive
> Issue Type: Bug
> Components: Metastore
> Reporter: Daniel Dai
> Assignee: Thejas M Nair
> Attachments: HIVE-14152.1.patch
>
>
> We see the following issue when downgrading the metastore:
> 1. Run some query using the new tables
> 2. Downgrade the metastore
> 3. Restarting the metastore will complain that the new table does not exist
> In particular, the constraint tables do not exist in branch-1. If we run Hive 2
> and create a constraint, then downgrade the metastore to Hive 1, datanucleus
> will complain:
> {code}
> javax.jdo.JDOFatalUserException: Error starting up DataNucleus : a class
> "org.apache.hadoop.hive.metastore.model.MConstraint" was listed as being
> persisted previously in this datastore, yet the class wasnt found. Perhaps it
> is used by a different DataNucleus-enabled application in this datastore, or
> you have changed your class names.
> at org.datanucleus.api.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:528)
> at org.datanucleus.api.jdo.JDOPersistenceManagerFactory.freezeConfiguration(JDOPersistenceManagerFactory.java:788)
> at org.datanucleus.api.jdo.JDOPersistenceManagerFactory.createPersistenceManagerFactory(JDOPersistenceManagerFactory.java:333)
> at org.datanucleus.api.jdo.JDOPersistenceManagerFactory.getPersistenceManagerFactory(JDOPersistenceManagerFactory.java:202)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at javax.jdo.JDOHelper$16.run(JDOHelper.java:1965)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.jdo.JDOHelper.invoke(JDOHelper.java:1960)
> at javax.jdo.JDOHelper.invokeGetPersistenceManagerFactoryOnImplementation(JDOHelper.java:1166)
> at javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:808)
> at javax.jdo.JDOHelper.getPersistenceManagerFactory(JDOHelper.java:701)
> at org.apache.hadoop.hive.metastore.ObjectStore.getPMF(ObjectStore.java:377)
> at org.apache.hadoop.hive.metastore.ObjectStore.getPersistenceManager(ObjectStore.java:406)
> at org.apache.hadoop.hive.metastore.ObjectStore.initialize(ObjectStore.java:299)
> at org.apache.hadoop.hive.metastore.ObjectStore.setConf(ObjectStore.java:266)
> at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:76)
> at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:136)
> at org.apache.hadoop.hive.metastore.RawStoreProxy.(RawStoreProxy.java:60)
> at org.apache.hadoop.hive.metastore.RawStoreProxy.getProxy(RawStoreProxy.java:69)
> at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.newRawStore(HiveMetaStore.java:650)
> at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.java:628)
> at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:677)
> at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:484)
> at org.apache.hadoop.hive.metastore.RetryingHMSHandler.(RetryingHMSHandler.java:77)
> at org.apache.hadoop.hive.metastore.RetryingHMSHandler.getProxy(RetryingHMSHandler.java:83)
> at org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:5905)
> at org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:5900)
> at org.apache.hadoop.hive.metastore.HiveMetaStore.startMetaStore(HiveMetaStore.java:6159)
> at org.apache.hadoop.hive.metastore.HiveMetaStore.main(HiveMetaStore.java:6084)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
> at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
> {code}
> Apparently datanucleus
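The workaround named in the issue title is to relax DataNucleus's auto-start check so the metastore does not abort when a previously persisted class (such as MConstraint) is no longer resolvable after a downgrade. A sketch of the corresponding hive-site.xml entry, assuming the property is passed through to DataNucleus as-is; verify the exact property name and accepted values against your Hive and DataNucleus versions:

```xml
<property>
  <name>datanucleus.autoStartMechanismMode</name>
  <!-- 'Ignored' tells DataNucleus to skip, rather than fail on, classes
       recorded in the datastore's auto-start table that it cannot resolve -->
  <value>Ignored</value>
</property>
```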
[jira] [Commented] (HIVE-14027) NULL values produced by left outer join do not behave as NULL
[ https://issues.apache.org/jira/browse/HIVE-14027?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15369408#comment-15369408 ]

Hive QA commented on HIVE-14027:

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12816954/HIVE-14027.02.patch

{color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 9 failed/errored test(s), 10297 tests executed

*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_acid_globallimit
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_list_bucket_dml_12
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_list_bucket_dml_13
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_subquery_multiinsert
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_vector_complex_all
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_vector_complex_join
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_acid_globallimit
org.apache.hadoop.hive.cli.TestMinimrCliDriver.org.apache.hadoop.hive.cli.TestMinimrCliDriver
org.apache.hadoop.hive.llap.tezplugins.TestLlapTaskSchedulerService.testDelayedLocalityNodeCommErrorImmediateAllocation
{noformat}

Test results: https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/448/testReport
Console output: https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/448/console
Test logs: http://ec2-204-236-174-241.us-west-1.compute.amazonaws.com/logs/PreCommit-HIVE-MASTER-Build-448/

Messages:
{noformat}
Executing org.apache.hive.ptest.execution.TestCheckPhase
Executing org.apache.hive.ptest.execution.PrepPhase
Executing org.apache.hive.ptest.execution.ExecutionPhase
Executing org.apache.hive.ptest.execution.ReportingPhase
Tests exited with: TestsFailedException: 9 tests failed
{noformat}

This message is automatically generated.

ATTACHMENT ID: 12816954 - PreCommit-HIVE-MASTER-Build

> NULL values produced by left outer join do not behave as NULL
> -
>
> Key: HIVE-14027
> URL: https://issues.apache.org/jira/browse/HIVE-14027
> Project: Hive
> Issue Type: Bug
> Components: Query Processor
> Affects Versions: 1.2.1, 2.2.0
> Reporter: Vaibhav Gumashta
> Assignee: Jesus Camacho Rodriguez
> Attachments: HIVE-14027.01.patch, HIVE-14027.02.patch, HIVE-14027.patch
>
>
> Consider the following setup:
> {code}
> create table tbl (n bigint, t string);
> insert into tbl values (1, 'one');
> insert into tbl values (2, 'two');
> select a.n, a.t, isnull(b.n), isnull(b.t) from (select * from tbl where n = 1) a left outer join (select * from tbl where 1 = 2) b on a.n = b.n;
> 1  one  false  true
> {code}
> The query should return true for isnull(b.n).
> I've tested by inserting a row with a null value for the bigint column into
> tbl, and isnull returns true in that case.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
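The semantics the bug report expects can be sketched outside Hive: when the right side of a left outer join has no matching row, its columns must come back as genuine nulls, so an isnull-style check on them returns true. A toy Java probe over a lookup map, purely illustrative and not Hive's join implementation:

```java
import java.util.HashMap;
import java.util.Map;

public class LojNulls {
    // Right-side lookup table; leaving it empty models the empty
    // subquery (select * from tbl where 1 = 2) from the report.
    static final Map<Long, String> RIGHT = new HashMap<>();

    // Left-outer-join probe: the right-side value, or null when unmatched.
    static String probe(long key) {
        return RIGHT.get(key);  // Map.get returns null for absent keys
    }

    public static void main(String[] args) {
        String b = probe(1L);                              // left row n=1, no right match
        System.out.println("isnull(b) = " + (b == null));  // a correct join yields true
    }
}
```

The bug is precisely that Hive's isnull saw false here for the bigint column, i.e. the unmatched right side was not behaving as a real NULL.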
[jira] [Updated] (HIVE-14173) NPE was thrown after enabling directsql in the middle of session
[ https://issues.apache.org/jira/browse/HIVE-14173?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Chaoyu Tang updated HIVE-14173:
---

    Resolution: Fixed
    Fix Version/s: 2.1.1
                   2.2.0
    Status: Resolved  (was: Patch Available)

Committed to 2.2.0 and 2.1.1. Thanks [~sershe] for review.

> NPE was thrown after enabling directsql in the middle of session
>
>
> Key: HIVE-14173
> URL: https://issues.apache.org/jira/browse/HIVE-14173
> Project: Hive
> Issue Type: Bug
> Components: Metastore
> Reporter: Chaoyu Tang
> Assignee: Chaoyu Tang
> Fix For: 2.2.0, 2.1.1
>
> Attachments: HIVE-14173.patch, HIVE-14173.patch, HIVE-14173.patch
>
>
> hive.metastore.try.direct.sql is initially set to false in the HMS
> hive-site.xml and then changed to true using the set metaconf command in the
> middle of a session; running a query will then throw an NPE with the
> following error message:
> {code}
> 2016-07-06T17:44:41,489 ERROR [pool-5-thread-2]: metastore.RetryingHMSHandler
> (RetryingHMSHandler.java:invokeInternal(192)) -
> MetaException(message:java.lang.NullPointerException)
> at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.newMetaException(HiveMetaStore.java:5741)
> at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.rethrowException(HiveMetaStore.java:4771)
> at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_partitions_by_expr(HiveMetaStore.java:4754)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:140)
> at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:99)
> at com.sun.proxy.$Proxy18.get_partitions_by_expr(Unknown Source)
> at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$get_partitions_by_expr.getResult(ThriftHiveMetastore.java:12048)
> at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$get_partitions_by_expr.getResult(ThriftHiveMetastore.java:12032)
> at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
> at org.apache.hadoop.hive.metastore.TUGIBasedProcessor$1.run(TUGIBasedProcessor.java:110)
> at org.apache.hadoop.hive.metastore.TUGIBasedProcessor$1.run(TUGIBasedProcessor.java:106)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
> at org.apache.hadoop.hive.metastore.TUGIBasedProcessor.process(TUGIBasedProcessor.java:118)
> at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:286)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:744)
> Caused by: java.lang.NullPointerException
> at org.apache.hadoop.hive.metastore.ObjectStore$GetHelper.(ObjectStore.java:2667)
> at org.apache.hadoop.hive.metastore.ObjectStore$GetListHelper.(ObjectStore.java:2825)
> at org.apache.hadoop.hive.metastore.ObjectStore$4.(ObjectStore.java:2410)
> at org.apache.hadoop.hive.metastore.ObjectStore.getPartitionsByExprInternal(ObjectStore.java:2410)
> at org.apache.hadoop.hive.metastore.ObjectStore.getPartitionsByExpr(ObjectStore.java:2400)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:101)
> at com.sun.proxy.$Proxy17.getPartitionsByExpr(Unknown Source)
> at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_partitions_by_expr(HiveMetaStore.java:4749)
> ... 20 more
> {code}

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
[jira] [Commented] (HIVE-14128) Parallelize jobClose phases
[ https://issues.apache.org/jira/browse/HIVE-14128?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15369388#comment-15369388 ]

Hive QA commented on HIVE-14128:

Here are the results of testing the latest attachment:
https://issues.apache.org/jira/secure/attachment/12816951/HIVE-14128.1.patch

{color:red}ERROR:{color} -1 due to no test(s) being added or modified.

{color:red}ERROR:{color} -1 due to 76 failed/errored test(s), 10296 tests executed

*Failed tests:*
{noformat}
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_acid_globallimit
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_autoColumnStats_1
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_autoColumnStats_2
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_autoColumnStats_6
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_autoColumnStats_8
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_dynpart_merge
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_dynpart_sort_opt_vectorization
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_dynpart_sort_optimization
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_escape1
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_escape2
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_insert_into_with_schema
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_list_bucket_dml_1
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_list_bucket_dml_5
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_list_bucket_dml_6
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_list_bucket_dml_7
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_list_bucket_dml_8
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_load_dyn_part1
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_load_dyn_part10
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_load_dyn_part11
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_load_dyn_part12
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_load_dyn_part13
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_load_dyn_part2
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_load_dyn_part6
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_load_dyn_part8
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_lock3
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_lock4
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_merge4
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_merge_dynamic_partition
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_merge_dynamic_partition2
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_merge_dynamic_partition4
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_merge_dynamic_partition5
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_orc_merge1
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_orc_merge10
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_orc_merge2
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_rcfile_merge2
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_stats4
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_stats_list_bucket
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_subquery_multiinsert
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_dynamic_partition_pruning
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_vector_complex_all
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_vector_complex_join
org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_vectorized_dynamic_partition_pruning
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_auto_sortmerge_join_16
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_infer_bucket_sort_num_buckets
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_orc_merge1
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_orc_merge2
org.apache.hadoop.hive.cli.TestMiniSparkOnYarnCliDriver.testCliDriver_orc_merge_diff_fs
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_acid_globallimit
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_auto_sortmerge_join_16
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_dynamic_partition_pruning
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_dynpart_sort_opt_vectorization
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_dynpart_sort_optimization
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_load_dyn_part1
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_load_dyn_part2
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_load_dyn_part3
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_orc_merge1
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_orc_merge10
org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_orc_merge2
[jira] [Commented] (HIVE-14173) NPE was thrown after enabling directsql in the middle of session
[ https://issues.apache.org/jira/browse/HIVE-14173?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15369363#comment-15369363 ]

Chaoyu Tang commented on HIVE-14173:

The failed tests are not aged and not related to this patch

> NPE was thrown after enabling directsql in the middle of session
>
>
> Key: HIVE-14173
> URL: https://issues.apache.org/jira/browse/HIVE-14173
> Project: Hive
> Issue Type: Bug
> Components: Metastore
> Reporter: Chaoyu Tang
> Assignee: Chaoyu Tang
> Attachments: HIVE-14173.patch, HIVE-14173.patch, HIVE-14173.patch
>
>
> hive.metastore.try.direct.sql is initially set to false in the HMS
> hive-site.xml and then changed to true using the set metaconf command in the
> middle of a session; running a query will then throw an NPE with the
> following error message:
> {code}
> 2016-07-06T17:44:41,489 ERROR [pool-5-thread-2]: metastore.RetryingHMSHandler
> (RetryingHMSHandler.java:invokeInternal(192)) -
> MetaException(message:java.lang.NullPointerException)
> at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.newMetaException(HiveMetaStore.java:5741)
> at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.rethrowException(HiveMetaStore.java:4771)
> at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_partitions_by_expr(HiveMetaStore.java:4754)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:140)
> at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:99)
> at com.sun.proxy.$Proxy18.get_partitions_by_expr(Unknown Source)
> at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$get_partitions_by_expr.getResult(ThriftHiveMetastore.java:12048)
> at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$get_partitions_by_expr.getResult(ThriftHiveMetastore.java:12032)
> at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
> at org.apache.hadoop.hive.metastore.TUGIBasedProcessor$1.run(TUGIBasedProcessor.java:110)
> at org.apache.hadoop.hive.metastore.TUGIBasedProcessor$1.run(TUGIBasedProcessor.java:106)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:415)
> at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
> at org.apache.hadoop.hive.metastore.TUGIBasedProcessor.process(TUGIBasedProcessor.java:118)
> at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:286)
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> at java.lang.Thread.run(Thread.java:744)
> Caused by: java.lang.NullPointerException
> at org.apache.hadoop.hive.metastore.ObjectStore$GetHelper.(ObjectStore.java:2667)
> at org.apache.hadoop.hive.metastore.ObjectStore$GetListHelper.(ObjectStore.java:2825)
> at org.apache.hadoop.hive.metastore.ObjectStore$4.(ObjectStore.java:2410)
> at org.apache.hadoop.hive.metastore.ObjectStore.getPartitionsByExprInternal(ObjectStore.java:2410)
> at org.apache.hadoop.hive.metastore.ObjectStore.getPartitionsByExpr(ObjectStore.java:2400)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:101)
> at com.sun.proxy.$Proxy17.getPartitionsByExpr(Unknown Source)
> at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_partitions_by_expr(HiveMetaStore.java:4749)
> ... 20 more
> {code}

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
[jira] [Commented] (HIVE-14200) Tez: disable auto-reducer parallelism when reducer-count * min.partition.factor < 1.0
[ https://issues.apache.org/jira/browse/HIVE-14200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15369360#comment-15369360 ] Hive QA commented on HIVE-14200: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12816962/HIVE-14200.3.patch {color:red}ERROR:{color} -1 due to no test(s) being added or modified. {color:red}ERROR:{color} -1 due to 7 failed/errored test(s), 10296 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_acid_globallimit org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_stats_list_bucket org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_subquery_multiinsert org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_vector_complex_all org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_vector_complex_join org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_acid_globallimit org.apache.hadoop.hive.cli.TestMinimrCliDriver.org.apache.hadoop.hive.cli.TestMinimrCliDriver {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/446/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/446/console Test logs: http://ec2-204-236-174-241.us-west-1.compute.amazonaws.com/logs/PreCommit-HIVE-MASTER-Build-446/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 7 tests failed {noformat} This message is automatically generated. 
ATTACHMENT ID: 12816962 - PreCommit-HIVE-MASTER-Build > Tez: disable auto-reducer parallelism when reducer-count * > min.partition.factor < 1.0 > - > > Key: HIVE-14200 > URL: https://issues.apache.org/jira/browse/HIVE-14200 > Project: Hive > Issue Type: Bug >Reporter: Gopal V >Assignee: Gopal V > Attachments: HIVE-14200.1.patch, HIVE-14200.2.patch, > HIVE-14200.3.patch > > > The min/max factors offer no real improvement when the fractions are > meaningless, for example when 0.25 * 2 is applied as the min. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
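The condition HIVE-14200 describes (reducer-count * min.partition.factor < 1.0) reduces to a one-line guard. A minimal illustrative sketch follows; the class and method names are hypothetical, not Hive's actual Tez optimizer code:

```java
// Illustrative only: the guard proposed in HIVE-14200, with assumed names.
public class AutoParallelismGuard {
    /** Returns true when auto-reducer parallelism is worth enabling. */
    static boolean shouldEnableAutoParallelism(int reducerCount, float minPartitionFactor) {
        // With 2 reducers and a 0.25 min factor, the lower bound would be
        // 2 * 0.25 = 0.5 reducers -- a meaningless fraction below 1.0 --
        // so auto-reducer parallelism should be disabled.
        return reducerCount * minPartitionFactor >= 1.0f;
    }

    public static void main(String[] args) {
        System.out.println(shouldEnableAutoParallelism(2, 0.25f));   // false: floor would be 0.5
        System.out.println(shouldEnableAutoParallelism(10, 0.25f));  // true: floor is 2.5 reducers
    }
}
```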
[jira] [Commented] (HIVE-14094) Remove unused function closeFs from Warehouse.java
[ https://issues.apache.org/jira/browse/HIVE-14094?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15369359#comment-15369359 ] zhihai xu commented on HIVE-14094: -- Thanks [~csun] and [~ashutoshc] for the review! > Remove unused function closeFs from Warehouse.java > -- > > Key: HIVE-14094 > URL: https://issues.apache.org/jira/browse/HIVE-14094 > Project: Hive > Issue Type: Improvement > Components: Metastore >Reporter: zhihai xu >Assignee: zhihai xu >Priority: Trivial > Fix For: 2.2.0 > > Attachments: HIVE-14094.000.patch > > > Remove unused function closeFs from Warehouse.java > after HIVE-10922, no one will call Warehouse.closeFs. It will be good to > delete this function to prevent people from using it. Normally closing > FileSystem is not safe because most of the time FileSystem will be shared. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
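The caveat in the HIVE-14094 description (closing a FileSystem is unsafe because instances are usually shared) can be shown with a toy cache. This is a hand-rolled illustration under the assumption of a FileSystem.get-style cache; it is not Hadoop's actual implementation:

```java
import java.util.HashMap;
import java.util.Map;

// Toy illustration (not Hadoop code): a get()-style cache hands the same
// instance to every caller, so one caller's close() silently breaks the rest.
public class SharedHandleCache {
    static class Handle {
        boolean closed = false;
        void close() { closed = true; }
    }

    private static final Map<String, Handle> CACHE = new HashMap<>();

    static Handle get(String uri) {
        // Same URI -> same cached instance, as with FileSystem.get(conf).
        return CACHE.computeIfAbsent(uri, u -> new Handle());
    }

    public static void main(String[] args) {
        Handle a = get("hdfs://nn:8020");
        Handle b = get("hdfs://nn:8020"); // same cached instance as 'a'
        a.close();
        System.out.println(b.closed); // true: closing 'a' also closed 'b'
    }
}
```

This is why removing the unused closeFs helper prevents a class of hard-to-debug failures: callers elsewhere in the process may still hold the shared instance.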
[jira] [Updated] (HIVE-14004) Minor compaction produces ArrayIndexOutOfBoundsException: 7 in SchemaEvolution.getFileType
[ https://issues.apache.org/jira/browse/HIVE-14004?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matt McCline updated HIVE-14004: Status: Patch Available (was: In Progress) > Minor compaction produces ArrayIndexOutOfBoundsException: 7 in > SchemaEvolution.getFileType > -- > > Key: HIVE-14004 > URL: https://issues.apache.org/jira/browse/HIVE-14004 > Project: Hive > Issue Type: Bug > Components: Transactions >Affects Versions: 2.2.0 >Reporter: Eugene Koifman >Assignee: Matt McCline > Attachments: HIVE-14004.01.patch, HIVE-14004.02.patch > > > Easiest way to repro is to add TestTxnCommands2 > {noformat} > @Test > public void testCompactWithDelete() throws Exception { > int[][] tableData = {{1,2},{3,4}}; > runStatementOnDriver("insert into " + Table.ACIDTBL + "(a,b) " + > makeValuesClause(tableData)); > runStatementOnDriver("alter table "+ Table.ACIDTBL + " compact 'MAJOR'"); > Worker t = new Worker(); > t.setThreadId((int) t.getId()); > t.setHiveConf(hiveConf); > AtomicBoolean stop = new AtomicBoolean(); > AtomicBoolean looped = new AtomicBoolean(); > stop.set(true); > t.init(stop, looped); > t.run(); > runStatementOnDriver("delete from " + Table.ACIDTBL + " where b = 4"); > runStatementOnDriver("update " + Table.ACIDTBL + " set b = -2 where b = > 2"); > runStatementOnDriver("alter table "+ Table.ACIDTBL + " compact 'MINOR'"); > t.run(); > } > {noformat} > to TestTxnCommands2 and run it. > Test won't fail but if you look > in target/tmp/log/hive.log for the following exception (from Minor > compaction). > {noformat} > 2016-06-09T18:36:39,071 WARN [Thread-190[]]: mapred.LocalJobRunner > (LocalJobRunner.java:run(560)) - job_local1233973168_0005 > java.lang.Exception: java.lang.ArrayIndexOutOfBoundsException: 7 > at > org.apache.hadoop.mapred.LocalJobRunner$Job.runTasks(LocalJobRunner.java:462) > ~[hadoop-mapreduce-client-common-2.6.1.jar:?] 
> at > org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:522) > [hadoop-mapreduce-client-common-2.6.1.jar:?] > Caused by: java.lang.ArrayIndexOutOfBoundsException: 7 > at > org.apache.orc.impl.SchemaEvolution.getFileType(SchemaEvolution.java:67) > ~[hive-orc-2.2.0-SNAPSHOT.jar:2.2.0-SNAPSHOT] > at > org.apache.orc.impl.TreeReaderFactory.createTreeReader(TreeReaderFactory.java:2031) > ~[hive-orc-2.2.0-SNAPSHOT.jar:2.2.0-SNAPSHOT] > at > org.apache.orc.impl.TreeReaderFactory$StructTreeReader.<init>(TreeReaderFactory.java:1716) > ~[hive-orc-2.2.0-SNAPSHOT.jar:2.2.0-SNAPSHOT] > at > org.apache.orc.impl.TreeReaderFactory.createTreeReader(TreeReaderFactory.java:2077) > ~[hive-orc-2.2.0-SNAPSHOT.jar:2.2.0-SNAPSHOT] > at > org.apache.orc.impl.TreeReaderFactory$StructTreeReader.<init>(TreeReaderFactory.java:1716) > ~[hive-orc-2.2.0-SNAPSHOT.jar:2.2.0-SNAPSHOT] > at > org.apache.orc.impl.TreeReaderFactory.createTreeReader(TreeReaderFactory.java:2077) > ~[hive-orc-2.2.0-SNAPSHOT.jar:2.2.0-SNAPSHOT] > at > org.apache.orc.impl.RecordReaderImpl.<init>(RecordReaderImpl.java:208) > ~[hive-orc-2.2.0-SNAPSHOT.jar:2.2.0-SNAPSHOT] > at > org.apache.hadoop.hive.ql.io.orc.RecordReaderImpl.<init>(RecordReaderImpl.java:63) > ~[classes/:?] > at > org.apache.hadoop.hive.ql.io.orc.ReaderImpl.rowsOptions(ReaderImpl.java:365) > ~[classes/:?] > at > org.apache.hadoop.hive.ql.io.orc.OrcRawRecordMerger$ReaderPair.<init>(OrcRawRecordMerger.java:207) > ~[classes/:?] > at > org.apache.hadoop.hive.ql.io.orc.OrcRawRecordMerger.<init>(OrcRawRecordMerger.java:508) > ~[classes/:?] > at > org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.getRawReader(OrcInputFormat.java:1977) > ~[classes/:?] > at > org.apache.hadoop.hive.ql.txn.compactor.CompactorMR$CompactorMap.map(CompactorMR.java:630) > ~[classes/:?] > at > org.apache.hadoop.hive.ql.txn.compactor.CompactorMR$CompactorMap.map(CompactorMR.java:609) > ~[classes/:?] > at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:54) > ~[hadoop-mapreduce-client-core-2.6.1.jar:?] 
> at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:450) > ~[hadoop-mapreduce-client-core-2.6.1.jar:?] > at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343) > ~[hadoop-mapreduce-client-core-2.6.1.jar:?] > at > org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:243) > ~[hadoop-mapreduce-client-common-2.6.1.jar:?] > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) > ~[?:1.7.0_71] > at java.util.concurrent.FutureTask.run(FutureTask.java:262) > ~[?:1.7.0_71] > at >
[jira] [Updated] (HIVE-14004) Minor compaction produces ArrayIndexOutOfBoundsException: 7 in SchemaEvolution.getFileType
[ https://issues.apache.org/jira/browse/HIVE-14004?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matt McCline updated HIVE-14004: Attachment: HIVE-14004.02.patch
[jira] [Commented] (HIVE-14004) Minor compaction produces ArrayIndexOutOfBoundsException: 7 in SchemaEvolution.getFileType
[ https://issues.apache.org/jira/browse/HIVE-14004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15369350#comment-15369350 ] Matt McCline commented on HIVE-14004: - Being too strict about what cannot use the logical schema caused acid_table_stats.q to get exceptions and suppress statistics, so I backed off on that in patch #2.
[jira] [Updated] (HIVE-14004) Minor compaction produces ArrayIndexOutOfBoundsException: 7 in SchemaEvolution.getFileType
[ https://issues.apache.org/jira/browse/HIVE-14004?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matt McCline updated HIVE-14004: Status: In Progress (was: Patch Available)
[jira] [Commented] (HIVE-14198) Refactor aux jar related code to make them more consistent
[ https://issues.apache.org/jira/browse/HIVE-14198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15369319#comment-15369319 ] Hive QA commented on HIVE-14198: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12816908/HIVE-14198.1.patch {color:green}SUCCESS:{color} +1 due to 2 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 15 failed/errored test(s), 10296 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_acid_globallimit org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_subquery_multiinsert org.apache.hadoop.hive.cli.TestHBaseCliDriver.testCliDriver_hbase_binary_map_queries org.apache.hadoop.hive.cli.TestHBaseCliDriver.testCliDriver_hbase_binary_map_queries_prefix org.apache.hadoop.hive.cli.TestHBaseCliDriver.testCliDriver_hbase_joins org.apache.hadoop.hive.cli.TestHBaseCliDriver.testCliDriver_hbase_null_first_col org.apache.hadoop.hive.cli.TestHBaseCliDriver.testCliDriver_hbase_queries org.apache.hadoop.hive.cli.TestHBaseCliDriver.testCliDriver_hbase_single_sourced_multi_insert org.apache.hadoop.hive.cli.TestHBaseCliDriver.testCliDriver_hbasestats org.apache.hadoop.hive.cli.TestHBaseMinimrCliDriver.testCliDriver_hbase_bulk org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_vector_complex_all org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_vector_complex_join org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_acid_globallimit org.apache.hadoop.hive.cli.TestMinimrCliDriver.org.apache.hadoop.hive.cli.TestMinimrCliDriver org.apache.hadoop.hive.llap.tezplugins.TestLlapTaskSchedulerService.testDelayedLocalityNodeCommErrorImmediateAllocation {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/445/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/445/console Test logs: 
http://ec2-204-236-174-241.us-west-1.compute.amazonaws.com/logs/PreCommit-HIVE-MASTER-Build-445/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 15 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12816908 - PreCommit-HIVE-MASTER-Build > Refactor aux jar related code to make them more consistent > -- > > Key: HIVE-14198 > URL: https://issues.apache.org/jira/browse/HIVE-14198 > Project: Hive > Issue Type: Improvement > Components: Query Planning >Affects Versions: 2.2.0 >Reporter: Aihua Xu >Assignee: Aihua Xu > Attachments: HIVE-14198.1.patch > > > There are some redundancy and inconsistency between hive.aux.jar.paths and > hive.reloadable.aux.jar.paths and also between MR and spark. > Refactor the code to reuse the same code. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-13884) Disallow queries in HMS fetching more than a configured number of partitions
[ https://issues.apache.org/jira/browse/HIVE-13884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15369306#comment-15369306 ] Lefty Leverenz commented on HIVE-13884: --- [~spena], did you see this reply? > Disallow queries in HMS fetching more than a configured number of partitions > > > Key: HIVE-13884 > URL: https://issues.apache.org/jira/browse/HIVE-13884 > Project: Hive > Issue Type: Improvement >Reporter: Mohit Sabharwal >Assignee: Sergio Peña > Labels: TODOC2.2 > Fix For: 2.2.0 > > Attachments: HIVE-13884.1.patch, HIVE-13884.10.patch, > HIVE-13884.2.patch, HIVE-13884.3.patch, HIVE-13884.4.patch, > HIVE-13884.5.patch, HIVE-13884.6.patch, HIVE-13884.7.patch, > HIVE-13884.8.patch, HIVE-13884.9.patch > > > Currently the PartitionPruner requests either all partitions or partitions > based on filter expression. In either scenarios, if the number of partitions > accessed is large there can be significant memory pressure at the HMS server > end. > We already have a config {{hive.limit.query.max.table.partition}} that > enforces limits on number of partitions that may be scanned per operator. But > this check happens after the PartitionPruner has already fetched all > partitions. > We should add an option at PartitionPruner level to disallow queries that > attempt to access number of partitions beyond a configurable limit. > Note that {{hive.mapred.mode=strict}} disallow queries without a partition > filter in PartitionPruner, but this check accepts any query with a pruning > condition, even if partitions fetched are large. In multi-tenant > environments, admins could use more control w.r.t. number of partitions > allowed based on HMS memory capacity. > One option is to have PartitionPruner first fetch the partition names > (instead of partition specs) and throw an exception if number of partitions > exceeds the configured value. Otherwise, fetch the partition specs. 
> Looks like the existing {{listPartitionNames}} call could be used if extended > to take partition filter expressions like {{getPartitionsByExpr}} call does. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
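The two-phase check proposed above (fetch cheap partition names first, fail fast past a configured limit, only then fetch full specs) can be sketched as follows. The names here are stand-ins for the metastore API, not the actual HMS signatures:

```java
import java.util.Arrays;
import java.util.List;

// Illustrative sketch of the HIVE-13884 proposal; method names are assumed.
public class PartitionLimitCheck {
    static List<String> fetchPartitionNames(String filter) {
        // Stand-in for the metastore's cheap listPartitionNames-style call.
        return Arrays.asList("ds=2016-07-01", "ds=2016-07-02", "ds=2016-07-03");
    }

    static List<String> fetchPartitions(String filter, int maxPartitions) {
        List<String> names = fetchPartitionNames(filter);
        // Reject before paying the memory cost of full partition specs.
        if (maxPartitions >= 0 && names.size() > maxPartitions) {
            throw new IllegalStateException("Query would fetch " + names.size()
                + " partitions, above the configured limit of " + maxPartitions);
        }
        // Only now fetch the expensive full partition specs (elided here).
        return names;
    }

    public static void main(String[] args) {
        System.out.println(fetchPartitions("ds > '2016-06-30'", 10).size()); // 3
        try {
            fetchPartitions("ds > '2016-06-30'", 2);
        } catch (IllegalStateException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```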
[jira] [Commented] (HIVE-14169) Honor --incremental flag only if TableOutputFormat is used
[ https://issues.apache.org/jira/browse/HIVE-14169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15369277#comment-15369277 ] Hive QA commented on HIVE-14169: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12816907/HIVE-14169.1.patch {color:red}ERROR:{color} -1 due to no test(s) being added or modified. {color:red}ERROR:{color} -1 due to 7 failed/errored test(s), 10296 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_acid_globallimit org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_list_bucket_dml_13 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_subquery_multiinsert org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_vector_complex_all org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_vector_complex_join org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_acid_globallimit org.apache.hadoop.hive.cli.TestMinimrCliDriver.org.apache.hadoop.hive.cli.TestMinimrCliDriver {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/444/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/444/console Test logs: http://ec2-204-236-174-241.us-west-1.compute.amazonaws.com/logs/PreCommit-HIVE-MASTER-Build-444/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 7 tests failed {noformat} This message is automatically generated. 
ATTACHMENT ID: 12816907 - PreCommit-HIVE-MASTER-Build > Honor --incremental flag only if TableOutputFormat is used > -- > > Key: HIVE-14169 > URL: https://issues.apache.org/jira/browse/HIVE-14169 > Project: Hive > Issue Type: Sub-task > Components: Beeline >Reporter: Sahil Takiar >Assignee: Sahil Takiar > Attachments: HIVE-14169.1.patch > > > * When Beeline prints out a {{ResultSet}} to stdout it uses the > {{BeeLine.print}} method > * This method takes the {{ResultSet}} from the completed query and uses a > specified {{OutputFormat}} to print the rows (by default it uses > {{TableOutputFormat}}) > * The {{print}} method also wraps the {{ResultSet}} into a {{Rows}} class > (either a {{IncrementalRows}} or a {{BufferedRows}} class) > The advantage of {{BufferedRows}} is that it can do a global calculation of > the column width, however, this is only useful for {{TableOutputFormat}}. So > there is no need to buffer all the rows if a different {{OutputFormat}} is > used. This JIRA will change the behavior of the {{--incremental}} flag so > that it is only honored if {{TableOutputFormat}} is used. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
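The Beeline behavior change described above amounts to a small decision: buffering only pays off when global column widths matter, i.e. for table output. A minimal sketch (the class names echo Beeline's but the logic shown is an assumption, not the committed implementation):

```java
// Illustrative only: when to honor --incremental, per the HIVE-14169 proposal.
public class RowsModeChooser {
    enum OutputFormat { TABLE, CSV, TSV, VERTICAL }

    /** The --incremental flag is only honored for table output. */
    static boolean useIncremental(boolean incrementalFlag, OutputFormat format) {
        if (format != OutputFormat.TABLE) {
            // Non-table formats never need global column widths,
            // so there is no reason to buffer: always stream.
            return true;
        }
        // Table output: buffering improves column alignment,
        // so only stream if the user explicitly asked for it.
        return incrementalFlag;
    }

    public static void main(String[] args) {
        System.out.println(useIncremental(false, OutputFormat.CSV));   // true
        System.out.println(useIncremental(false, OutputFormat.TABLE)); // false
    }
}
```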
[jira] [Updated] (HIVE-14094) Remove unused function closeFs from Warehouse.java
[ https://issues.apache.org/jira/browse/HIVE-14094?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chao Sun updated HIVE-14094: Resolution: Fixed Fix Version/s: 2.2.0 Status: Resolved (was: Patch Available) Committed to the master branch. Thanks Zhihai for the patch!
[jira] [Updated] (HIVE-14202) Change tez version used to 0.8.4
[ https://issues.apache.org/jira/browse/HIVE-14202?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Siddharth Seth updated HIVE-14202: -- Status: Patch Available (was: Open) > Change tez version used to 0.8.4 > > > Key: HIVE-14202 > URL: https://issues.apache.org/jira/browse/HIVE-14202 > Project: Hive > Issue Type: Task >Reporter: Siddharth Seth >Assignee: Siddharth Seth > Attachments: HIVE-14202.01.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-14202) Change tez version used to 0.8.4
[ https://issues.apache.org/jira/browse/HIVE-14202?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Siddharth Seth updated HIVE-14202: -- Attachment: HIVE-14202.01.patch
[jira] [Commented] (HIVE-14111) better concurrency handling for TezSessionState - part I
[ https://issues.apache.org/jira/browse/HIVE-14111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15369233#comment-15369233 ] Hive QA commented on HIVE-14111: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12816891/HIVE-14111.05.patch {color:red}ERROR:{color} -1 due to build exiting with an error Test results: https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/443/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/443/console Test logs: http://ec2-204-236-174-241.us-west-1.compute.amazonaws.com/logs/PreCommit-HIVE-MASTER-Build-443/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Tests exited with: NonZeroExitCodeException Command 'bash /data/hive-ptest/working/scratch/source-prep.sh' failed with exit status 1 and output '+ [[ -n /usr/java/jdk1.8.0_25 ]] + export JAVA_HOME=/usr/java/jdk1.8.0_25 + JAVA_HOME=/usr/java/jdk1.8.0_25 + export PATH=/usr/java/jdk1.8.0_25/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games + PATH=/usr/java/jdk1.8.0_25/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games + export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m ' + ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m ' + export 'M2_OPTS=-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost -Dhttp.proxyPort=3128' + M2_OPTS='-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost -Dhttp.proxyPort=3128' + cd /data/hive-ptest/working/ + tee /data/hive-ptest/logs/PreCommit-HIVE-MASTER-Build-443/source-prep.txt + [[ false == \t\r\u\e ]] + mkdir -p maven ivy + [[ git = \s\v\n ]] + [[ git = \g\i\t ]] + [[ -z master ]] + [[ -d apache-github-source-source ]] + [[ ! -d apache-github-source-source/.git ]] + [[ ! 
-d apache-github-source-source ]] + cd apache-github-source-source + git fetch origin >From https://github.com/apache/hive 5cf0c1c..667952d master -> origin/master f361403..5d8df7c branch-2.1 -> origin/branch-2.1 + git reset --hard HEAD HEAD is now at 5cf0c1c HIVE-13901 : Hivemetastore add partitions can be slow depending on filesystems (Rajesh Balamohan via Sergey Shelukhin) + git clean -f -d + git checkout master Already on 'master' Your branch is behind 'origin/master' by 1 commit, and can be fast-forwarded. (use "git pull" to update your local branch) + git reset --hard origin/master HEAD is now at 667952d HIVE-14184: Adding test for limit pushdown in presence of grouping sets (Jesus Camacho Rodriguez, reviewed by Ashutosh Chauhan) + git merge --ff-only origin/master Already up-to-date. + git gc + patchCommandPath=/data/hive-ptest/working/scratch/smart-apply-patch.sh + patchFilePath=/data/hive-ptest/working/scratch/build.patch + [[ -f /data/hive-ptest/working/scratch/build.patch ]] + chmod +x /data/hive-ptest/working/scratch/smart-apply-patch.sh + /data/hive-ptest/working/scratch/smart-apply-patch.sh /data/hive-ptest/working/scratch/build.patch The patch does not appear to apply with p0, p1, or p2 + exit 1 ' {noformat} This message is automatically generated. ATTACHMENT ID: 12816891 - PreCommit-HIVE-MASTER-Build > better concurrency handling for TezSessionState - part I > > > Key: HIVE-14111 > URL: https://issues.apache.org/jira/browse/HIVE-14111 > Project: Hive > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin > Attachments: HIVE-14111.01.patch, HIVE-14111.02.patch, > HIVE-14111.03.patch, HIVE-14111.04.patch, HIVE-14111.05.patch, > HIVE-14111.patch, sessionPoolNotes.txt > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-14172) LLAP: force evict blocks by size to handle memory fragmentation
[ https://issues.apache.org/jira/browse/HIVE-14172?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15369232#comment-15369232 ] Hive QA commented on HIVE-14172: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12816890/HIVE-14172.01.patch {color:green}SUCCESS:{color} +1 due to 4 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 10 failed/errored test(s), 10281 tests executed *Failed tests:* {noformat} TestMiniTezCliDriver-tez_joins_explain.q-vector_data_types.q-tez_dynpart_hashjoin_1.q-and-12-more - did not produce a TEST-*.xml file org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_acid_globallimit org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_list_bucket_dml_12 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_list_bucket_dml_13 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_subquery_multiinsert org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_vector_complex_all org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_vector_complex_join org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_acid_globallimit org.apache.hadoop.hive.cli.TestMinimrCliDriver.org.apache.hadoop.hive.cli.TestMinimrCliDriver org.apache.hadoop.hive.llap.tezplugins.TestLlapTaskSchedulerService.testDelayedLocalityNodeCommErrorImmediateAllocation {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/442/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/442/console Test logs: http://ec2-204-236-174-241.us-west-1.compute.amazonaws.com/logs/PreCommit-HIVE-MASTER-Build-442/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 10 tests failed {noformat} This 
message is automatically generated. ATTACHMENT ID: 12816890 - PreCommit-HIVE-MASTER-Build > LLAP: force evict blocks by size to handle memory fragmentation > --- > > Key: HIVE-14172 > URL: https://issues.apache.org/jira/browse/HIVE-14172 > Project: Hive > Issue Type: Bug >Reporter: Nita Dembla >Assignee: Sergey Shelukhin > Attachments: HIVE-14172.01.patch, HIVE-14172.patch > > > In the long run, we should replace buddy allocator with a better scheme. For > now do a workaround for fragmentation that cannot be easily resolved. It's > still not perfect but works for practical ORC cases, where we have the > default size and smaller blocks, rather than large allocations having trouble. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-14184) Adding test for limit pushdown in presence of grouping sets
[ https://issues.apache.org/jira/browse/HIVE-14184?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez updated HIVE-14184: --- Resolution: Fixed Fix Version/s: 2.1.1 2.2.0 Status: Resolved (was: Patch Available) > Adding test for limit pushdown in presence of grouping sets > --- > > Key: HIVE-14184 > URL: https://issues.apache.org/jira/browse/HIVE-14184 > Project: Hive > Issue Type: Bug >Affects Versions: 2.1.0, 2.2.0 >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez > Fix For: 2.2.0, 2.1.1 > > Attachments: HIVE-14184.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-13763) Update smart-apply-patch.sh with ability to use patches from git
[ https://issues.apache.org/jira/browse/HIVE-13763?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15369193#comment-15369193 ] Ashutosh Chauhan commented on HIVE-13763: - +1 > Update smart-apply-patch.sh with ability to use patches from git > > > Key: HIVE-13763 > URL: https://issues.apache.org/jira/browse/HIVE-13763 > Project: Hive > Issue Type: Improvement >Reporter: Owen O'Malley >Assignee: Owen O'Malley > Attachments: HIVE-13763.patch > > > Currently, the smart-apply-patch.sh doesn't understand git patches. It is > relatively easy to make it understand patches generated by: > {code} > % git format-patch apache/master --stdout > HIVE-999.patch > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13847) Avoid file open call in RecordReaderUtils as the stream is already available
[ https://issues.apache.org/jira/browse/HIVE-13847?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashutosh Chauhan updated HIVE-13847: Status: Open (was: Patch Available) Needs more work. > Avoid file open call in RecordReaderUtils as the stream is already available > > > Key: HIVE-13847 > URL: https://issues.apache.org/jira/browse/HIVE-13847 > Project: Hive > Issue Type: Improvement > Components: ORC >Reporter: Rajesh Balamohan >Assignee: Rajesh Balamohan >Priority: Minor > Attachments: HIVE-13847.1.patch > > > File open call in RecordReaderUtils::readRowIndex can be avoided. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-13887) LazySimpleSerDe should parse "NULL" dates faster
[ https://issues.apache.org/jira/browse/HIVE-13887?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15369187#comment-15369187 ] Ashutosh Chauhan commented on HIVE-13887: - +1 > LazySimpleSerDe should parse "NULL" dates faster > > > Key: HIVE-13887 > URL: https://issues.apache.org/jira/browse/HIVE-13887 > Project: Hive > Issue Type: Bug > Components: Serializers/Deserializers, Vectorization >Affects Versions: 2.1.0 >Reporter: Gopal V >Assignee: Gopal V > Labels: Performance > Attachments: HIVE-13887.1.patch, HIVE-13887.1.patch > > > Date string which contain "NULL" or "(null)" are being parsed through a very > slow codepath involving exception handling as a normal codepath. > These are currently ~4x slower than parsing an actual date field. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
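The codepath contrast the issue describes can be sketched with plain JDK classes. `isNullToken` and `parseOrNull` below are hypothetical illustrations of the kind of fast pre-check such a patch would add, not Hive's actual LazySimpleSerDe code:

```java
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Date;

public class NullDateCheck {
    // Hypothetical fast path: recognize the common null tokens with a
    // plain string comparison before attempting a real parse.
    static boolean isNullToken(String s) {
        return s == null || s.equals("NULL") || s.equals("(null)");
    }

    static Date parseOrNull(String s) {
        if (isNullToken(s)) {
            return null; // cheap branch, no exception machinery involved
        }
        try {
            return new SimpleDateFormat("yyyy-MM-dd").parse(s);
        } catch (ParseException e) {
            // The slow path the issue describes: constructing and filling
            // in an exception costs far more than the comparisons above.
            return null;
        }
    }
}
```

Routing "NULL" and "(null)" through the string comparison avoids building a stack trace for every null field, which is where the reported ~4x slowdown comes from.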
[jira] [Commented] (HIVE-13937) Unit test for HIVE-13051
[ https://issues.apache.org/jira/browse/HIVE-13937?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15369180#comment-15369180 ] Ashutosh Chauhan commented on HIVE-13937: - +1 > Unit test for HIVE-13051 > > > Key: HIVE-13937 > URL: https://issues.apache.org/jira/browse/HIVE-13937 > Project: Hive > Issue Type: Improvement >Reporter: Zoltan Haindrich >Assignee: Zoltan Haindrich >Priority: Minor > Attachments: HIVE-13937.01.patch > > > unit test for HIVE-13051; it checks the issue prior to the fix, which > prevented further usage of a thread after an exception had occurred -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-13989) Extended ACLs are not handled according to specification
[ https://issues.apache.org/jira/browse/HIVE-13989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15369179#comment-15369179 ] Ashutosh Chauhan commented on HIVE-13989: - [~spena] Might be of interest to you. > Extended ACLs are not handled according to specification > > > Key: HIVE-13989 > URL: https://issues.apache.org/jira/browse/HIVE-13989 > Project: Hive > Issue Type: Bug > Components: HCatalog >Affects Versions: 1.2.1, 2.0.0 >Reporter: Chris Drome >Assignee: Chris Drome > Attachments: HIVE-13989-branch-1.patch, HIVE-13989.1-branch-1.patch, > HIVE-13989.1.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-14094) Remove unused function closeFs from Warehouse.java
[ https://issues.apache.org/jira/browse/HIVE-14094?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15369174#comment-15369174 ] Ashutosh Chauhan commented on HIVE-14094: - +1 > Remove unused function closeFs from Warehouse.java > -- > > Key: HIVE-14094 > URL: https://issues.apache.org/jira/browse/HIVE-14094 > Project: Hive > Issue Type: Improvement > Components: Metastore >Reporter: zhihai xu >Assignee: zhihai xu >Priority: Trivial > Attachments: HIVE-14094.000.patch > > > Remove unused function closeFs from Warehouse.java > after HIVE-10922, no one will call Warehouse.closeFs. It will be good to > delete this function to prevent people from using it. Normally closing > FileSystem is not safe because most of the time FileSystem will be shared. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
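The point about shared FileSystem instances can be illustrated with a minimal stand-in cache (hypothetical; Hadoop's real `FileSystem.get` cache is keyed on scheme, authority, and user, and is more elaborate):

```java
import java.util.HashMap;
import java.util.Map;

public class SharedHandleCache {
    static class Handle {
        boolean closed = false;
        void close() { closed = true; }
    }

    private static final Map<String, Handle> CACHE = new HashMap<>();

    // Like FileSystem.get(): repeated lookups for the same key return
    // the one cached instance, so all callers share a single handle.
    static synchronized Handle get(String key) {
        return CACHE.computeIfAbsent(key, k -> new Handle());
    }
}
```

Because both lookups return the same object, closing the handle obtained by one caller silently invalidates the handle every other caller holds, which is why exposing a helper like `closeFs` invites misuse.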
[jira] [Commented] (HIVE-14188) LLAPIF: wrong user field is used from the token
[ https://issues.apache.org/jira/browse/HIVE-14188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15369171#comment-15369171 ] Hive QA commented on HIVE-14188: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12816893/HIVE-14188.patch {color:red}ERROR:{color} -1 due to no test(s) being added or modified. {color:red}ERROR:{color} -1 due to 10 failed/errored test(s), 10295 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_acid_globallimit org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_list_bucket_dml_12 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_list_bucket_dml_13 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_subquery_multiinsert org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_vector_complex_all org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_vector_complex_join org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_acid_globallimit org.apache.hadoop.hive.cli.TestMinimrCliDriver.org.apache.hadoop.hive.cli.TestMinimrCliDriver org.apache.hadoop.hive.llap.daemon.impl.TestLlapTokenChecker.testCheckPermissions org.apache.hadoop.hive.llap.daemon.impl.TestLlapTokenChecker.testGetToken {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/441/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/441/console Test logs: http://ec2-204-236-174-241.us-west-1.compute.amazonaws.com/logs/PreCommit-HIVE-MASTER-Build-441/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 10 tests failed {noformat} This message is automatically generated. 
ATTACHMENT ID: 12816893 - PreCommit-HIVE-MASTER-Build > LLAPIF: wrong user field is used from the token > --- > > Key: HIVE-14188 > URL: https://issues.apache.org/jira/browse/HIVE-14188 > Project: Hive > Issue Type: Bug >Reporter: Sergey Shelukhin >Assignee: Sergey Shelukhin > Attachments: HIVE-14188.patch, HIVE-14188.patch > > > realUser is not usually set in all cases for delegation tokens; we should use > the owner. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-14113) Create function failed but function in show function list
[ https://issues.apache.org/jira/browse/HIVE-14113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15369172#comment-15369172 ] Ashutosh Chauhan commented on HIVE-14113: - +1 > Create function failed but function in show function list > - > > Key: HIVE-14113 > URL: https://issues.apache.org/jira/browse/HIVE-14113 > Project: Hive > Issue Type: Bug > Components: UDF >Affects Versions: 1.2.0 >Reporter: niklaus xiao >Assignee: Navis > Fix For: 1.3.0 > > Attachments: HIVE-14113.1.patch > > > 1. create function with an invalid hdfs path, /udf/udf-test.jar does not exist > {quote} > create function my_lower as 'com.tang.UDFLower' using jar > 'hdfs:///udf/udf-test.jar'; > {quote} > Failed with the following exception: > {quote} > 0: jdbc:hive2://189.39.151.44:1/> create function my_lower as > 'com.tang.UDFLower' using jar 'hdfs:///udf/udf-test.jar'; > INFO : converting to local hdfs:///udf/udf-test.jar > ERROR : Failed to read external resource hdfs:///udf/udf-test.jar > java.lang.RuntimeException: Failed to read external resource > hdfs:///udf/udf-test.jar > at > org.apache.hadoop.hive.ql.session.SessionState.downloadResource(SessionState.java:1384) > at > org.apache.hadoop.hive.ql.session.SessionState.resolveAndDownload(SessionState.java:1340) > at > org.apache.hadoop.hive.ql.session.SessionState.add_resources(SessionState.java:1264) > at > org.apache.hadoop.hive.ql.session.SessionState.add_resources(SessionState.java:1250) > at > org.apache.hadoop.hive.ql.exec.FunctionTask.addFunctionResources(FunctionTask.java:306) > at > org.apache.hadoop.hive.ql.exec.Registry.registerToSessionRegistry(Registry.java:466) > at > org.apache.hadoop.hive.ql.exec.Registry.registerPermanentFunction(Registry.java:206) > at > org.apache.hadoop.hive.ql.exec.FunctionRegistry.registerPermanentFunction(FunctionRegistry.java:1551) > at > org.apache.hadoop.hive.ql.exec.FunctionTask.createPermanentFunction(FunctionTask.java:136) > at > 
org.apache.hadoop.hive.ql.exec.FunctionTask.execute(FunctionTask.java:75) > at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:158) > at > org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:101) > at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1965) > at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1723) > at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1475) > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1283) > at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1278) > at > org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:167) > at > org.apache.hive.service.cli.operation.SQLOperation.access$200(SQLOperation.java:75) > at > org.apache.hive.service.cli.operation.SQLOperation$1$1.run(SQLOperation.java:245) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:422) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1711) > at > org.apache.hive.service.cli.operation.SQLOperation$1.run(SQLOperation.java:258) > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > at java.util.concurrent.FutureTask.run(FutureTask.java:266) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) > at java.lang.Thread.run(Thread.java:745) > Caused by: java.io.FileNotFoundException: File does not exist: > hdfs:/udf/udf-test.jar > at > org.apache.hadoop.hdfs.DistributedFileSystem$25.doCall(DistributedFileSystem.java:1391) > at > org.apache.hadoop.hdfs.DistributedFileSystem$25.doCall(DistributedFileSystem.java:1383) > at > org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) > at > org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1383) > at 
org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:340) > at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:292) > at org.apache.hadoop.fs.FileSystem.copyToLocalFile(FileSystem.java:2034) > at org.apache.hadoop.fs.FileSystem.copyToLocalFile(FileSystem.java:2003) > at org.apache.hadoop.fs.FileSystem.copyToLocalFile(FileSystem.java:1979) > at > org.apache.hadoop.hive.ql.session.SessionState.downloadResource(SessionState.java:1370) > ... 28 more > ERROR : Failed to register default.my_lower using class com.tang.UDFLower > Error: Error while processing statement: FAILED: Execution Error, return code > 1
[jira] [Commented] (HIVE-14115) Custom FetchFormatter is not supported
[ https://issues.apache.org/jira/browse/HIVE-14115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15369169#comment-15369169 ] Ashutosh Chauhan commented on HIVE-14115: - +1 > Custom FetchFormatter is not supported > -- > > Key: HIVE-14115 > URL: https://issues.apache.org/jira/browse/HIVE-14115 > Project: Hive > Issue Type: Bug >Affects Versions: 2.1.0 >Reporter: Ryu Kobayashi >Assignee: Ryu Kobayashi >Priority: Minor > Attachments: HIVE-14115.01.patch > > > The following code supports only the ThriftFormatter and > DefaultFetchFormatter implementations of FetchFormatter; a custom FetchFormatter cannot be used. > {code} > if (SessionState.get().isHiveServerQuery()) { > > conf.set(SerDeUtils.LIST_SINK_OUTPUT_FORMATTER,ThriftFormatter.class.getName()); > } else { > conf.set(SerDeUtils.LIST_SINK_OUTPUT_FORMATTER, > DefaultFetchFormatter.class.getName()); > } > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
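A minimal sketch of the configurable lookup the report asks for, assuming a config-supplied class name. The `FetchFormatter` interface here is a simplified stand-in, not Hive's actual serde types:

```java
public class FormatterLoader {
    interface FetchFormatter {
        String convert(Object row);
    }

    static class DefaultFetchFormatter implements FetchFormatter {
        public String convert(Object row) {
            return String.valueOf(row);
        }
    }

    // Hypothetical lookup: take the formatter class name from
    // configuration instead of hard-coding the two built-ins, and
    // fall back to the default when nothing is configured.
    static FetchFormatter load(String configuredClassName) {
        if (configuredClassName == null || configuredClassName.isEmpty()) {
            return new DefaultFetchFormatter();
        }
        try {
            return (FetchFormatter) Class.forName(configuredClassName)
                    .getDeclaredConstructor().newInstance();
        } catch (ReflectiveOperationException e) {
            throw new RuntimeException(
                    "Cannot load formatter " + configuredClassName, e);
        }
    }
}
```

With this shape, any class on the classpath that implements the interface can be plugged in via configuration rather than requiring a code change for each new formatter.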
[jira] [Updated] (HIVE-14129) Execute move tasks in parallel
[ https://issues.apache.org/jira/browse/HIVE-14129?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashutosh Chauhan updated HIVE-14129: Attachment: HIVE-14129.patch Taking another look, both issues pointed out in HIVE-9665 are no longer a problem, and thus this patch is ready to go in. [~thejas] would you like to review it? > Execute move tasks in parallel > -- > > Key: HIVE-14129 > URL: https://issues.apache.org/jira/browse/HIVE-14129 > Project: Hive > Issue Type: Improvement > Components: Query Processor >Reporter: Ashutosh Chauhan >Assignee: Ashutosh Chauhan > Attachments: HIVE-14129.patch, HIVE-14129.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-14129) Execute move tasks in parallel
[ https://issues.apache.org/jira/browse/HIVE-14129?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashutosh Chauhan updated HIVE-14129: Status: Patch Available (was: Open) > Execute move tasks in parallel > -- > > Key: HIVE-14129 > URL: https://issues.apache.org/jira/browse/HIVE-14129 > Project: Hive > Issue Type: Improvement > Components: Query Processor >Reporter: Ashutosh Chauhan >Assignee: Ashutosh Chauhan > Attachments: HIVE-14129.patch, HIVE-14129.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-14129) Execute move tasks in parallel
[ https://issues.apache.org/jira/browse/HIVE-14129?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashutosh Chauhan updated HIVE-14129: Assignee: Ashutosh Chauhan Status: Open (was: Patch Available) > Execute move tasks in parallel > -- > > Key: HIVE-14129 > URL: https://issues.apache.org/jira/browse/HIVE-14129 > Project: Hive > Issue Type: Improvement > Components: Query Processor >Reporter: Ashutosh Chauhan >Assignee: Ashutosh Chauhan > Attachments: HIVE-14129.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-13901) Hivemetastore add partitions can be slow depending on filesystems
[ https://issues.apache.org/jira/browse/HIVE-13901?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashutosh Chauhan updated HIVE-13901: Resolution: Fixed Fix Version/s: 2.1.1 2.2.0 Status: Resolved (was: Patch Available) Pushed to master & branch-2.1 Thanks, Rajesh! > Hivemetastore add partitions can be slow depending on filesystems > - > > Key: HIVE-13901 > URL: https://issues.apache.org/jira/browse/HIVE-13901 > Project: Hive > Issue Type: Sub-task > Components: Metastore >Reporter: Rajesh Balamohan >Assignee: Rajesh Balamohan >Priority: Minor > Fix For: 2.2.0, 2.1.1 > > Attachments: HIVE-13901.1.patch, HIVE-13901.2.patch, > HIVE-13901.6.patch, HIVE-13901.7.patch, HIVE-13901.8.patch, HIVE-13901.9.patch > > > Depending on FS, creating external tables & adding partitions can be > expensive (e.g msck which adds all partitions). -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-14175) Fix creating buckets without scheme information
[ https://issues.apache.org/jira/browse/HIVE-14175?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashutosh Chauhan updated HIVE-14175: Status: Patch Available (was: Open) > Fix creating buckets without scheme information > --- > > Key: HIVE-14175 > URL: https://issues.apache.org/jira/browse/HIVE-14175 > Project: Hive > Issue Type: Bug > Components: Query Processor >Affects Versions: 2.1.0, 1.2.1 >Reporter: Thomas Poepping >Assignee: Thomas Poepping > Labels: patch > Attachments: HIVE-14175.patch, HIVE-14175.patch > > > If a table is created on a non-default filesystem (i.e. non-hdfs), the empty > files will be created with incorrect scheme information. This patch extracts > the scheme and authority information for the new paths. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-14175) Fix creating buckets without scheme information
[ https://issues.apache.org/jira/browse/HIVE-14175?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashutosh Chauhan updated HIVE-14175: Attachment: HIVE-14175.patch > Fix creating buckets without scheme information > --- > > Key: HIVE-14175 > URL: https://issues.apache.org/jira/browse/HIVE-14175 > Project: Hive > Issue Type: Bug > Components: Query Processor >Affects Versions: 1.2.1, 2.1.0 >Reporter: Thomas Poepping >Assignee: Thomas Poepping > Labels: patch > Attachments: HIVE-14175.patch, HIVE-14175.patch > > > If a table is created on a non-default filesystem (i.e. non-hdfs), the empty > files will be created with incorrect scheme information. This patch extracts > the scheme and authority information for the new paths. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-14175) Fix creating buckets without scheme information
[ https://issues.apache.org/jira/browse/HIVE-14175?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashutosh Chauhan updated HIVE-14175: Status: Open (was: Patch Available) > Fix creating buckets without scheme information > --- > > Key: HIVE-14175 > URL: https://issues.apache.org/jira/browse/HIVE-14175 > Project: Hive > Issue Type: Bug > Components: Query Processor >Affects Versions: 2.1.0, 1.2.1 >Reporter: Thomas Poepping >Assignee: Thomas Poepping > Labels: patch > Attachments: HIVE-14175.patch > > > If a table is created on a non-default filesystem (i.e. non-hdfs), the empty > files will be created with incorrect scheme information. This patch extracts > the scheme and authority information for the new paths. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-14175) Fix creating buckets without scheme information
[ https://issues.apache.org/jira/browse/HIVE-14175?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashutosh Chauhan updated HIVE-14175: Status: Patch Available (was: Open) > Fix creating buckets without scheme information > --- > > Key: HIVE-14175 > URL: https://issues.apache.org/jira/browse/HIVE-14175 > Project: Hive > Issue Type: Bug > Components: Query Processor >Affects Versions: 2.1.0, 1.2.1 >Reporter: Thomas Poepping >Assignee: Thomas Poepping > Labels: patch > Attachments: HIVE-14175.patch > > > If a table is created on a non-default filesystem (i.e. non-hdfs), the empty > files will be created with incorrect scheme information. This patch extracts > the scheme and authority information for the new paths. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-14175) Fix creating buckets without scheme information
[ https://issues.apache.org/jira/browse/HIVE-14175?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ashutosh Chauhan updated HIVE-14175: Status: Open (was: Patch Available) > Fix creating buckets without scheme information > --- > > Key: HIVE-14175 > URL: https://issues.apache.org/jira/browse/HIVE-14175 > Project: Hive > Issue Type: Bug > Components: Query Processor >Affects Versions: 2.1.0, 1.2.1 >Reporter: Thomas Poepping >Assignee: Thomas Poepping > Labels: patch > Attachments: HIVE-14175.patch > > > If a table is created on a non-default filesystem (i.e. non-hdfs), the empty > files will be created with incorrect scheme information. This patch extracts > the scheme and authority information for the new paths. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-13901) Hivemetastore add partitions can be slow depending on filesystems
[ https://issues.apache.org/jira/browse/HIVE-13901?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15369119#comment-15369119 ] Ashutosh Chauhan commented on HIVE-13901: - +1 > Hivemetastore add partitions can be slow depending on filesystems > - > > Key: HIVE-13901 > URL: https://issues.apache.org/jira/browse/HIVE-13901 > Project: Hive > Issue Type: Sub-task > Components: Metastore >Reporter: Rajesh Balamohan >Assignee: Rajesh Balamohan >Priority: Minor > Attachments: HIVE-13901.1.patch, HIVE-13901.2.patch, > HIVE-13901.6.patch, HIVE-13901.7.patch, HIVE-13901.8.patch, HIVE-13901.9.patch > > > Depending on FS, creating external tables & adding partitions can be > expensive (e.g msck which adds all partitions). -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-14173) NPE was thrown after enabling directsql in the middle of session
[ https://issues.apache.org/jira/browse/HIVE-14173?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15369113#comment-15369113 ] Hive QA commented on HIVE-14173: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12816889/HIVE-14173.patch {color:red}ERROR:{color} -1 due to no test(s) being added or modified. {color:red}ERROR:{color} -1 due to 9 failed/errored test(s), 10295 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_acid_globallimit org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_index_auto_mult_tables_compact org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_list_bucket_dml_12 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_list_bucket_dml_13 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_subquery_multiinsert org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_vector_complex_all org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_vector_complex_join org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_acid_globallimit org.apache.hadoop.hive.cli.TestMinimrCliDriver.org.apache.hadoop.hive.cli.TestMinimrCliDriver {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/440/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/440/console Test logs: http://ec2-204-236-174-241.us-west-1.compute.amazonaws.com/logs/PreCommit-HIVE-MASTER-Build-440/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 9 tests failed {noformat} This message is automatically generated. 
ATTACHMENT ID: 12816889 - PreCommit-HIVE-MASTER-Build > NPE was thrown after enabling directsql in the middle of session > > > Key: HIVE-14173 > URL: https://issues.apache.org/jira/browse/HIVE-14173 > Project: Hive > Issue Type: Bug > Components: Metastore >Reporter: Chaoyu Tang >Assignee: Chaoyu Tang > Attachments: HIVE-14173.patch, HIVE-14173.patch, HIVE-14173.patch > > > hive.metastore.try.direct.sql is initially set to false in HMS hive-site.xml, > then changed to true using the set metaconf command in the middle of a session; > running a query then throws an NPE with the following error message: > {code} > 2016-07-06T17:44:41,489 ERROR [pool-5-thread-2]: metastore.RetryingHMSHandler > (RetryingHMSHandler.java:invokeInternal(192)) - > MetaException(message:java.lang.NullPointerException) > at > org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.newMetaException(HiveMetaStore.java:5741) > at > org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.rethrowException(HiveMetaStore.java:4771) > at > org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_partitions_by_expr(HiveMetaStore.java:4754) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:606) > at > org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:140) > at > org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:99) > at com.sun.proxy.$Proxy18.get_partitions_by_expr(Unknown Source) > at > org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$get_partitions_by_expr.getResult(ThriftHiveMetastore.java:12048) > at > org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$get_partitions_by_expr.getResult(ThriftHiveMetastore.java:12032) > at 
org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39) > at > org.apache.hadoop.hive.metastore.TUGIBasedProcessor$1.run(TUGIBasedProcessor.java:110) > at > org.apache.hadoop.hive.metastore.TUGIBasedProcessor$1.run(TUGIBasedProcessor.java:106) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:415) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628) > at > org.apache.hadoop.hive.metastore.TUGIBasedProcessor.process(TUGIBasedProcessor.java:118) > at > org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:286) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) > at >
[jira] [Commented] (HIVE-14170) Beeline IncrementalRows should buffer rows and incrementally re-calculate width if TableOutputFormat is used
[ https://issues.apache.org/jira/browse/HIVE-14170?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15369080#comment-15369080 ] Hive QA commented on HIVE-14170: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12816753/HIVE-14170.1.patch {color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 9 failed/errored test(s), 10296 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_acid_globallimit org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_list_bucket_dml_12 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_list_bucket_dml_13 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_stats_list_bucket org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_subquery_multiinsert org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_vector_complex_all org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_vector_complex_join org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_acid_globallimit org.apache.hadoop.hive.cli.TestMinimrCliDriver.org.apache.hadoop.hive.cli.TestMinimrCliDriver {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/439/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/439/console Test logs: http://ec2-204-236-174-241.us-west-1.compute.amazonaws.com/logs/PreCommit-HIVE-MASTER-Build-439/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 9 tests failed {noformat} This message is automatically generated. 
ATTACHMENT ID: 12816753 - PreCommit-HIVE-MASTER-Build > Beeline IncrementalRows should buffer rows and incrementally re-calculate > width if TableOutputFormat is used > > > Key: HIVE-14170 > URL: https://issues.apache.org/jira/browse/HIVE-14170 > Project: Hive > Issue Type: Sub-task > Components: Beeline >Reporter: Sahil Takiar >Assignee: Sahil Takiar > Attachments: HIVE-14170.1.patch > > > If {{--incremental}} is specified in Beeline, rows are meant to be printed > out immediately. However, if {{TableOutputFormat}} is used with this option > the formatting can look really off. > The reason is that {{IncrementalRows}} does not do a global calculation of > the optimal width size for {{TableOutputFormat}} (it can't because it only > sees one row at a time). The output of {{BufferedRows}} looks much better > because it can do this global calculation. > If {{--incremental}} is used, and {{TableOutputFormat}} is used, the width > should be re-calculated every "x" rows ("x" can be configurable and by > default it can be 1000). -- This message was sent by Atlassian JIRA (v6.3.4#6332)
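The per-batch width calculation the HIVE-14170 description proposes (buffer "x" rows, derive each column's display width from the buffered batch, then print) can be illustrated outside Beeline. The following is a minimal shell/awk sketch of that computation over tab-separated rows; the function name and TSV input are illustrative assumptions, not anything from the Beeline {{IncrementalRows}} code:

```shell
#!/bin/sh
# Compute the maximum display width per column over one buffered batch
# of tab-separated rows, mimicking the "recalculate width every x rows"
# idea from HIVE-14170. Reads TSV on stdin, prints one width per column.
batch_widths() {
  awk -F'\t' '
    # Track the widest value seen so far in each column of the batch.
    { for (i = 1; i <= NF; i++) if (length($i) > w[i]) w[i] = length($i) }
    # Emit the widths space-separated once the whole batch is consumed.
    END { for (i = 1; i <= NF; i++) printf "%d%s", w[i], (i < NF ? " " : "\n") }
  '
}
```

In the proposal these widths would be recomputed once per configurable batch (1000 rows by default), trading a bounded buffer for the globally optimal layout that {{BufferedRows}} gets from seeing every row.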
[jira] [Commented] (HIVE-14137) Hive on Spark throws FileAlreadyExistsException for jobs with multiple empty tables
[ https://issues.apache.org/jira/browse/HIVE-14137?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15369057#comment-15369057 ] Hive QA commented on HIVE-14137: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12816881/HIVE-14137.4.patch {color:red}ERROR:{color} -1 due to build exiting with an error Test results: https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/438/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/438/console Test logs: http://ec2-204-236-174-241.us-west-1.compute.amazonaws.com/logs/PreCommit-HIVE-MASTER-Build-438/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Tests exited with: NonZeroExitCodeException Command 'bash /data/hive-ptest/working/scratch/source-prep.sh' failed with exit status 1 and output '+ [[ -n /usr/java/jdk1.8.0_25 ]] + export JAVA_HOME=/usr/java/jdk1.8.0_25 + JAVA_HOME=/usr/java/jdk1.8.0_25 + export PATH=/usr/java/jdk1.8.0_25/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games + PATH=/usr/java/jdk1.8.0_25/bin/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games + export 'ANT_OPTS=-Xmx1g -XX:MaxPermSize=256m ' + ANT_OPTS='-Xmx1g -XX:MaxPermSize=256m ' + export 'M2_OPTS=-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost -Dhttp.proxyPort=3128' + M2_OPTS='-Xmx1g -XX:MaxPermSize=256m -Dhttp.proxyHost=localhost -Dhttp.proxyPort=3128' + cd /data/hive-ptest/working/ + tee /data/hive-ptest/logs/PreCommit-HIVE-MASTER-Build-438/source-prep.txt + [[ false == \t\r\u\e ]] + mkdir -p maven ivy + [[ git = \s\v\n ]] + [[ git = \g\i\t ]] + [[ -z master ]] + [[ -d apache-github-source-source ]] + [[ ! -d apache-github-source-source/.git ]] + [[ ! 
-d apache-github-source-source ]] + cd apache-github-source-source + git fetch origin From https://github.com/apache/hive 7a91bbf..0506161 master -> origin/master cec61d9..0ba089b branch-2.1 -> origin/branch-2.1 + git reset --hard HEAD HEAD is now at 7a91bbf HIVE-14114 Ensure RecordWriter in streaming API is using the same UserGroupInformation as StreamingConnection (Eugene Koifman, reviewed by Wei Zheng) + git clean -f -d Removing ql/src/test/queries/clientpositive/groupby_grouping_sets_limit.q Removing ql/src/test/results/clientpositive/groupby_grouping_sets_limit.q.out + git checkout master Already on 'master' Your branch is behind 'origin/master' by 1 commit, and can be fast-forwarded. (use "git pull" to update your local branch) + git reset --hard origin/master HEAD is now at 0506161 HIVE-14176: CBO nesting windowing function within each other when merging Project operators (Jesus Camacho Rodriguez, reviewed by Ashutosh Chauhan) + git merge --ff-only origin/master Already up-to-date. + git gc + patchCommandPath=/data/hive-ptest/working/scratch/smart-apply-patch.sh + patchFilePath=/data/hive-ptest/working/scratch/build.patch + [[ -f /data/hive-ptest/working/scratch/build.patch ]] + chmod +x /data/hive-ptest/working/scratch/smart-apply-patch.sh + /data/hive-ptest/working/scratch/smart-apply-patch.sh /data/hive-ptest/working/scratch/build.patch The patch does not appear to apply with p0, p1, or p2 + exit 1 ' {noformat} This message is automatically generated. 
ATTACHMENT ID: 12816881 - PreCommit-HIVE-MASTER-Build > Hive on Spark throws FileAlreadyExistsException for jobs with multiple empty > tables > --- > > Key: HIVE-14137 > URL: https://issues.apache.org/jira/browse/HIVE-14137 > Project: Hive > Issue Type: Bug > Components: Spark >Reporter: Sahil Takiar >Assignee: Sahil Takiar > Attachments: HIVE-14137.1.patch, HIVE-14137.2.patch, > HIVE-14137.3.patch, HIVE-14137.4.patch, HIVE-14137.patch > > > The following queries: > {code} > -- Setup > drop table if exists empty1; > create table empty1 (col1 bigint) stored as parquet tblproperties > ('parquet.compress'='snappy'); > drop table if exists empty2; > create table empty2 (col1 bigint, col2 bigint) stored as parquet > tblproperties ('parquet.compress'='snappy'); > drop table if exists empty3; > create table empty3 (col1 bigint) stored as parquet tblproperties > ('parquet.compress'='snappy'); > -- All empty HDFS directories. > -- Fails with [08S01]: Error while processing statement: FAILED: Execution > Error, return code 3 from org.apache.hadoop.hive.ql.exec.spark.SparkTask. > select empty1.col1 > from empty1 > inner join empty2 > on empty2.col1 = empty1.col1 > inner join empty3 > on empty3.col1 = empty2.col2; > -- Two empty HDFS directories. > -- Create an empty file in HDFS. > insert into empty1 select * from empty1 where false; > -- Same query fails with [08S01]: Error while processing statement: FAILED: > Execution
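Both failed QA runs above end with "The patch does not appear to apply with p0, p1, or p2". A "smart apply" wrapper typically probes strip levels with a dry run before applying for real; the sketch below is an assumption about that general technique, not the contents of the actual {{smart-apply-patch.sh}}, which are not shown in these logs:

```shell
#!/bin/sh
# Try strip levels 0..2 with a dry run; apply with the first level
# whose dry run succeeds. Mirrors the p0/p1/p2 probing implied by the
# ptest failure message (illustrative, not the real smart-apply-patch.sh).
smart_apply() {
  patchfile=$1
  for p in 0 1 2; do
    # --dry-run checks applicability without touching files;
    # -f suppresses prompts so a bad level fails fast.
    if patch -p"$p" --dry-run -s -f < "$patchfile" >/dev/null 2>&1; then
      patch -p"$p" -s -f < "$patchfile"
      return 0
    fi
  done
  echo "The patch does not appear to apply with p0, p1, or p2" >&2
  return 1
}
```

The probing explains why a patch generated against a stale master (as in the fetch/reset sequence above) fails at every level: no strip depth can reconcile hunks against files that have since changed.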
[jira] [Commented] (HIVE-14184) Adding test for limit pushdown in presence of grouping sets
[ https://issues.apache.org/jira/browse/HIVE-14184?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15369056#comment-15369056 ] Hive QA commented on HIVE-14184: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12816613/HIVE-14184.patch {color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 9 failed/errored test(s), 10289 tests executed *Failed tests:* {noformat} TestMinimrCliDriver-leftsemijoin_mr.q-infer_bucket_sort_map_operators.q-bucket4.q-and-1-more - did not produce a TEST-*.xml file org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_list_bucket_dml_12 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_stats_list_bucket org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_subquery_multiinsert org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_vector_complex_all org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_vector_complex_join org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_acid_globallimit org.apache.hadoop.hive.cli.TestMinimrCliDriver.org.apache.hadoop.hive.cli.TestMinimrCliDriver org.apache.hive.hcatalog.hbase.TestPigHBaseStorageHandler.org.apache.hive.hcatalog.hbase.TestPigHBaseStorageHandler {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/437/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/437/console Test logs: http://ec2-204-236-174-241.us-west-1.compute.amazonaws.com/logs/PreCommit-HIVE-MASTER-Build-437/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 9 tests failed {noformat} This message is automatically generated. 
ATTACHMENT ID: 12816613 - PreCommit-HIVE-MASTER-Build > Adding test for limit pushdown in presence of grouping sets > --- > > Key: HIVE-14184 > URL: https://issues.apache.org/jira/browse/HIVE-14184 > Project: Hive > Issue Type: Bug >Affects Versions: 2.1.0, 2.2.0 >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez > Attachments: HIVE-14184.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-14176) CBO nesting windowing function within each other when merging Project operators
[ https://issues.apache.org/jira/browse/HIVE-14176?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jesus Camacho Rodriguez updated HIVE-14176: --- Resolution: Fixed Fix Version/s: 2.1.1 2.2.0 Status: Resolved (was: Patch Available) Pushed to master, branch-2.1. Thanks for the review [~ashutoshc]! > CBO nesting windowing function within each other when merging Project > operators > --- > > Key: HIVE-14176 > URL: https://issues.apache.org/jira/browse/HIVE-14176 > Project: Hive > Issue Type: Bug > Components: CBO >Affects Versions: 2.1.0, 2.2.0 >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez > Fix For: 2.2.0, 2.1.1 > > Attachments: HIVE-14176.patch > > > The translation into a physical plan does not support this way of expressing > windowing functions. Instead, we will not merge the Project operators when we > find this pattern. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-14196) Disable LLAP IO when complex types are involved
[ https://issues.apache.org/jira/browse/HIVE-14196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Prasanth Jayachandran updated HIVE-14196: - Summary: Disable LLAP IO when complex types are involved (was: Exclude LLAP IO complex types test) > Disable LLAP IO when complex types are involved > --- > > Key: HIVE-14196 > URL: https://issues.apache.org/jira/browse/HIVE-14196 > Project: Hive > Issue Type: Sub-task >Affects Versions: 2.1.0, 2.2.0 >Reporter: Prasanth Jayachandran >Assignee: Prasanth Jayachandran > Attachments: HIVE-14196.1.patch > > > Let's exclude vector_complex_* tests added for llap which is currently broken > and fails in all test runs. We can re-enable it with HIVE-14089 patch. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-14196) Exclude LLAP IO complex types test
[ https://issues.apache.org/jira/browse/HIVE-14196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Prasanth Jayachandran updated HIVE-14196: - Attachment: HIVE-14196.1.patch [~sershe] Can you please take a look? > Exclude LLAP IO complex types test > -- > > Key: HIVE-14196 > URL: https://issues.apache.org/jira/browse/HIVE-14196 > Project: Hive > Issue Type: Sub-task >Affects Versions: 2.1.0, 2.2.0 >Reporter: Prasanth Jayachandran >Assignee: Prasanth Jayachandran > Attachments: HIVE-14196.1.patch > > > Let's exclude vector_complex_* tests added for llap which is currently broken > and fails in all test runs. We can re-enable it with HIVE-14089 patch. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-14196) Exclude LLAP IO complex types test
[ https://issues.apache.org/jira/browse/HIVE-14196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Prasanth Jayachandran updated HIVE-14196: - Status: Patch Available (was: Open) > Exclude LLAP IO complex types test > -- > > Key: HIVE-14196 > URL: https://issues.apache.org/jira/browse/HIVE-14196 > Project: Hive > Issue Type: Sub-task >Affects Versions: 2.1.0, 2.2.0 >Reporter: Prasanth Jayachandran >Assignee: Prasanth Jayachandran > Attachments: HIVE-14196.1.patch > > > Let's exclude vector_complex_* tests added for llap which is currently broken > and fails in all test runs. We can re-enable it with HIVE-14089 patch. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-14176) CBO nesting windowing function within each other when merging Project operators
[ https://issues.apache.org/jira/browse/HIVE-14176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15369015#comment-15369015 ] Hive QA commented on HIVE-14176: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12816526/HIVE-14176.patch {color:green}SUCCESS:{color} +1 due to 1 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 5 failed/errored test(s), 10295 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_subquery_multiinsert org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_vector_complex_all org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_vector_complex_join org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_acid_globallimit org.apache.hadoop.hive.cli.TestMinimrCliDriver.org.apache.hadoop.hive.cli.TestMinimrCliDriver {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/436/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/436/console Test logs: http://ec2-204-236-174-241.us-west-1.compute.amazonaws.com/logs/PreCommit-HIVE-MASTER-Build-436/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 5 tests failed {noformat} This message is automatically generated. 
ATTACHMENT ID: 12816526 - PreCommit-HIVE-MASTER-Build > CBO nesting windowing function within each other when merging Project > operators > --- > > Key: HIVE-14176 > URL: https://issues.apache.org/jira/browse/HIVE-14176 > Project: Hive > Issue Type: Bug > Components: CBO >Affects Versions: 2.1.0, 2.2.0 >Reporter: Jesus Camacho Rodriguez >Assignee: Jesus Camacho Rodriguez > Attachments: HIVE-14176.patch > > > The translation into a physical plan does not support this way of expressing > windowing functions. Instead, we will not merge the Project operators when we > find this pattern. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Assigned] (HIVE-14143) RawDataSize of RCFile is zero after analyze
[ https://issues.apache.org/jira/browse/HIVE-14143?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nemon Lou reassigned HIVE-14143: Assignee: Nemon Lou (was: Abhishek) > RawDataSize of RCFile is zero after analyze > > > Key: HIVE-14143 > URL: https://issues.apache.org/jira/browse/HIVE-14143 > Project: Hive > Issue Type: Bug > Components: Statistics >Affects Versions: 1.2.1, 2.1.0 >Reporter: Nemon Lou >Assignee: Nemon Lou >Priority: Minor > Attachments: HIVE-14143.1.patch, HIVE-14143.patch > > > After running the following analyze command ,rawDataSize becomes zero for > rcfile tables. > {noformat} > analyze table RCFILE_TABLE compute statistics ; > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-14158) deal with derived column names
[ https://issues.apache.org/jira/browse/HIVE-14158?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15368979#comment-15368979 ] Hive QA commented on HIVE-14158: Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12816879/HIVE-14158.03.patch {color:green}SUCCESS:{color} +1 due to 2 test(s) being added or modified. {color:red}ERROR:{color} -1 due to 49 failed/errored test(s), 10281 tests executed *Failed tests:* {noformat} TestMiniTezCliDriver-tez_union_group_by.q-schema_evol_text_nonvec_mapwork_part_all_primitive.q-vector_left_outer_join2.q-and-12-more - did not produce a TEST-*.xml file org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_acid_globallimit org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_alter_view_as_select org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_authorization_view_2 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_authorization_view_3 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_authorization_view_4 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_authorization_view_disable_cbo_2 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_authorization_view_disable_cbo_3 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_authorization_view_disable_cbo_4 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_cbo_rp_unionDistinct_2 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_cbo_rp_views org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_cbo_views org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_create_or_replace_view org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_create_view_defaultformats org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_create_view_partitioned org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_cteViews org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_cte_2 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_cte_4 
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_explain_ddl org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_explain_dependency org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_lateral_view_noalias org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_list_bucket_dml_12 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_masking_2 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_selectDistinctStar org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_show_create_table_view org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_special_character_in_tabnames_1 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_stats_list_bucket org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_subquery_multiinsert org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_unionDistinct_2 org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_union_top_level org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_vectorized_ptf org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_cte_2 org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_cte_4 org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_vector_complex_all org.apache.hadoop.hive.cli.TestMiniLlapCliDriver.testCliDriver_vector_complex_join org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_acid_globallimit org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_cbo_views org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_cte_2 org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_cte_4 org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_explainuser_1 org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_selectDistinctStar org.apache.hadoop.hive.cli.TestMiniTezCliDriver.testCliDriver_unionDistinct_2 org.apache.hadoop.hive.cli.TestMinimrCliDriver.org.apache.hadoop.hive.cli.TestMinimrCliDriver org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_alter_view_as_select_with_partition 
org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_alter_view_failure6 org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_invalidate_view1 org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_union_top_level org.apache.hadoop.hive.ql.parse.TestColumnAccess.testJoinView1AndTable2 org.apache.hadoop.hive.ql.security.authorization.plugin.TestHiveAuthorizerCheckInvocation.testInputSomeColumnsUsedJoin {noformat} Test results: https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/435/testReport Console output: https://builds.apache.org/job/PreCommit-HIVE-MASTER-Build/435/console Test logs: http://ec2-204-236-174-241.us-west-1.compute.amazonaws.com/logs/PreCommit-HIVE-MASTER-Build-435/ Messages: {noformat} Executing org.apache.hive.ptest.execution.TestCheckPhase Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: