[jira] [Created] (HIVE-19057) Query result caching cannot be disabled by client
Deepesh Khandelwal created HIVE-19057: - Summary: Query result caching cannot be disabled by client Key: HIVE-19057 URL: https://issues.apache.org/jira/browse/HIVE-19057 Project: Hive Issue Type: Bug Components: Query Planning Reporter: Deepesh Khandelwal HIVE-18513 introduced query results caching along with some toggles to control enabling/disabling it. We should whitelist the following configs so that the end user can dynamically control them in their session. {noformat} hive.query.results.cache.enabled hive.query.results.cache.wait.for.pending.results {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
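HiveServer2 decides whether a client may `set` a parameter by matching it against a whitelist pattern. A minimal sketch of that kind of check; the pattern below is illustrative only, not Hive's actual default whitelist:

```java
import java.util.regex.Pattern;

public class ConfWhitelistSketch {
    // Hypothetical whitelist: allow the two result-cache toggles plus
    // anything under hive.exec. (NOT Hive's real default pattern).
    static final Pattern WHITELIST = Pattern.compile(
        "hive\\.query\\.results\\.cache\\.enabled"
        + "|hive\\.query\\.results\\.cache\\.wait\\.for\\.pending\\.results"
        + "|hive\\.exec\\..*");

    // Returns true when a session-level "set param=value" would be allowed.
    static boolean isSettable(String param) {
        return WHITELIST.matcher(param).matches();
    }

    public static void main(String[] args) {
        System.out.println(isSettable("hive.query.results.cache.enabled"));
        System.out.println(isSettable("hive.security.authorization.manager"));
    }
}
```

With a pattern like this in place, a client can toggle the cache per session without it being rejected by the authorizer.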
[jira] [Created] (HIVE-18631) Hive metastore schema initialization failing on mysql
Deepesh Khandelwal created HIVE-18631: - Summary: Hive metastore schema initialization failing on mysql Key: HIVE-18631 URL: https://issues.apache.org/jira/browse/HIVE-18631 Project: Hive Issue Type: Bug Components: Metastore Affects Versions: 3.0.0 Reporter: Deepesh Khandelwal Assignee: Deepesh Khandelwal Hive metastore schema on mysql is broken after the commit for HIVE-18546. The following error is seen during schema initialization: {noformat} 0: jdbc:mysql://localhost.localdomain> CREATE TABLE IF NOT EXISTS `TBLS` ( `TBL_ID` bigint(20) NOT NULL, `CREATE_TIME` int(11) NOT NULL, `DB_ID` bigint(20) DEFAULT NULL, `LAST_ACCESS_TIME` int(11) NOT NULL, `OWNER` varchar(767) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT NULL, `RETENTION` int(11) NOT NULL, `SD_ID` bigint(20) DEFAULT NULL, `TBL_NAME` varchar(256) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT NULL, `TBL_TYPE` varchar(128) CHARACTER SET latin1 COLLATE latin1_bin DEFAULT NULL, `VIEW_EXPANDED_TEXT` mediumtext, `VIEW_ORIGINAL_TEXT` mediumtext, `IS_REWRITE_ENABLED` bit(1) NOT NULL DEFAULT 0 PRIMARY KEY (`TBL_ID`), UNIQUE KEY `UNIQUETABLE` (`TBL_NAME`,`DB_ID`), KEY `TBLS_N50` (`SD_ID`), KEY `TBLS_N49` (`DB_ID`), CONSTRAINT `TBLS_FK1` FOREIGN KEY (`SD_ID`) REFERENCES `SDS` (`SD_ID`), CONSTRAINT `TBLS_FK2` FOREIGN KEY (`DB_ID`) REFERENCES `DBS` (`DB_ID`) ) ENGINE=InnoDB DEFAULT CHARSET=latin1 Error: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '(`TBL_ID`), UNIQUE KEY `UNIQUETABLE` (`TBL_NAME`,`DB_ID`), KEY `TBLS_N50` (`SD_I' at line 1 (state=42000,code=1064) Closing: 0: jdbc:mysql://localhost.localdomain/hivedb?createDatabaseIfNotExist=true org.apache.hadoop.hive.metastore.HiveMetaException: Schema initialization FAILED! Metastore state would be inconsistent !! Underlying cause: java.io.IOException : Schema script failed, errorcode 2 org.apache.hadoop.hive.metastore.HiveMetaException: Schema initialization FAILED! 
Metastore state would be inconsistent !! at org.apache.hive.beeline.HiveSchemaTool.doInit(HiveSchemaTool.java:586) at org.apache.hive.beeline.HiveSchemaTool.doInit(HiveSchemaTool.java:559) at org.apache.hive.beeline.HiveSchemaTool.main(HiveSchemaTool.java:1183) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.util.RunJar.run(RunJar.java:304) at org.apache.hadoop.util.RunJar.main(RunJar.java:218) Caused by: java.io.IOException: Schema script failed, errorcode 2 at org.apache.hive.beeline.HiveSchemaTool.runBeeLine(HiveSchemaTool.java:957) at org.apache.hive.beeline.HiveSchemaTool.runBeeLine(HiveSchemaTool.java:935) at org.apache.hive.beeline.HiveSchemaTool.doInit(HiveSchemaTool.java:582) ... 8 more *** schemaTool failed ***{noformat} In the file metastore/scripts/upgrade/mysql/hive-schema-3.0.0.mysql.sql one of the column definitions in the `TBLS` table is missing a comma at the end {code:java} `IS_REWRITE_ENABLED` bit(1) NOT NULL DEFAULT 0{code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (HIVE-18579) Changes from HIVE-18495 introduced import paths from shaded jars
Deepesh Khandelwal created HIVE-18579: - Summary: Changes from HIVE-18495 introduced import paths from shaded jars Key: HIVE-18579 URL: https://issues.apache.org/jira/browse/HIVE-18579 Project: Hive Issue Type: Bug Components: Hive Affects Versions: 3.0.0 Reporter: Deepesh Khandelwal Assignee: Zoltan Haindrich When compiling the latest code after HIVE-18495, the following issue is seen: {noformat} - [ERROR] COMPILATION ERROR : - [ERROR] /grid/0/jenkins/workspace/Zuul_HDP_Build_Job/build-support/SOURCES/hive/ql/src/test/org/apache/hive/testutils/HiveTestEnvSetup.java:[29,64] package org.apache.hadoop.hbase.shaded.com.google.common.collect does not exist [ERROR] /grid/0/jenkins/workspace/Zuul_HDP_Build_Job/build-support/SOURCES/hive/ql/src/test/org/apache/hive/testutils/MiniZooKeeperCluster.java:[43,68] package org.apache.hadoop.hbase.shaded.com.google.common.annotations does not exist [ERROR] /grid/0/jenkins/workspace/Zuul_HDP_Build_Job/build-support/SOURCES/hive/ql/src/test/org/apache/hive/testutils/MiniZooKeeperCluster.java:[100,4] cannot find symbol symbol: class VisibleForTesting location: class org.apache.hive.testutils.MiniZooKeeperCluster [INFO] 3 errors {noformat} It seems org.apache.hadoop.hbase.shaded.com.google.* is being used; I am guessing the idea was to use com.google.*. Not sure why we didn't see this failing in the Apache Hive build system. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (HIVE-18465) Hive metastore schema initialization failing on postgres
Deepesh Khandelwal created HIVE-18465: - Summary: Hive metastore schema initialization failing on postgres Key: HIVE-18465 URL: https://issues.apache.org/jira/browse/HIVE-18465 Project: Hive Issue Type: Bug Components: Metastore Affects Versions: 3.0.0 Reporter: Deepesh Khandelwal Assignee: Deepesh Khandelwal Hive metastore schema on postgres is broken after the commit for HIVE-14498. The following error is seen during schema initialization: {noformat} 0: jdbc:postgresql://localhost.localdomain:54> ALTER TABLE ONLY "MV_CREATION_METADATA" ADD CONSTRAINT "MV_CREATION_METADATA_FK" FOREIGN KEY ("TBL_ID") REFERENCES "TBLS"("TBL_ID") DEFERRABLE Error: ERROR: there is no unique constraint matching given keys for referenced table "TBLS" (state=42830,code=0) Closing: 0: jdbc:postgresql://localhost.localdomain:5432/hive org.apache.hadoop.hive.metastore.HiveMetaException: Schema initialization FAILED! Metastore state would be inconsistent !! Underlying cause: java.io.IOException : Schema script failed, errorcode 2 org.apache.hadoop.hive.metastore.HiveMetaException: Schema initialization FAILED! Metastore state would be inconsistent !! 
at org.apache.hive.beeline.HiveSchemaTool.doInit(HiveSchemaTool.java:586) at org.apache.hive.beeline.HiveSchemaTool.doInit(HiveSchemaTool.java:559) at org.apache.hive.beeline.HiveSchemaTool.main(HiveSchemaTool.java:1183) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.util.RunJar.run(RunJar.java:239) at org.apache.hadoop.util.RunJar.main(RunJar.java:153) Caused by: java.io.IOException: Schema script failed, errorcode 2 at org.apache.hive.beeline.HiveSchemaTool.runBeeLine(HiveSchemaTool.java:957) at org.apache.hive.beeline.HiveSchemaTool.runBeeLine(HiveSchemaTool.java:935) at org.apache.hive.beeline.HiveSchemaTool.doInit(HiveSchemaTool.java:582) ... 8 more *** schemaTool failed ***{noformat} In the file metastore/scripts/upgrade/postgres/hive-schema-3.0.0.postgres.sql, the statement {noformat} ALTER TABLE ONLY "MV_CREATION_METADATA" ADD CONSTRAINT "MV_CREATION_METADATA_FK" FOREIGN KEY ("TBL_ID") REFERENCES "TBLS"("TBL_ID") DEFERRABLE;{noformat} appears before the definition of the unique constraints for TBLS, which is what causes the failure. -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Created] (HIVE-18220) Workload Management tables have broken constraints defined on postgres schema
Deepesh Khandelwal created HIVE-18220: - Summary: Workload Management tables have broken constraints defined on postgres schema Key: HIVE-18220 URL: https://issues.apache.org/jira/browse/HIVE-18220 Project: Hive Issue Type: Bug Components: Metastore Reporter: Deepesh Khandelwal Assignee: Deepesh Khandelwal Priority: Blocker Schema initialization on Postgres fails with the following error: {noformat} 0: jdbc:postgresql://localhost.localdomain:54> ALTER TABLE ONLY "WM_POOL" ADD CONSTRAINT "UNIQUE_WM_RESOURCEPLAN" UNIQUE ("NAME") Error: ERROR: column "NAME" named in key does not exist (state=42703,code=0) Closing: 0: jdbc:postgresql://localhost.localdomain:5432/hive org.apache.hadoop.hive.metastore.HiveMetaException: Schema initialization FAILED! Metastore state would be inconsistent !! Underlying cause: java.io.IOException : Schema script failed, errorcode 2 org.apache.hadoop.hive.metastore.HiveMetaException: Schema initialization FAILED! Metastore state would be inconsistent !! at org.apache.hive.beeline.HiveSchemaTool.doInit(HiveSchemaTool.java:586) at org.apache.hive.beeline.HiveSchemaTool.doInit(HiveSchemaTool.java:559) at org.apache.hive.beeline.HiveSchemaTool.main(HiveSchemaTool.java:1183) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.apache.hadoop.util.RunJar.run(RunJar.java:239) at org.apache.hadoop.util.RunJar.main(RunJar.java:153) Caused by: java.io.IOException: Schema script failed, errorcode 2 at org.apache.hive.beeline.HiveSchemaTool.runBeeLine(HiveSchemaTool.java:957) at org.apache.hive.beeline.HiveSchemaTool.runBeeLine(HiveSchemaTool.java:935) at org.apache.hive.beeline.HiveSchemaTool.doInit(HiveSchemaTool.java:582) ... 8 more {noformat} It is due to a couple of incorrect constraint definitions in the schema. 
-- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Created] (HIVE-18198) TablePropertyEnrichmentOptimizer.java is missing the Apache license header
Deepesh Khandelwal created HIVE-18198: - Summary: TablePropertyEnrichmentOptimizer.java is missing the Apache license header Key: HIVE-18198 URL: https://issues.apache.org/jira/browse/HIVE-18198 Project: Hive Issue Type: Bug Reporter: Deepesh Khandelwal Assignee: Deepesh Khandelwal This causes warnings in the yetus check: {quote} Lines that start with ? in the ASF License report indicate files that do not have an Apache license header: !? /data/hiveptest/working/yetus/ql/src/java/org/apache/hadoop/hive/ql/optimizer/TablePropertyEnrichmentOptimizer.java {quote} -- This message was sent by Atlassian JIRA (v6.4.14#64029)
[jira] [Created] (HIVE-18195) Hive schema broken on postgres
Deepesh Khandelwal created HIVE-18195: - Summary: Hive schema broken on postgres Key: HIVE-18195 URL: https://issues.apache.org/jira/browse/HIVE-18195 Project: Hive Issue Type: Bug Components: Metastore Affects Versions: 3.0.0 Reporter: Deepesh Khandelwal Assignee: Deepesh Khandelwal Priority: Blocker Hive metastore schema on postgres is broken after the commit for HIVE-17954. Basically, the file metastore/scripts/upgrade/postgres/hive-schema-3.0.0.postgres.sql incorrectly defines the WM_POOL column ALLOC_FRACTION with the DOUBLE data type; on postgres it should be double precision. {noformat} CREATE TABLE "WM_POOL" ( "POOL_ID" bigint NOT NULL, "RP_ID" bigint NOT NULL, "PATH" character varying(1024) NOT NULL, "ALLOC_FRACTION" DOUBLE, "QUERY_PARALLELISM" integer, "SCHEDULING_POLICY" character varying(1024) ); {noformat} -- This message was sent by Atlassian JIRA (v6.4.14#64029)
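Postgres has no bare DOUBLE type, which is why the same logical column needs different DDL text per backend. The per-dialect type map below is an illustrative sketch of that idea only; the map contents and class are hypothetical, not Hive's actual schema tooling:

```java
import java.util.Map;

public class DialectTypeMap {
    // Hypothetical mapping of Hive's logical double column type to the
    // DDL spelling each metastore backend accepts.
    static final Map<String, String> DOUBLE_TYPE = Map.of(
        "mysql", "double",
        "oracle", "binary_double",
        "postgres", "double precision", // bare DOUBLE is rejected by postgres
        "derby", "double");

    static String ddlType(String dialect) {
        String t = DOUBLE_TYPE.get(dialect);
        if (t == null) {
            throw new IllegalArgumentException("unknown dialect: " + dialect);
        }
        return t;
    }

    public static void main(String[] args) {
        System.out.println("\"ALLOC_FRACTION\" " + ddlType("postgres"));
    }
}
```

Keeping one such map per logical type would make this class of per-backend schema breakage harder to introduce.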
[jira] [Created] (HIVE-16234) Add support for quarter in trunc udf
Deepesh Khandelwal created HIVE-16234: - Summary: Add support for quarter in trunc udf Key: HIVE-16234 URL: https://issues.apache.org/jira/browse/HIVE-16234 Project: Hive Issue Type: Improvement Components: UDF Reporter: Deepesh Khandelwal Assignee: Deepesh Khandelwal Hive has a Date function trunc(string date, string format) that returns the date truncated to the unit specified by the format. Supported formats: MONTH/MON/MM, YEAR/YYYY/YY. The goal here is to extend support to QUARTER/Q. Example: SELECT trunc('2017-03-15', 'Q'); returns '2017-01-01'. SELECT trunc('2017-12-31', 'Q'); returns '2017-10-01'. -- This message was sent by Atlassian JIRA (v6.3.15#6346)
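The quarter truncation asked for here reduces to mapping a month to the first month of its quarter and resetting the day to 1. A self-contained sketch of that computation (the class and method are illustrative, not the actual UDF implementation):

```java
import java.time.LocalDate;

public class TruncQuarter {
    // Truncate a date to the first day of its quarter, as proposed for
    // trunc(date, 'Q'): months 1-3 -> month 1, 4-6 -> 4, 7-9 -> 7, 10-12 -> 10.
    static LocalDate truncToQuarter(LocalDate d) {
        int firstMonthOfQuarter = ((d.getMonthValue() - 1) / 3) * 3 + 1;
        return LocalDate.of(d.getYear(), firstMonthOfQuarter, 1);
    }

    public static void main(String[] args) {
        System.out.println(truncToQuarter(LocalDate.parse("2017-03-15"))); // 2017-01-01
        System.out.println(truncToQuarter(LocalDate.parse("2017-12-31"))); // 2017-10-01
    }
}
```

Both examples from the issue description come out as expected: mid-March truncates to January 1 and New Year's Eve truncates to October 1.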
[jira] [Created] (HIVE-15220) WebHCat test not capturing end time of test accurately
Deepesh Khandelwal created HIVE-15220: - Summary: WebHCat test not capturing end time of test accurately Key: HIVE-15220 URL: https://issues.apache.org/jira/browse/HIVE-15220 Project: Hive Issue Type: Bug Components: Tests Reporter: Deepesh Khandelwal Assignee: Deepesh Khandelwal Priority: Trivial The WebHCat e2e test suite prints a message when ending a test run: {noformat} Ending test at 1479264720 {noformat} Currently it does not capture the end time correctly. NO PRECOMMIT TESTS -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HIVE-14688) Hive drop call fails in presence of TDE
Deepesh Khandelwal created HIVE-14688: - Summary: Hive drop call fails in presence of TDE Key: HIVE-14688 URL: https://issues.apache.org/jira/browse/HIVE-14688 Project: Hive Issue Type: Bug Components: Security Affects Versions: 2.0.0, 1.2.1 Reporter: Deepesh Khandelwal In Hadoop 2.8.0 TDE trash collection was fixed through HDFS-8831. This enables us to make drop table calls for Hive managed tables where the Hive metastore warehouse directory is in an encrypted zone. However, even with the feature in HDFS, Hive drop table currently fails: {noformat} $ hdfs crypto -listZones /apps/hive/warehouse key2 $ hdfs dfs -ls /apps/hive/warehouse Found 1 items drwxrwxrwt - hdfs hdfs 0 2016-09-01 02:54 /apps/hive/warehouse/.Trash hive> create table abc(a string, b int); OK Time taken: 5.538 seconds hive> dfs -ls /apps/hive/warehouse; Found 2 items drwxrwxrwt - hdfs hdfs 0 2016-09-01 02:54 /apps/hive/warehouse/.Trash drwxrwxrwx - deepesh hdfs 0 2016-09-01 17:15 /apps/hive/warehouse/abc hive> drop table if exists abc; FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. MetaException(message:Unable to drop default.abc because it is in an encryption zone and trash is enabled. Use PURGE option to skip trash.) {noformat} The problem lies here: {code:title=metastore/src/java/org/apache/hadoop/hive/metastore/HiveMetaStore.java} private void checkTrashPurgeCombination(Path pathToData, String objectName, boolean ifPurge) ... if (trashEnabled) { try { HadoopShims.HdfsEncryptionShim shim = ShimLoader.getHadoopShims().createHdfsEncryptionShim(FileSystem.get(hiveConf), hiveConf); if (shim.isPathEncrypted(pathToData)) { throw new MetaException("Unable to drop " + objectName + " because it is in an encryption zone" + " and trash is enabled. 
Use PURGE option to skip trash."); } } catch (IOException ex) { MetaException e = new MetaException(ex.getMessage()); e.initCause(ex); throw e; } } {code} As we can see that we are making an assumption that delete wouldn't be successful in encrypted zone. We need to modify this logic. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HIVE-14292) ACID table creation fails on mysql with MySQLIntegrityConstraintViolationException
Deepesh Khandelwal created HIVE-14292: - Summary: ACID table creation fails on mysql with MySQLIntegrityConstraintViolationException Key: HIVE-14292 URL: https://issues.apache.org/jira/browse/HIVE-14292 Project: Hive Issue Type: Bug Components: Transactions Affects Versions: 2.1.0, 1.2.1 Environment: MySQL Reporter: Deepesh Khandelwal Assignee: Eugene Koifman While creating an ACID table, I ran into the following error: {noformat} >>> create table acidcount1 (id int) clustered by (id) into 2 buckets stored as orc tblproperties('transactional'='true'); INFO : Compiling command(queryId=hive_20160719105944_bfe65377-59fa-4e17-941e-1f86b8daca15): create table acidcount1 (id int) clustered by (id) into 2 buckets stored as orc tblproperties('transactional'='true') INFO : Semantic Analysis Completed INFO : Returning Hive schema: Schema(fieldSchemas:null, properties:null) INFO : Completed compiling command(queryId=hive_20160719105944_bfe65377-59fa-4e17-941e-1f86b8daca15); Time taken: 0.111 seconds Error: Error running query: java.lang.RuntimeException: Unable to lock 'CheckLock' due to: Duplicate entry 'CheckLock-0' for key 'PRIMARY' (SQLState=23000, ErrorCode=1062) (state=,code=0) Aborting command set because "force" is false and command failed: "create table acidcount1 (id int) clustered by (id) into 2 buckets stored as orc tblproperties('transactional'='true');" {noformat} I saw the following detailed stack in the server log: {noformat} 2016-07-19T10:59:46,213 ERROR [HiveServer2-Background-Pool: Thread-463]: metastore.RetryingHMSHandler (RetryingHMSHandler.java:invokeInternal(196)) - java.lang.RuntimeException: Unable to lock 'CheckLock' due to: Duplicate entry 'CheckLock-0' for key 'PRIMARY' (SQLState=23000, ErrorCode=1062) at org.apache.hadoop.hive.metastore.txn.TxnHandler.acquireLock(TxnHandler.java:3235) at org.apache.hadoop.hive.metastore.txn.TxnHandler.checkLock(TxnHandler.java:2309) at 
org.apache.hadoop.hive.metastore.txn.TxnHandler.checkLockWithRetry(TxnHandler.java:1012) at org.apache.hadoop.hive.metastore.txn.TxnHandler.lock(TxnHandler.java:784) at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.lock(HiveMetaStore.java:5941) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:140) at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:99) at com.sun.proxy.$Proxy26.lock(Unknown Source) at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.lock(HiveMetaStoreClient.java:2109) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:154) at com.sun.proxy.$Proxy28.lock(Unknown Source) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at org.apache.hadoop.hive.metastore.HiveMetaStoreClient$SynchronizedHandler.invoke(HiveMetaStoreClient.java:2259) at com.sun.proxy.$Proxy28.lock(Unknown Source) at org.apache.hadoop.hive.ql.lockmgr.DbTxnManager$SynchronizedMetaStoreClient.lock(DbTxnManager.java:740) at org.apache.hadoop.hive.ql.lockmgr.DbLockManager.lock(DbLockManager.java:103) at 
org.apache.hadoop.hive.ql.lockmgr.DbTxnManager.acquireLocks(DbTxnManager.java:341) at org.apache.hadoop.hive.ql.lockmgr.DbTxnManager.acquireLocksWithHeartbeatDelay(DbTxnManager.java:357) at org.apache.hadoop.hive.ql.lockmgr.DbTxnManager.acquireLocks(DbTxnManager.java:167) at org.apache.hadoop.hive.ql.Driver.acquireLocksAndOpenTxn(Driver.java:980) at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1316) at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1090) at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1083) at org.apache.hive.service.cli.operation.SQLOperation.runQuery(SQLOperation.java:242) at
[jira] [Created] (HIVE-14179) Too many delta files causes select queries on the table to fail with OOM
Deepesh Khandelwal created HIVE-14179: - Summary: Too many delta files causes select queries on the table to fail with OOM Key: HIVE-14179 URL: https://issues.apache.org/jira/browse/HIVE-14179 Project: Hive Issue Type: Bug Components: Transactions Reporter: Deepesh Khandelwal When a large number of delta files get generated during ACID operations, a select query on the ACID table fails with OOM. {noformat} ERROR [main]: SessionState (SessionState.java:printError(942)) - Vertex failed, vertexName=Map 1, vertexId=vertex_1465431842106_0014_1_00, diagnostics=[Task failed, taskId=task_1465431842106_0014_1_00_00, diagnostics=[TaskAttempt 0 failed, info=[Error: Failure while running task:java.lang.RuntimeException: java.lang.OutOfMemoryError: Direct buffer memory at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.initializeAndRunProcessor(TezProcessor.java:159) at org.apache.hadoop.hive.ql.exec.tez.TezProcessor.run(TezProcessor.java:139) at org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:347) at org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable$1.run(TezTaskRunner.java:181) at org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable$1.run(TezTaskRunner.java:172) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724) at org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable.callInternal(TezTaskRunner.java:172) at org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable.callInternal(TezTaskRunner.java:168) at org.apache.tez.common.CallableWithNdc.call(CallableWithNdc.java:36) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745) Caused by: 
java.lang.OutOfMemoryError: Direct buffer memory at java.nio.Bits.reserveMemory(Bits.java:693) at java.nio.DirectByteBuffer.(DirectByteBuffer.java:123) at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:311) at org.apache.hadoop.util.DirectBufferPool.getBuffer(DirectBufferPool.java:72) at org.apache.hadoop.hdfs.BlockReaderLocal.createDataBufIfNeeded(BlockReaderLocal.java:260) at org.apache.hadoop.hdfs.BlockReaderLocal.readWithBounceBuffer(BlockReaderLocal.java:601) at org.apache.hadoop.hdfs.BlockReaderLocal.read(BlockReaderLocal.java:569) at org.apache.hadoop.hdfs.DFSInputStream$ByteArrayStrategy.doRead(DFSInputStream.java:789) at org.apache.hadoop.hdfs.DFSInputStream.readBuffer(DFSInputStream.java:845) at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:905) at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:953) at java.io.DataInputStream.readFully(DataInputStream.java:195) at org.apache.hadoop.hive.ql.io.orc.ReaderImpl.extractMetaInfoFromFooter(ReaderImpl.java:377) at org.apache.hadoop.hive.ql.io.orc.ReaderImpl.(ReaderImpl.java:323) at org.apache.hadoop.hive.ql.io.orc.OrcFile.createReader(OrcFile.java:238) at org.apache.hadoop.hive.ql.io.orc.OrcRawRecordMerger.(OrcRawRecordMerger.java:462) at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.getReader(OrcInputFormat.java:1372) at org.apache.hadoop.hive.ql.io.orc.OrcInputFormat.getRecordReader(OrcInputFormat.java:1264) at org.apache.hadoop.hive.ql.io.HiveInputFormat.getRecordReader(HiveInputFormat.java:251) at org.apache.hadoop.mapred.split.TezGroupedSplitsInputFormat$TezGroupedSplitsRecordReader.initNextRecordReader(TezGroupedSplitsInputFormat.java:193) at org.apache.hadoop.mapred.split.TezGroupedSplitsInputFormat$TezGroupedSplitsRecordReader.(TezGroupedSplitsInputFormat.java:135) at org.apache.hadoop.mapred.split.TezGroupedSplitsInputFormat.getRecordReader(TezGroupedSplitsInputFormat.java:101) at 
org.apache.tez.mapreduce.lib.MRReaderMapred.setupOldRecordReader(MRReaderMapred.java:149) at org.apache.tez.mapreduce.lib.MRReaderMapred.setSplit(MRReaderMapred.java:80) at org.apache.tez.mapreduce.input.MRInput.initFromEventInternal(MRInput.java:650) at org.apache.tez.mapreduce.input.MRInput.initFromEvent(MRInput.java:621) at org.apache.tez.mapreduce.input.MRInputLegacy.checkAndAwaitRecordReaderInitialization(MRInputLegacy.java:145) at org.apache.tez.mapreduce.input.MRInputLegacy.init(MRInputLegacy.java:109) at
[jira] [Created] (HIVE-13856) Fetching transaction batches during ACID streaming against Hive Metastore using Oracle DB fails
Deepesh Khandelwal created HIVE-13856: - Summary: Fetching transaction batches during ACID streaming against Hive Metastore using Oracle DB fails Key: HIVE-13856 URL: https://issues.apache.org/jira/browse/HIVE-13856 Project: Hive Issue Type: Bug Components: Transactions Reporter: Deepesh Khandelwal {noformat} 2016-05-25 00:43:49,682 INFO [pool-4-thread-5]: txn.TxnHandler (TxnHandler.java:checkRetryable(1585)) - Non-retryable error: ORA-00933: SQL command not properly ended (SQLState=42000, ErrorCode=933) 2016-05-25 00:43:49,685 ERROR [pool-4-thread-5]: metastore.RetryingHMSHandler (RetryingHMSHandler.java:invoke(159)) - MetaException(message:Unable to select from transaction database java.sql.SQLSyntaxErrorException: ORA-00933: SQL command not properly ended at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:440) at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:396) at oracle.jdbc.driver.T4C8Oall.processError(T4C8Oall.java:837) at oracle.jdbc.driver.T4CTTIfun.receive(T4CTTIfun.java:445) at oracle.jdbc.driver.T4CTTIfun.doRPC(T4CTTIfun.java:191) at oracle.jdbc.driver.T4C8Oall.doOALL(T4C8Oall.java:523) at oracle.jdbc.driver.T4CStatement.doOall8(T4CStatement.java:193) at oracle.jdbc.driver.T4CStatement.executeForRows(T4CStatement.java:999) at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1315) at oracle.jdbc.driver.OracleStatement.executeInternal(OracleStatement.java:1890) at oracle.jdbc.driver.OracleStatement.execute(OracleStatement.java:1855) at oracle.jdbc.driver.OracleStatementWrapper.execute(OracleStatementWrapper.java:304) at com.jolbox.bonecp.StatementHandle.execute(StatementHandle.java:254) at org.apache.hadoop.hive.metastore.txn.TxnHandler.openTxns(TxnHandler.java:429) at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.open_txns(HiveMetaStore.java:5647) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:107) at com.sun.proxy.$Proxy15.open_txns(Unknown Source) at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$open_txns.getResult(ThriftHiveMetastore.java:11604) at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$open_txns.getResult(ThriftHiveMetastore.java:11589) at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39) at org.apache.hadoop.hive.metastore.TUGIBasedProcessor$1.run(TUGIBasedProcessor.java:110) at org.apache.hadoop.hive.metastore.TUGIBasedProcessor$1.run(TUGIBasedProcessor.java:106) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:415) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724) at org.apache.hadoop.hive.metastore.TUGIBasedProcessor.process(TUGIBasedProcessor.java:118) at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:285) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745) ) at org.apache.hadoop.hive.metastore.txn.TxnHandler.openTxns(TxnHandler.java:438) at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.open_txns(HiveMetaStore.java:5647) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:107) at com.sun.proxy.$Proxy15.open_txns(Unknown Source) at 
org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$open_txns.getResult(ThriftHiveMetastore.java:11604) at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$open_txns.getResult(ThriftHiveMetastore.java:11589) at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39) at org.apache.hadoop.hive.metastore.TUGIBasedProcessor$1.run(TUGIBasedProcessor.java:110) at
[jira] [Created] (HIVE-12419) hive.log.trace.id needs to be whitelisted
Deepesh Khandelwal created HIVE-12419: - Summary: hive.log.trace.id needs to be whitelisted Key: HIVE-12419 URL: https://issues.apache.org/jira/browse/HIVE-12419 Project: Hive Issue Type: Bug Components: Tez Reporter: Deepesh Khandelwal Assignee: Deepesh Khandelwal Fix For: 2.0.0 HIVE-12249 introduces hive.log.trace.id as part of improving logging for hive queries. The property needs to be added to SQL Std Auth whitelisted properties list to be usable with HiveServer2. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HIVE-11902) Abort txn cleanup thread throws SyntaxErrorException
Deepesh Khandelwal created HIVE-11902: - Summary: Abort txn cleanup thread throws SyntaxErrorException Key: HIVE-11902 URL: https://issues.apache.org/jira/browse/HIVE-11902 Project: Hive Issue Type: Bug Components: Transactions Reporter: Deepesh Khandelwal Assignee: Deepesh Khandelwal When cleaning left over transactions we see the DeadTxnReaper code threw the following exception: {noformat} 2015-09-21 05:23:38,148 WARN [DeadTxnReaper-0]: txn.TxnHandler (TxnHandler.java:performTimeOuts(1876)) - Aborting timedout transactions failed due to You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near ')' at line 1(SQLState=42000,ErrorCode=1064) com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near ')' at line 1 at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:526) at com.mysql.jdbc.Util.handleNewInstance(Util.java:377) at com.mysql.jdbc.Util.getInstance(Util.java:360) at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:978) at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3887) at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3823) at com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:2435) at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:2582) at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2526) at com.mysql.jdbc.StatementImpl.executeUpdate(StatementImpl.java:1618) at com.mysql.jdbc.StatementImpl.executeUpdate(StatementImpl.java:1549) at com.jolbox.bonecp.StatementHandle.executeUpdate(StatementHandle.java:497) at 
org.apache.hadoop.hive.metastore.txn.TxnHandler.abortTxns(TxnHandler.java:1275) at org.apache.hadoop.hive.metastore.txn.TxnHandler.performTimeOuts(TxnHandler.java:1866) at org.apache.hadoop.hive.ql.txn.AcidHouseKeeperService$TimedoutTxnReaper.run(AcidHouseKeeperService.java:87) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745) {noformat} The problem here is that the method {{abortTxns(Connection dbConn, List<Long> txnids)}} in metastore/src/java/org/apache/hadoop/hive/metastore/txn/TxnHandler.java creates the following bad query when the txnids list is empty. {code} delete from HIVE_LOCKS where hl_txnid in (); {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
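The fix amounts to never emitting an empty IN () list, which MySQL rejects as a syntax error. A hedged sketch of that guard; the helper class and method are hypothetical illustrations, not the actual TxnHandler code:

```java
import java.util.List;
import java.util.stream.Collectors;

public class InClauseGuard {
    // Build the delete statement only when there are transaction ids;
    // "in ()" is invalid SQL on MySQL, and an empty list means there is
    // nothing to delete anyway, so return null and let the caller skip.
    static String buildDelete(List<Long> txnids) {
        if (txnids.isEmpty()) {
            return null;
        }
        String ids = txnids.stream().map(String::valueOf)
                           .collect(Collectors.joining(","));
        return "delete from HIVE_LOCKS where hl_txnid in (" + ids + ")";
    }

    public static void main(String[] args) {
        System.out.println(buildDelete(List.of(7L, 9L)));
        System.out.println(buildDelete(List.of()));
    }
}
```

With the guard in place the DeadTxnReaper would simply skip the statement when it has no timed-out transactions to abort.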
[jira] [Created] (HIVE-11628) DB type detection code is failing on Oracle 12
Deepesh Khandelwal created HIVE-11628: - Summary: DB type detection code is failing on Oracle 12 Key: HIVE-11628 URL: https://issues.apache.org/jira/browse/HIVE-11628 Project: Hive Issue Type: Bug Components: Metastore Environment: Oracle 12 Reporter: Deepesh Khandelwal Assignee: Deepesh Khandelwal DB type detection code is failing when using Oracle 12 as the backing store. When determining qualification for direct SQL, the following message is seen in the logs: {noformat} 2015-08-14 01:15:16,020 INFO [pool-6-thread-109]: metastore.MetaStoreDirectSql (MetaStoreDirectSql.java:init(131)) - Using direct SQL, underlying DB is OTHER {noformat} Currently in org/apache/hadoop/hive/metastore/MetaStoreDirectSql, there is a code snippet: {code}
private DB determineDbType() {
  DB dbType = DB.OTHER;
  if (runDbCheck("SET @@session.sql_mode=ANSI_QUOTES", "MySql")) {
    dbType = DB.MYSQL;
  } else if (runDbCheck("SELECT version from v$instance", "Oracle")) {
    dbType = DB.ORACLE;
  } else if (runDbCheck("SELECT @@version", "MSSQL")) {
    dbType = DB.MSSQL;
  } else {
    // TODO: maybe we should use getProductName to identify all the DBs
    String productName = getProductName();
    if (productName != null && productName.toLowerCase().contains("derby")) {
      dbType = DB.DERBY;
    }
  }
  return dbType;
}
{code} The code relies on access to v$instance to identify the backend DB as Oracle, but this fails if the user has not been granted select privileges on the v$ tables. An alternate way, described in the [Oracle Database Reference|http://docs.oracle.com/cd/B19306_01/server.102/b14237/statviews_4224.htm], works. I will attach a potential patch that should work. Without the patch, the workaround is to grant select privileges on the v$ tables. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
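As the TODO in the snippet hints, the standard JDBC metadata call {{Connection.getMetaData().getDatabaseProductName()}} can identify the backend without probing privileged vendor tables such as {{v$instance}}. A minimal sketch of that approach (this is not the attached patch; the match strings are illustrative):

```java
public class DbTypeDetector {
    // Map the JDBC product name, as returned by
    // conn.getMetaData().getDatabaseProductName(), to a metastore DB type.
    // Unlike probing v$instance, this needs no extra SELECT privileges.
    static String detect(String productName) {
        if (productName == null) {
            return "OTHER";
        }
        String p = productName.toLowerCase();
        if (p.contains("mysql")) return "MYSQL";
        if (p.contains("oracle")) return "ORACLE";
        if (p.contains("microsoft sql server")) return "MSSQL";
        if (p.contains("derby")) return "DERBY";
        return "OTHER";
    }

    public static void main(String[] args) {
        System.out.println(detect("Oracle"));
        System.out.println(detect("Apache Derby"));
    }
}
```

The trade-off is that product-name strings vary slightly between driver versions, which is presumably why the original code probed vendor-specific SQL first.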
[jira] [Created] (HIVE-10724) WebHCat e2e test TestStreaming_5 fails on Windows
Deepesh Khandelwal created HIVE-10724: - Summary: WebHCat e2e test TestStreaming_5 fails on Windows Key: HIVE-10724 URL: https://issues.apache.org/jira/browse/HIVE-10724 Project: Hive Issue Type: Bug Components: Tests Affects Versions: 1.2.0 Reporter: Deepesh Khandelwal Assignee: Deepesh Khandelwal The test TestStreaming_5 fails with the following error on Windows: {noformat} Passed in parameter is incorrectly quoted: \\StreamXmlRecordReader,begin=xml,end=/xml\\ {noformat} The problem is the extra double quotes in the post_options of the test: {{'inputreader="StreamXmlRecordReader,begin=xml,end=/xml"'}}. Removing the double quotes, leaving {{'inputreader=StreamXmlRecordReader,begin=xml,end=/xml'}}, makes the test pass on both Linux and Windows. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HIVE-10630) Renaming tables across encryption zones renames table even though the operation throws error
Deepesh Khandelwal created HIVE-10630: - Summary: Renaming tables across encryption zones renames table even though the operation throws error Key: HIVE-10630 URL: https://issues.apache.org/jira/browse/HIVE-10630 Project: Hive Issue Type: Sub-task Components: Security Reporter: Deepesh Khandelwal Create a table with data in encrypted zone 1 and then rename it to encrypted zone 2: {noformat} hive> alter table encdb1.testtbl rename to encdb2.testtbl; FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. Unable to alter table. Unable to access old location hdfs://node-1.example.com:8020/apps/hive/warehouse/encdb1.db/testtbl for table encdb1.testtbl {noformat} Even though the command errors out, the table is renamed. The right behavior would be to not rename the table at all, including its metadata. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HIVE-10629) Dropping table in an encrypted zone does not drop warehouse directory
Deepesh Khandelwal created HIVE-10629: - Summary: Dropping table in an encrypted zone does not drop warehouse directory Key: HIVE-10629 URL: https://issues.apache.org/jira/browse/HIVE-10629 Project: Hive Issue Type: Sub-task Components: Security Reporter: Deepesh Khandelwal Dropping a table in an encrypted zone removes the table but not its data. The client sees the following on the Hive CLI: {noformat} hive> drop table testtbl; OK Time taken: 0.158 seconds {noformat} In the Hive Metastore log the following error is thrown: {noformat} 2015-05-05 08:55:27,665 ERROR [pool-6-thread-142]: hive.log (MetaStoreUtils.java:logAndThrowMetaException(1200)) - Got exception: java.io.IOException Failed to move to trash: hdfs://node-1.example.com:8020/apps/hive/warehouse/encdb1.db/testtbl java.io.IOException: Failed to move to trash: hdfs://node-1.example.com:8020/apps/hive/warehouse/encdb1.db/testtbl at org.apache.hadoop.fs.TrashPolicyDefault.moveToTrash(TrashPolicyDefault.java:160) at org.apache.hadoop.fs.Trash.moveToTrash(Trash.java:114) at org.apache.hadoop.fs.Trash.moveToAppropriateTrash(Trash.java:95) at org.apache.hadoop.hive.shims.Hadoop23Shims.moveToAppropriateTrash(Hadoop23Shims.java:270) at org.apache.hadoop.hive.metastore.HiveMetaStoreFsImpl.deleteDir(HiveMetaStoreFsImpl.java:47) at org.apache.hadoop.hive.metastore.Warehouse.deleteDir(Warehouse.java:229) at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.deleteTableData(HiveMetaStore.java:1584) at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.drop_table_core(HiveMetaStore.java:1552) at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.drop_table_with_environment_context(HiveMetaStore.java:1705) at sun.reflect.GeneratedMethodAccessor57.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:107) at 
com.sun.proxy.$Proxy13.drop_table_with_environment_context(Unknown Source) at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$drop_table_with_environment_context.getResult(ThriftHiveMetastore.java:9256) {noformat} The error should be propagated to the client, and the drop table call should arguably fail. To delete the table data one currently has to use {{drop table testtbl purge}}, which removes the data permanently, skipping the trash. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HIVE-10074) Ability to run HCat Client Unit tests in a system test setting
Deepesh Khandelwal created HIVE-10074: - Summary: Ability to run HCat Client Unit tests in a system test setting Key: HIVE-10074 URL: https://issues.apache.org/jira/browse/HIVE-10074 Project: Hive Issue Type: Bug Components: Tests Reporter: Deepesh Khandelwal Assignee: Deepesh Khandelwal The testsuite {{hcatalog/webhcat/java-client/src/test/java/org/apache/hive/hcatalog/api/TestHCatClient.java}} is a JUnit suite that exercises the basic HCat client API. During setup it brings up a Hive Metastore with embedded Derby. The suite would be even more useful if it could also be run against an already running Hive Metastore (transparent to whatever backing DB it's running against). -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HIVE-9963) HiveServer2 deregister command doesn't provide any feedback
Deepesh Khandelwal created HIVE-9963: Summary: HiveServer2 deregister command doesn't provide any feedback Key: HIVE-9963 URL: https://issues.apache.org/jira/browse/HIVE-9963 Project: Hive Issue Type: Bug Components: HiveServer2 Affects Versions: 0.14.0 Reporter: Deepesh Khandelwal The HiveServer2 deregister functionality provided by HIVE-8288 doesn't give any feedback upon completion. Here is a sample console output: {noformat} $ hive --service hiveserver2 --deregister 0.14.0-SNAPSHOT SLF4J: Class path contains multiple SLF4J bindings. SLF4J: Found binding in [jar:file:/root/hadoop/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class] SLF4J: Found binding in [jar:file:/root/hive/lib/hive-jdbc-0.14.0-SNAPSHOT-standalone.jar!/org/slf4j/impl/StaticLoggerBinder.class] SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation. SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory] {noformat} The output is the same even if the znode did not exist. Ideally we should print some feedback after the command completes, like {{HiveServer2 with version '0.14.0-SNAPSHOT' deregistered successfully}}, or, in case of failure, an appropriate reason such as {{No HiveServer2 with version '0.14.0-SNAPSHOT' exists to deregister}}. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
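A minimal sketch of producing the suggested feedback messages (the class and method names are illustrative, not from the actual HiveServer2 code):

```java
public class DeregisterFeedback {
    // Produce the user-facing message proposed in this report, depending on
    // whether a znode for the given HiveServer2 version actually existed.
    static String feedback(String version, boolean znodeExisted) {
        return znodeExisted
            ? "HiveServer2 with version '" + version + "' deregistered successfully"
            : "No HiveServer2 with version '" + version + "' exists to deregister";
    }

    public static void main(String[] args) {
        System.out.println(feedback("0.14.0-SNAPSHOT", true));
        System.out.println(feedback("0.14.0-SNAPSHOT", false));
    }
}
```

Printing one of these two lines after the ZooKeeper call returns would make the success and no-such-znode cases distinguishable from the console.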
[jira] [Updated] (HIVE-9272) Tests for utf-8 support
[ https://issues.apache.org/jira/browse/HIVE-9272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Deepesh Khandelwal updated HIVE-9272: - Description: Including some test cases for utf8 support in webhcat. The first four tests invoke hive, pig, mapred and streaming apis for testing the utf8 support for data processed, file names and job name. The last test case tests the filtering of job name with utf8 character NO PRECOMMIT TESTS was:Including some test cases for utf8 support in webhcat. The first four tests invoke hive, pig, mapred and streaming apis for testing the utf8 support for data processed, file names and job name. The last test case tests the filtering of job name with utf8 character We should ignore the above failures. Add the flag to skip pre-commit tests as these are changes in the E2E test suite that doesn't get run as part of pre-commit tests. Tests for utf-8 support --- Key: HIVE-9272 URL: https://issues.apache.org/jira/browse/HIVE-9272 Project: Hive Issue Type: Test Components: Tests, WebHCat Reporter: Aswathy Chellammal Sreekumar Assignee: Aswathy Chellammal Sreekumar Priority: Minor Attachments: HIVE-9272.1.patch, HIVE-9272.2.patch, HIVE-9272.patch Including some test cases for utf8 support in webhcat. The first four tests invoke hive, pig, mapred and streaming apis for testing the utf8 support for data processed, file names and job name. The last test case tests the filtering of job name with utf8 character NO PRECOMMIT TESTS -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-9316) TestSqoop tests in WebHCat testsuite hardcode libdir path to hdfs
[ https://issues.apache.org/jira/browse/HIVE-9316?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14271972#comment-14271972 ] Deepesh Khandelwal commented on HIVE-9316: -- Thanks [~ekoifman] for the review and commit! TestSqoop tests in WebHCat testsuite hardcode libdir path to hdfs - Key: HIVE-9316 URL: https://issues.apache.org/jira/browse/HIVE-9316 Project: Hive Issue Type: Bug Components: Tests, WebHCat Affects Versions: 0.14.0 Reporter: Deepesh Khandelwal Assignee: Deepesh Khandelwal Priority: Minor Fix For: 0.15.0 Attachments: HIVE-9316.1.patch Currently the TestSqoop tests in WebHCat Perl based testsuite has hdfs:// prefix in the jdbc jar path in libdir, we should remove this to enable it to run against other file systems. NO PRECOMMIT TESTS -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HIVE-9316) TestSqoop tests in WebHCat testsuite hardcode libdir path to hdfs
Deepesh Khandelwal created HIVE-9316: Summary: TestSqoop tests in WebHCat testsuite hardcode libdir path to hdfs Key: HIVE-9316 URL: https://issues.apache.org/jira/browse/HIVE-9316 Project: Hive Issue Type: Bug Components: Tests, WebHCat Reporter: Deepesh Khandelwal Assignee: Deepesh Khandelwal Priority: Minor Fix For: 0.15.0 Currently the TestSqoop tests in the WebHCat Perl-based testsuite have an {{hdfs://}} prefix in the jdbc jar path in libdir; we should remove it so the tests can run against other file systems. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-9316) TestSqoop tests in WebHCat testsuite hardcode libdir path to hdfs
[ https://issues.apache.org/jira/browse/HIVE-9316?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Deepesh Khandelwal updated HIVE-9316: - Attachment: HIVE-9316.1.patch Attaching the patch that removes the prefix. Please review. TestSqoop tests in WebHCat testsuite hardcode libdir path to hdfs - Key: HIVE-9316 URL: https://issues.apache.org/jira/browse/HIVE-9316 Project: Hive Issue Type: Bug Components: Tests, WebHCat Reporter: Deepesh Khandelwal Assignee: Deepesh Khandelwal Priority: Minor Fix For: 0.15.0 Attachments: HIVE-9316.1.patch Currently the TestSqoop tests in WebHCat Perl based testsuite has hdfs:// prefix in the jdbc jar path in libdir, we should remove this to enable it to run against other file systems. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-9316) TestSqoop tests in WebHCat testsuite hardcode libdir path to hdfs
[ https://issues.apache.org/jira/browse/HIVE-9316?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Deepesh Khandelwal updated HIVE-9316: - Description: Currently the TestSqoop tests in WebHCat Perl based testsuite has hdfs:// prefix in the jdbc jar path in libdir, we should remove this to enable it to run against other file systems. NO PRECOMMIT TESTS was:Currently the TestSqoop tests in WebHCat Perl based testsuite has hdfs:// prefix in the jdbc jar path in libdir, we should remove this to enable it to run against other file systems. TestSqoop tests in WebHCat testsuite hardcode libdir path to hdfs - Key: HIVE-9316 URL: https://issues.apache.org/jira/browse/HIVE-9316 Project: Hive Issue Type: Bug Components: Tests, WebHCat Reporter: Deepesh Khandelwal Assignee: Deepesh Khandelwal Priority: Minor Fix For: 0.15.0 Attachments: HIVE-9316.1.patch Currently the TestSqoop tests in WebHCat Perl based testsuite has hdfs:// prefix in the jdbc jar path in libdir, we should remove this to enable it to run against other file systems. NO PRECOMMIT TESTS -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-9316) TestSqoop tests in WebHCat testsuite hardcode libdir path to hdfs
[ https://issues.apache.org/jira/browse/HIVE-9316?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Deepesh Khandelwal updated HIVE-9316: - Status: Patch Available (was: Open) TestSqoop tests in WebHCat testsuite hardcode libdir path to hdfs - Key: HIVE-9316 URL: https://issues.apache.org/jira/browse/HIVE-9316 Project: Hive Issue Type: Bug Components: Tests, WebHCat Reporter: Deepesh Khandelwal Assignee: Deepesh Khandelwal Priority: Minor Fix For: 0.15.0 Attachments: HIVE-9316.1.patch Currently the TestSqoop tests in WebHCat Perl based testsuite has hdfs:// prefix in the jdbc jar path in libdir, we should remove this to enable it to run against other file systems. NO PRECOMMIT TESTS -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-8901) increase retry attempt, interval on metastore database errors
[ https://issues.apache.org/jira/browse/HIVE-8901?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14215508#comment-14215508 ] Deepesh Khandelwal commented on HIVE-8901: -- +1 increase retry attempt, interval on metastore database errors - Key: HIVE-8901 URL: https://issues.apache.org/jira/browse/HIVE-8901 Project: Hive Issue Type: Bug Reporter: Thejas M Nair Assignee: Thejas M Nair Fix For: 0.15.0 Attachments: HIVE-8901.1.patch The current defaults of hive.hmshandler.retry.attempts=1 and hive.hmshandler.retry.interval=1s are very small. In some cases one retry is not sufficient for successful command execution. I have seen cases, specially after metastore being idle for sometime, when it takes 2 attempts for the db action to succeed. In this case, it as a Communications link failure/connection lost error. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-8748) jdbc uber jar is missing commons-logging
[ https://issues.apache.org/jira/browse/HIVE-8748?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14199372#comment-14199372 ] Deepesh Khandelwal commented on HIVE-8748: -- +1 LGTM jdbc uber jar is missing commons-logging Key: HIVE-8748 URL: https://issues.apache.org/jira/browse/HIVE-8748 Project: Hive Issue Type: Improvement Components: JDBC Affects Versions: 0.14.0 Reporter: Ashutosh Chauhan Assignee: Ashutosh Chauhan Attachments: HIVE-8748.patch -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-8270) JDBC uber jar is missing some classes required in secure setup.
[ https://issues.apache.org/jira/browse/HIVE-8270?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14150894#comment-14150894 ] Deepesh Khandelwal commented on HIVE-8270: -- +1 New size looks acceptable. Were you able to test the different HiveServer2 combinations? JDBC uber jar is missing some classes required in secure setup. --- Key: HIVE-8270 URL: https://issues.apache.org/jira/browse/HIVE-8270 Project: Hive Issue Type: Bug Components: JDBC Affects Versions: 0.14.0 Reporter: Vaibhav Gumashta Assignee: Vaibhav Gumashta Fix For: 0.14.0 Attachments: HIVE-8270.1.patch JDBC uber jar is missing some required classes for a secure setup. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-8239) MSSQL upgrade schema scripts does not map Java long datatype columns correctly for transaction related tables
[ https://issues.apache.org/jira/browse/HIVE-8239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14147112#comment-14147112 ] Deepesh Khandelwal commented on HIVE-8239: -- Thanks [~alangates] for the review and commit! MSSQL upgrade schema scripts does not map Java long datatype columns correctly for transaction related tables - Key: HIVE-8239 URL: https://issues.apache.org/jira/browse/HIVE-8239 Project: Hive Issue Type: Bug Components: Database/Schema Affects Versions: 0.13.0 Reporter: Deepesh Khandelwal Assignee: Deepesh Khandelwal Fix For: 0.14.0 Attachments: HIVE-8239.1.patch In Transaction related tables, Java long column fields are mapped to int which results in failure as shown: {noformat} 2014-09-23 18:08:00,030 DEBUG txn.TxnHandler (TxnHandler.java:lock(1243)) - Going to execute update insert into HIVE_LOCKS (hl_lock_ext_id, hl_lock_int_id, hl_txnid, hl_db, hl_table, hl_partition, hl_lock_state, hl_lock_type, hl_last_heartbeat, hl_user, hl_host) values (28, 1,0, 'default', null, null, 'w', 'r', 1411495679547, 'hadoopqa', 'onprem-sqoop1') 2014-09-23 18:08:00,033 DEBUG txn.TxnHandler (TxnHandler.java:lock(406)) - Going to rollback 2014-09-23 18:08:00,045 ERROR metastore.RetryingHMSHandler (RetryingHMSHandler.java:invoke(139)) - org.apache.thrift.TException: MetaException(message:Unable to update transaction database com.microsoft.sqlserver.jdbc.SQLServerException: Arithmetic overflow error converting expression to data type int. 
at com.microsoft.sqlserver.jdbc.SQLServerException.makeFromDatabaseError(SQLServerException.java:197) at com.microsoft.sqlserver.jdbc.TDSTokenHandler.onEOF(tdsparser.java:246) at com.microsoft.sqlserver.jdbc.TDSParser.parse(tdsparser.java:83) at com.microsoft.sqlserver.jdbc.SQLServerStatement.getNextResult(SQLServerStatement.java:1488) at com.microsoft.sqlserver.jdbc.SQLServerStatement.doExecuteStatement(SQLServerStatement.java:775) at com.microsoft.sqlserver.jdbc.SQLServerStatement$StmtExecCmd.doExecute(SQLServerStatement.java:676) at com.microsoft.sqlserver.jdbc.TDSCommand.execute(IOBuffer.java:4615) at com.microsoft.sqlserver.jdbc.SQLServerConnection.executeCommand(SQLServerConnection.java:1400) at com.microsoft.sqlserver.jdbc.SQLServerStatement.executeCommand(SQLServerStatement.java:179) at com.microsoft.sqlserver.jdbc.SQLServerStatement.executeStatement(SQLServerStatement.java:154) at com.microsoft.sqlserver.jdbc.SQLServerStatement.executeUpdate(SQLServerStatement.java:633) at com.jolbox.bonecp.StatementHandle.executeUpdate(StatementHandle.java:497) at org.apache.hadoop.hive.metastore.txn.TxnHandler.lock(TxnHandler.java:1244) at org.apache.hadoop.hive.metastore.txn.TxnHandler.lock(TxnHandler.java:403) at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.lock(HiveMetaStore.java:5255) ... {noformat} In this query one of the column HL_LAST_HEARTBEAT defined as int datatype in HIVE_LOCKS is trying to take in a long value (1411495679547) and throws the error. We should use bigint as column type instead. NO PRECOMMIT TESTS -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HIVE-8239) MSSQL upgrade schema scripts does not map Java long datatype columns correctly for transaction related tables
Deepesh Khandelwal created HIVE-8239: Summary: MSSQL upgrade schema scripts does not map Java long datatype columns correctly for transaction related tables Key: HIVE-8239 URL: https://issues.apache.org/jira/browse/HIVE-8239 Project: Hive Issue Type: Bug Components: Database/Schema Affects Versions: 0.13.0 Reporter: Deepesh Khandelwal Assignee: Deepesh Khandelwal In Transaction related tables, Java long column fields are mapped to int which results in failure as shown: {noformat} 2014-09-23 18:08:00,030 DEBUG txn.TxnHandler (TxnHandler.java:lock(1243)) - Going to execute update insert into HIVE_LOCKS (hl_lock_ext_id, hl_lock_int_id, hl_txnid, hl_db, hl_table, hl_partition, hl_lock_state, hl_lock_type, hl_last_heartbeat, hl_user, hl_host) values (28, 1,0, 'default', null, null, 'w', 'r', 1411495679547, 'hadoopqa', 'onprem-sqoop1') 2014-09-23 18:08:00,033 DEBUG txn.TxnHandler (TxnHandler.java:lock(406)) - Going to rollback 2014-09-23 18:08:00,045 ERROR metastore.RetryingHMSHandler (RetryingHMSHandler.java:invoke(139)) - org.apache.thrift.TException: MetaException(message:Unable to update transaction database com.microsoft.sqlserver.jdbc.SQLServerException: Arithmetic overflow error converting expression to data type int. 
at com.microsoft.sqlserver.jdbc.SQLServerException.makeFromDatabaseError(SQLServerException.java:197) at com.microsoft.sqlserver.jdbc.TDSTokenHandler.onEOF(tdsparser.java:246) at com.microsoft.sqlserver.jdbc.TDSParser.parse(tdsparser.java:83) at com.microsoft.sqlserver.jdbc.SQLServerStatement.getNextResult(SQLServerStatement.java:1488) at com.microsoft.sqlserver.jdbc.SQLServerStatement.doExecuteStatement(SQLServerStatement.java:775) at com.microsoft.sqlserver.jdbc.SQLServerStatement$StmtExecCmd.doExecute(SQLServerStatement.java:676) at com.microsoft.sqlserver.jdbc.TDSCommand.execute(IOBuffer.java:4615) at com.microsoft.sqlserver.jdbc.SQLServerConnection.executeCommand(SQLServerConnection.java:1400) at com.microsoft.sqlserver.jdbc.SQLServerStatement.executeCommand(SQLServerStatement.java:179) at com.microsoft.sqlserver.jdbc.SQLServerStatement.executeStatement(SQLServerStatement.java:154) at com.microsoft.sqlserver.jdbc.SQLServerStatement.executeUpdate(SQLServerStatement.java:633) at com.jolbox.bonecp.StatementHandle.executeUpdate(StatementHandle.java:497) at org.apache.hadoop.hive.metastore.txn.TxnHandler.lock(TxnHandler.java:1244) at org.apache.hadoop.hive.metastore.txn.TxnHandler.lock(TxnHandler.java:403) at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.lock(HiveMetaStore.java:5255) ... {noformat} In this query, the column HL_LAST_HEARTBEAT, defined with the int datatype in HIVE_LOCKS, is being assigned a long value (1411495679547), which causes the error. We should use bigint as the column type instead. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
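The overflow is straightforward to verify from the numbers: HL_LAST_HEARTBEAT holds an epoch-millisecond timestamp, which is a Java long, and the failing value already far exceeds the 32-bit int range that an SQL {{int}} column shares:

```java
public class HeartbeatOverflow {
    public static void main(String[] args) {
        // Value from the failing insert above: an epoch-millisecond timestamp.
        long heartbeat = 1411495679547L;
        // A 32-bit int caps at 2,147,483,647, which epoch milliseconds passed
        // roughly 25 days after the epoch, so any real timestamp overflows it.
        System.out.println(heartbeat > Integer.MAX_VALUE);
    }
}
```

This is exactly the "Arithmetic overflow error converting expression to data type int" that SQL Server reports, and why the column needs {{bigint}}.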
[jira] [Updated] (HIVE-8239) MSSQL upgrade schema scripts does not map Java long datatype columns correctly for transaction related tables
[ https://issues.apache.org/jira/browse/HIVE-8239?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Deepesh Khandelwal updated HIVE-8239: - Description: In Transaction related tables, Java long column fields are mapped to int which results in failure as shown: {noformat} 2014-09-23 18:08:00,030 DEBUG txn.TxnHandler (TxnHandler.java:lock(1243)) - Going to execute update insert into HIVE_LOCKS (hl_lock_ext_id, hl_lock_int_id, hl_txnid, hl_db, hl_table, hl_partition, hl_lock_state, hl_lock_type, hl_last_heartbeat, hl_user, hl_host) values (28, 1,0, 'default', null, null, 'w', 'r', 1411495679547, 'hadoopqa', 'onprem-sqoop1') 2014-09-23 18:08:00,033 DEBUG txn.TxnHandler (TxnHandler.java:lock(406)) - Going to rollback 2014-09-23 18:08:00,045 ERROR metastore.RetryingHMSHandler (RetryingHMSHandler.java:invoke(139)) - org.apache.thrift.TException: MetaException(message:Unable to update transaction database com.microsoft.sqlserver.jdbc.SQLServerException: Arithmetic overflow error converting expression to data type int. 
at com.microsoft.sqlserver.jdbc.SQLServerException.makeFromDatabaseError(SQLServerException.java:197) at com.microsoft.sqlserver.jdbc.TDSTokenHandler.onEOF(tdsparser.java:246) at com.microsoft.sqlserver.jdbc.TDSParser.parse(tdsparser.java:83) at com.microsoft.sqlserver.jdbc.SQLServerStatement.getNextResult(SQLServerStatement.java:1488) at com.microsoft.sqlserver.jdbc.SQLServerStatement.doExecuteStatement(SQLServerStatement.java:775) at com.microsoft.sqlserver.jdbc.SQLServerStatement$StmtExecCmd.doExecute(SQLServerStatement.java:676) at com.microsoft.sqlserver.jdbc.TDSCommand.execute(IOBuffer.java:4615) at com.microsoft.sqlserver.jdbc.SQLServerConnection.executeCommand(SQLServerConnection.java:1400) at com.microsoft.sqlserver.jdbc.SQLServerStatement.executeCommand(SQLServerStatement.java:179) at com.microsoft.sqlserver.jdbc.SQLServerStatement.executeStatement(SQLServerStatement.java:154) at com.microsoft.sqlserver.jdbc.SQLServerStatement.executeUpdate(SQLServerStatement.java:633) at com.jolbox.bonecp.StatementHandle.executeUpdate(StatementHandle.java:497) at org.apache.hadoop.hive.metastore.txn.TxnHandler.lock(TxnHandler.java:1244) at org.apache.hadoop.hive.metastore.txn.TxnHandler.lock(TxnHandler.java:403) at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.lock(HiveMetaStore.java:5255) ... {noformat} In this query one of the column HL_LAST_HEARTBEAT defined as int datatype in HIVE_LOCKS is trying to take in a long value (1411495679547) and throws the error. We should use bigint as column type instead. 
NO PRECOMMIT TESTS was: In Transaction related tables, Java long column fields are mapped to int which results in failure as shown: {noformat} 2014-09-23 18:08:00,030 DEBUG txn.TxnHandler (TxnHandler.java:lock(1243)) - Going to execute update insert into HIVE_LOCKS (hl_lock_ext_id, hl_lock_int_id, hl_txnid, hl_db, hl_table, hl_partition, hl_lock_state, hl_lock_type, hl_last_heartbeat, hl_user, hl_host) values (28, 1,0, 'default', null, null, 'w', 'r', 1411495679547, 'hadoopqa', 'onprem-sqoop1') 2014-09-23 18:08:00,033 DEBUG txn.TxnHandler (TxnHandler.java:lock(406)) - Going to rollback 2014-09-23 18:08:00,045 ERROR metastore.RetryingHMSHandler (RetryingHMSHandler.java:invoke(139)) - org.apache.thrift.TException: MetaException(message:Unable to update transaction database com.microsoft.sqlserver.jdbc.SQLServerException: Arithmetic overflow error converting expression to data type int. at com.microsoft.sqlserver.jdbc.SQLServerException.makeFromDatabaseError(SQLServerException.java:197) at com.microsoft.sqlserver.jdbc.TDSTokenHandler.onEOF(tdsparser.java:246) at com.microsoft.sqlserver.jdbc.TDSParser.parse(tdsparser.java:83) at com.microsoft.sqlserver.jdbc.SQLServerStatement.getNextResult(SQLServerStatement.java:1488) at com.microsoft.sqlserver.jdbc.SQLServerStatement.doExecuteStatement(SQLServerStatement.java:775) at com.microsoft.sqlserver.jdbc.SQLServerStatement$StmtExecCmd.doExecute(SQLServerStatement.java:676) at com.microsoft.sqlserver.jdbc.TDSCommand.execute(IOBuffer.java:4615) at com.microsoft.sqlserver.jdbc.SQLServerConnection.executeCommand(SQLServerConnection.java:1400) at com.microsoft.sqlserver.jdbc.SQLServerStatement.executeCommand(SQLServerStatement.java:179) at com.microsoft.sqlserver.jdbc.SQLServerStatement.executeStatement(SQLServerStatement.java:154) at com.microsoft.sqlserver.jdbc.SQLServerStatement.executeUpdate(SQLServerStatement.java:633) at com.jolbox.bonecp.StatementHandle.executeUpdate(StatementHandle.java:497) at 
org.apache.hadoop.hive.metastore.txn.TxnHandler.lock(TxnHandler.java:1244) at
[jira] [Updated] (HIVE-8239) MSSQL upgrade schema scripts does not map Java long datatype columns correctly for transaction related tables
[ https://issues.apache.org/jira/browse/HIVE-8239?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Deepesh Khandelwal updated HIVE-8239: - Attachment: HIVE-8239.1.patch Attaching a patch for review that changes the affected column datatypes from int to bigint. MSSQL upgrade schema scripts does not map Java long datatype columns correctly for transaction related tables - Key: HIVE-8239 URL: https://issues.apache.org/jira/browse/HIVE-8239 Project: Hive Issue Type: Bug Components: Database/Schema Affects Versions: 0.13.0 Reporter: Deepesh Khandelwal Assignee: Deepesh Khandelwal Attachments: HIVE-8239.1.patch In Transaction related tables, Java long column fields are mapped to int which results in failure as shown: {noformat} 2014-09-23 18:08:00,030 DEBUG txn.TxnHandler (TxnHandler.java:lock(1243)) - Going to execute update insert into HIVE_LOCKS (hl_lock_ext_id, hl_lock_int_id, hl_txnid, hl_db, hl_table, hl_partition, hl_lock_state, hl_lock_type, hl_last_heartbeat, hl_user, hl_host) values (28, 1,0, 'default', null, null, 'w', 'r', 1411495679547, 'hadoopqa', 'onprem-sqoop1') 2014-09-23 18:08:00,033 DEBUG txn.TxnHandler (TxnHandler.java:lock(406)) - Going to rollback 2014-09-23 18:08:00,045 ERROR metastore.RetryingHMSHandler (RetryingHMSHandler.java:invoke(139)) - org.apache.thrift.TException: MetaException(message:Unable to update transaction database com.microsoft.sqlserver.jdbc.SQLServerException: Arithmetic overflow error converting expression to data type int. 
at com.microsoft.sqlserver.jdbc.SQLServerException.makeFromDatabaseError(SQLServerException.java:197) at com.microsoft.sqlserver.jdbc.TDSTokenHandler.onEOF(tdsparser.java:246) at com.microsoft.sqlserver.jdbc.TDSParser.parse(tdsparser.java:83) at com.microsoft.sqlserver.jdbc.SQLServerStatement.getNextResult(SQLServerStatement.java:1488) at com.microsoft.sqlserver.jdbc.SQLServerStatement.doExecuteStatement(SQLServerStatement.java:775) at com.microsoft.sqlserver.jdbc.SQLServerStatement$StmtExecCmd.doExecute(SQLServerStatement.java:676) at com.microsoft.sqlserver.jdbc.TDSCommand.execute(IOBuffer.java:4615) at com.microsoft.sqlserver.jdbc.SQLServerConnection.executeCommand(SQLServerConnection.java:1400) at com.microsoft.sqlserver.jdbc.SQLServerStatement.executeCommand(SQLServerStatement.java:179) at com.microsoft.sqlserver.jdbc.SQLServerStatement.executeStatement(SQLServerStatement.java:154) at com.microsoft.sqlserver.jdbc.SQLServerStatement.executeUpdate(SQLServerStatement.java:633) at com.jolbox.bonecp.StatementHandle.executeUpdate(StatementHandle.java:497) at org.apache.hadoop.hive.metastore.txn.TxnHandler.lock(TxnHandler.java:1244) at org.apache.hadoop.hive.metastore.txn.TxnHandler.lock(TxnHandler.java:403) at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.lock(HiveMetaStore.java:5255) ... {noformat} In this query one of the column HL_LAST_HEARTBEAT defined as int datatype in HIVE_LOCKS is trying to take in a long value (1411495679547) and throws the error. We should use bigint as column type instead. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-8239) MSSQL upgrade schema scripts does not map Java long datatype columns correctly for transaction related tables
[ https://issues.apache.org/jira/browse/HIVE-8239?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Deepesh Khandelwal updated HIVE-8239: - Fix Version/s: 0.14.0 Status: Patch Available (was: Open) MSSQL upgrade schema scripts does not map Java long datatype columns correctly for transaction related tables - Key: HIVE-8239 URL: https://issues.apache.org/jira/browse/HIVE-8239 Project: Hive Issue Type: Bug Components: Database/Schema Affects Versions: 0.13.0 Reporter: Deepesh Khandelwal Assignee: Deepesh Khandelwal Fix For: 0.14.0 Attachments: HIVE-8239.1.patch In Transaction related tables, Java long column fields are mapped to int which results in failure as shown: {noformat} 2014-09-23 18:08:00,030 DEBUG txn.TxnHandler (TxnHandler.java:lock(1243)) - Going to execute update insert into HIVE_LOCKS (hl_lock_ext_id, hl_lock_int_id, hl_txnid, hl_db, hl_table, hl_partition, hl_lock_state, hl_lock_type, hl_last_heartbeat, hl_user, hl_host) values (28, 1,0, 'default', null, null, 'w', 'r', 1411495679547, 'hadoopqa', 'onprem-sqoop1') 2014-09-23 18:08:00,033 DEBUG txn.TxnHandler (TxnHandler.java:lock(406)) - Going to rollback 2014-09-23 18:08:00,045 ERROR metastore.RetryingHMSHandler (RetryingHMSHandler.java:invoke(139)) - org.apache.thrift.TException: MetaException(message:Unable to update transaction database com.microsoft.sqlserver.jdbc.SQLServerException: Arithmetic overflow error converting expression to data type int. 
at com.microsoft.sqlserver.jdbc.SQLServerException.makeFromDatabaseError(SQLServerException.java:197) at com.microsoft.sqlserver.jdbc.TDSTokenHandler.onEOF(tdsparser.java:246) at com.microsoft.sqlserver.jdbc.TDSParser.parse(tdsparser.java:83) at com.microsoft.sqlserver.jdbc.SQLServerStatement.getNextResult(SQLServerStatement.java:1488) at com.microsoft.sqlserver.jdbc.SQLServerStatement.doExecuteStatement(SQLServerStatement.java:775) at com.microsoft.sqlserver.jdbc.SQLServerStatement$StmtExecCmd.doExecute(SQLServerStatement.java:676) at com.microsoft.sqlserver.jdbc.TDSCommand.execute(IOBuffer.java:4615) at com.microsoft.sqlserver.jdbc.SQLServerConnection.executeCommand(SQLServerConnection.java:1400) at com.microsoft.sqlserver.jdbc.SQLServerStatement.executeCommand(SQLServerStatement.java:179) at com.microsoft.sqlserver.jdbc.SQLServerStatement.executeStatement(SQLServerStatement.java:154) at com.microsoft.sqlserver.jdbc.SQLServerStatement.executeUpdate(SQLServerStatement.java:633) at com.jolbox.bonecp.StatementHandle.executeUpdate(StatementHandle.java:497) at org.apache.hadoop.hive.metastore.txn.TxnHandler.lock(TxnHandler.java:1244) at org.apache.hadoop.hive.metastore.txn.TxnHandler.lock(TxnHandler.java:403) at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.lock(HiveMetaStore.java:5255) ... {noformat} In this query one of the column HL_LAST_HEARTBEAT defined as int datatype in HIVE_LOCKS is trying to take in a long value (1411495679547) and throws the error. We should use bigint as column type instead. NO PRECOMMIT TESTS -- This message was sent by Atlassian JIRA (v6.3.4#6332)
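The overflow described above is easy to verify: the heartbeat value in the failing insert is a Java long holding epoch milliseconds, which cannot fit in SQL Server's 32-bit signed `int` but fits easily in `bigint`. A quick sanity check (a sketch for illustration, not part of the patch):

```python
# The heartbeat value from the stack trace above is epoch milliseconds,
# stored as a Java long on the Hive side.
INT32_MAX = 2**31 - 1          # upper bound of SQL Server's `int`
INT64_MAX = 2**63 - 1          # upper bound of SQL Server's `bigint`

heartbeat_ms = 1411495679547   # value from the failing insert

# Overflows int -> "Arithmetic overflow error converting expression to data type int."
assert heartbeat_ms > INT32_MAX
# Fits comfortably in bigint, which is why the fix maps long columns there.
assert heartbeat_ms <= INT64_MAX
```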
[jira] [Commented] (HIVE-8239) MSSQL upgrade schema scripts does not map Java long datatype columns correctly for transaction related tables
[ https://issues.apache.org/jira/browse/HIVE-8239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14145649#comment-14145649 ] Deepesh Khandelwal commented on HIVE-8239: -- I left that out as the composite hive-schema-0.14.0.mssql.sql includes those tables. MSSQL upgrade schema scripts does not map Java long datatype columns correctly for transaction related tables - Key: HIVE-8239 URL: https://issues.apache.org/jira/browse/HIVE-8239 Project: Hive Issue Type: Bug Components: Database/Schema Affects Versions: 0.13.0 Reporter: Deepesh Khandelwal Assignee: Deepesh Khandelwal Fix For: 0.14.0 Attachments: HIVE-8239.1.patch In Transaction related tables, Java long column fields are mapped to int which results in failure as shown: {noformat} 2014-09-23 18:08:00,030 DEBUG txn.TxnHandler (TxnHandler.java:lock(1243)) - Going to execute update insert into HIVE_LOCKS (hl_lock_ext_id, hl_lock_int_id, hl_txnid, hl_db, hl_table, hl_partition, hl_lock_state, hl_lock_type, hl_last_heartbeat, hl_user, hl_host) values (28, 1,0, 'default', null, null, 'w', 'r', 1411495679547, 'hadoopqa', 'onprem-sqoop1') 2014-09-23 18:08:00,033 DEBUG txn.TxnHandler (TxnHandler.java:lock(406)) - Going to rollback 2014-09-23 18:08:00,045 ERROR metastore.RetryingHMSHandler (RetryingHMSHandler.java:invoke(139)) - org.apache.thrift.TException: MetaException(message:Unable to update transaction database com.microsoft.sqlserver.jdbc.SQLServerException: Arithmetic overflow error converting expression to data type int. 
at com.microsoft.sqlserver.jdbc.SQLServerException.makeFromDatabaseError(SQLServerException.java:197) at com.microsoft.sqlserver.jdbc.TDSTokenHandler.onEOF(tdsparser.java:246) at com.microsoft.sqlserver.jdbc.TDSParser.parse(tdsparser.java:83) at com.microsoft.sqlserver.jdbc.SQLServerStatement.getNextResult(SQLServerStatement.java:1488) at com.microsoft.sqlserver.jdbc.SQLServerStatement.doExecuteStatement(SQLServerStatement.java:775) at com.microsoft.sqlserver.jdbc.SQLServerStatement$StmtExecCmd.doExecute(SQLServerStatement.java:676) at com.microsoft.sqlserver.jdbc.TDSCommand.execute(IOBuffer.java:4615) at com.microsoft.sqlserver.jdbc.SQLServerConnection.executeCommand(SQLServerConnection.java:1400) at com.microsoft.sqlserver.jdbc.SQLServerStatement.executeCommand(SQLServerStatement.java:179) at com.microsoft.sqlserver.jdbc.SQLServerStatement.executeStatement(SQLServerStatement.java:154) at com.microsoft.sqlserver.jdbc.SQLServerStatement.executeUpdate(SQLServerStatement.java:633) at com.jolbox.bonecp.StatementHandle.executeUpdate(StatementHandle.java:497) at org.apache.hadoop.hive.metastore.txn.TxnHandler.lock(TxnHandler.java:1244) at org.apache.hadoop.hive.metastore.txn.TxnHandler.lock(TxnHandler.java:403) at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.lock(HiveMetaStore.java:5255) ... {noformat} In this query one of the column HL_LAST_HEARTBEAT defined as int datatype in HIVE_LOCKS is trying to take in a long value (1411495679547) and throws the error. We should use bigint as column type instead. NO PRECOMMIT TESTS -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-8185) hive-jdbc-0.14.0-SNAPSHOT-standalone.jar fails verification for signatures in build
[ https://issues.apache.org/jira/browse/HIVE-8185?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14142158#comment-14142158 ] Deepesh Khandelwal commented on HIVE-8185: -- Thanks Gopal and Ashutosh! hive-jdbc-0.14.0-SNAPSHOT-standalone.jar fails verification for signatures in build --- Key: HIVE-8185 URL: https://issues.apache.org/jira/browse/HIVE-8185 Project: Hive Issue Type: Bug Components: JDBC Affects Versions: 0.14.0 Reporter: Gopal V Assignee: Deepesh Khandelwal Priority: Critical Fix For: 0.14.0 Attachments: HIVE-8185.1.patch, HIVE-8185.2.patch In the current build, running {code} jarsigner --verify ./lib/hive-jdbc-0.14.0-SNAPSHOT-standalone.jar Jar verification failed. {code} unless that jar is removed from the lib dir, all hive queries throw the following error {code} Exception in thread main java.lang.SecurityException: Invalid signature file digest for Manifest main attributes at sun.security.util.SignatureFileVerifier.processImpl(SignatureFileVerifier.java:240) at sun.security.util.SignatureFileVerifier.process(SignatureFileVerifier.java:193) at java.util.jar.JarVerifier.processEntry(JarVerifier.java:305) at java.util.jar.JarVerifier.update(JarVerifier.java:216) at java.util.jar.JarFile.initializeVerifier(JarFile.java:345) at java.util.jar.JarFile.getInputStream(JarFile.java:412) at sun.misc.URLClassPath$JarLoader$2.getInputStream(URLClassPath.java:775) {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-8185) hive-jdbc-0.14.0-SNAPSHOT-standalone.jar fails verification for signatures in build
[ https://issues.apache.org/jira/browse/HIVE-8185?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Deepesh Khandelwal updated HIVE-8185: - Assignee: Deepesh Khandelwal hive-jdbc-0.14.0-SNAPSHOT-standalone.jar fails verification for signatures in build --- Key: HIVE-8185 URL: https://issues.apache.org/jira/browse/HIVE-8185 Project: Hive Issue Type: Bug Components: JDBC Affects Versions: 0.14.0 Reporter: Gopal V Assignee: Deepesh Khandelwal Priority: Critical Attachments: HIVE-8185.1.patch, HIVE-8185.2.patch In the current build, running {code} jarsigner --verify ./lib/hive-jdbc-0.14.0-SNAPSHOT-standalone.jar Jar verification failed. {code} unless that jar is removed from the lib dir, all hive queries throw the following error {code} Exception in thread main java.lang.SecurityException: Invalid signature file digest for Manifest main attributes at sun.security.util.SignatureFileVerifier.processImpl(SignatureFileVerifier.java:240) at sun.security.util.SignatureFileVerifier.process(SignatureFileVerifier.java:193) at java.util.jar.JarVerifier.processEntry(JarVerifier.java:305) at java.util.jar.JarVerifier.update(JarVerifier.java:216) at java.util.jar.JarFile.initializeVerifier(JarFile.java:345) at java.util.jar.JarFile.getInputStream(JarFile.java:412) at sun.misc.URLClassPath$JarLoader$2.getInputStream(URLClassPath.java:775) {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HIVE-8200) Make beeline use the hive-jdbc standalone jar
Deepesh Khandelwal created HIVE-8200: Summary: Make beeline use the hive-jdbc standalone jar Key: HIVE-8200 URL: https://issues.apache.org/jira/browse/HIVE-8200 Project: Hive Issue Type: Bug Components: CLI, HiveServer2 Reporter: Deepesh Khandelwal Assignee: Deepesh Khandelwal Hiveserver2 JDBC client beeline currently generously includes all the jars under $HIVE_HOME/lib in its invocation. With the fix from HIVE-8129 it should only need a few. This will be a good validation of the hive-jdbc standalone jar. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-8200) Make beeline use the hive-jdbc standalone jar
[ https://issues.apache.org/jira/browse/HIVE-8200?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Deepesh Khandelwal updated HIVE-8200: - Attachment: HIVE-8200.1.patch Attaching a patch for review. With this patch the CLASSPATH now has only the following jars: {noformat} $HIVE_HOME/lib/hive-beeline-*.jar $HIVE_HOME/lib/super-csv-*.jar $HIVE_HOME/lib/jline-*.jar $HIVE_HOME/lib/hive-jdbc-*-standalone.jar {noformat} Make beeline use the hive-jdbc standalone jar - Key: HIVE-8200 URL: https://issues.apache.org/jira/browse/HIVE-8200 Project: Hive Issue Type: Bug Components: CLI, HiveServer2 Reporter: Deepesh Khandelwal Assignee: Deepesh Khandelwal Attachments: HIVE-8200.1.patch Hiveserver2 JDBC client beeline currently generously includes all the jars under $HIVE_HOME/lib in its invocation. With the fix from HIVE-8129 it should only need a few. This will be a good validation of the hive-jdbc standalone jar. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
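The jar list above amounts to a small filter over $HIVE_HOME/lib. This Python sketch is illustrative only — the real change lives in the beeline shell wrapper, and `beeline_classpath` is a made-up name:

```python
import fnmatch

# Jar-name patterns kept on beeline's CLASSPATH after the patch
# (taken from the {noformat} list above).
KEEP_PATTERNS = [
    "hive-beeline-*.jar",
    "super-csv-*.jar",
    "jline-*.jar",
    "hive-jdbc-*-standalone.jar",
]

def beeline_classpath(lib_jars):
    """Filter a $HIVE_HOME/lib listing down to the jars beeline needs."""
    return [jar for jar in lib_jars
            if any(fnmatch.fnmatch(jar, pat) for pat in KEEP_PATTERNS)]
```

Note that the plain hive-jdbc-*.jar (without the -standalone suffix) is deliberately not matched; the point of the exercise is to run beeline against the standalone jar alone.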
[jira] [Updated] (HIVE-8185) hive-jdbc-0.14.0-SNAPSHOT-standalone.jar fails verification for signatures in build
[ https://issues.apache.org/jira/browse/HIVE-8185?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Deepesh Khandelwal updated HIVE-8185: - Attachment: HIVE-8185.1.patch Potential patch, can someone review? The problem dates back to HIVE-538, which introduced the change that produces the artifact; HIVE-8126 merely included it in the lib directory of the distribution. The issue is that the uber jar was built from classes taken from some signed jars, and the two signature files (META-INF/DUMMY.SF, META-INF/DUMMY.DSA) were picked up from one of them. We need to exclude them explicitly, and that is what the patch does. hive-jdbc-0.14.0-SNAPSHOT-standalone.jar fails verification for signatures in build --- Key: HIVE-8185 URL: https://issues.apache.org/jira/browse/HIVE-8185 Project: Hive Issue Type: Bug Components: JDBC Affects Versions: 0.14.0 Reporter: Gopal V Priority: Critical Attachments: HIVE-8185.1.patch In the current build, running {code} jarsigner --verify ./lib/hive-jdbc-0.14.0-SNAPSHOT-standalone.jar Jar verification failed. {code} unless that jar is removed from the lib dir, all hive queries throw the following error {code} Exception in thread main java.lang.SecurityException: Invalid signature file digest for Manifest main attributes at sun.security.util.SignatureFileVerifier.processImpl(SignatureFileVerifier.java:240) at sun.security.util.SignatureFileVerifier.process(SignatureFileVerifier.java:193) at java.util.jar.JarVerifier.processEntry(JarVerifier.java:305) at java.util.jar.JarVerifier.update(JarVerifier.java:216) at java.util.jar.JarFile.initializeVerifier(JarFile.java:345) at java.util.jar.JarFile.getInputStream(JarFile.java:412) at sun.misc.URLClassPath$JarLoader$2.getInputStream(URLClassPath.java:775) {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
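The failure mode is mechanical enough to check for: any *.SF/*.DSA/*.RSA entry left under META-INF/ of an otherwise unsigned uber jar makes jarsigner --verify (and the JVM's JarVerifier) reject the whole jar. A hedged sketch of such a check — the helper name is made up, and it relies only on jars being ordinary zip files:

```python
import zipfile

# Signature-related suffixes that must not survive in an unsigned uber jar.
SIGNATURE_SUFFIXES = (".SF", ".DSA", ".RSA")

def leftover_signature_entries(jar_file):
    """Return META-INF signature entries that would break verification
    of an otherwise unsigned jar. Accepts a path or a file-like object."""
    with zipfile.ZipFile(jar_file) as jar:
        return [name for name in jar.namelist()
                if name.startswith("META-INF/")
                and name.upper().endswith(SIGNATURE_SUFFIXES)]
```

Running this against a jar assembled from signed dependencies would flag exactly the stray META-INF/DUMMY.SF and META-INF/DUMMY.DSA entries the patch excludes.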
[jira] [Updated] (HIVE-8185) hive-jdbc-0.14.0-SNAPSHOT-standalone.jar fails verification for signatures in build
[ https://issues.apache.org/jira/browse/HIVE-8185?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Deepesh Khandelwal updated HIVE-8185: - Attachment: HIVE-8185.2.patch Uploading a new one that works reliably. hive-jdbc-0.14.0-SNAPSHOT-standalone.jar fails verification for signatures in build --- Key: HIVE-8185 URL: https://issues.apache.org/jira/browse/HIVE-8185 Project: Hive Issue Type: Bug Components: JDBC Affects Versions: 0.14.0 Reporter: Gopal V Priority: Critical Attachments: HIVE-8185.1.patch, HIVE-8185.2.patch In the current build, running {code} jarsigner --verify ./lib/hive-jdbc-0.14.0-SNAPSHOT-standalone.jar Jar verification failed. {code} unless that jar is removed from the lib dir, all hive queries throw the following error {code} Exception in thread main java.lang.SecurityException: Invalid signature file digest for Manifest main attributes at sun.security.util.SignatureFileVerifier.processImpl(SignatureFileVerifier.java:240) at sun.security.util.SignatureFileVerifier.process(SignatureFileVerifier.java:193) at java.util.jar.JarVerifier.processEntry(JarVerifier.java:305) at java.util.jar.JarVerifier.update(JarVerifier.java:216) at java.util.jar.JarFile.initializeVerifier(JarFile.java:345) at java.util.jar.JarFile.getInputStream(JarFile.java:412) at sun.misc.URLClassPath$JarLoader$2.getInputStream(URLClassPath.java:775) {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-8185) hive-jdbc-0.14.0-SNAPSHOT-standalone.jar fails verification for signatures in build
[ https://issues.apache.org/jira/browse/HIVE-8185?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Deepesh Khandelwal updated HIVE-8185: - Status: Patch Available (was: Open) hive-jdbc-0.14.0-SNAPSHOT-standalone.jar fails verification for signatures in build --- Key: HIVE-8185 URL: https://issues.apache.org/jira/browse/HIVE-8185 Project: Hive Issue Type: Bug Components: JDBC Affects Versions: 0.14.0 Reporter: Gopal V Priority: Critical Attachments: HIVE-8185.1.patch, HIVE-8185.2.patch In the current build, running {code} jarsigner --verify ./lib/hive-jdbc-0.14.0-SNAPSHOT-standalone.jar Jar verification failed. {code} unless that jar is removed from the lib dir, all hive queries throw the following error {code} Exception in thread main java.lang.SecurityException: Invalid signature file digest for Manifest main attributes at sun.security.util.SignatureFileVerifier.processImpl(SignatureFileVerifier.java:240) at sun.security.util.SignatureFileVerifier.process(SignatureFileVerifier.java:193) at java.util.jar.JarVerifier.processEntry(JarVerifier.java:305) at java.util.jar.JarVerifier.update(JarVerifier.java:216) at java.util.jar.JarFile.initializeVerifier(JarFile.java:345) at java.util.jar.JarFile.getInputStream(JarFile.java:412) at sun.misc.URLClassPath$JarLoader$2.getInputStream(URLClassPath.java:775) {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-8185) hive-jdbc-0.14.0-SNAPSHOT-standalone.jar fails verification for signatures in build
[ https://issues.apache.org/jira/browse/HIVE-8185?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Deepesh Khandelwal updated HIVE-8185: - Status: Open (was: Patch Available) Uploading a new one that works reliably. hive-jdbc-0.14.0-SNAPSHOT-standalone.jar fails verification for signatures in build --- Key: HIVE-8185 URL: https://issues.apache.org/jira/browse/HIVE-8185 Project: Hive Issue Type: Bug Components: JDBC Affects Versions: 0.14.0 Reporter: Gopal V Priority: Critical Attachments: HIVE-8185.1.patch, HIVE-8185.2.patch In the current build, running {code} jarsigner --verify ./lib/hive-jdbc-0.14.0-SNAPSHOT-standalone.jar Jar verification failed. {code} unless that jar is removed from the lib dir, all hive queries throw the following error {code} Exception in thread main java.lang.SecurityException: Invalid signature file digest for Manifest main attributes at sun.security.util.SignatureFileVerifier.processImpl(SignatureFileVerifier.java:240) at sun.security.util.SignatureFileVerifier.process(SignatureFileVerifier.java:193) at java.util.jar.JarVerifier.processEntry(JarVerifier.java:305) at java.util.jar.JarVerifier.update(JarVerifier.java:216) at java.util.jar.JarFile.initializeVerifier(JarFile.java:345) at java.util.jar.JarFile.getInputStream(JarFile.java:412) at sun.misc.URLClassPath$JarLoader$2.getInputStream(URLClassPath.java:775) {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HIVE-8167) mvn install command broken by HIVE-8126 commit
Deepesh Khandelwal created HIVE-8167: Summary: mvn install command broken by HIVE-8126 commit Key: HIVE-8167 URL: https://issues.apache.org/jira/browse/HIVE-8167 Project: Hive Issue Type: Bug Components: Build Infrastructure, JDBC Affects Versions: 0.14.0 Reporter: Deepesh Khandelwal Assignee: Deepesh Khandelwal Priority: Critical HIVE-8126 broke the mvn install command by referencing a jar that got removed as part of the HIVE-8126 commit. {noformat} [INFO] Installing ~/hive/packaging/target/apache-hive-0.14.0-SNAPSHOT-jdbc.jar to ~/.m2/repository/org/apache/hive/hive-packaging/0.14.0-SNAPSHOT/hive-packaging-0.14.0-SNAPSHOT-standalone.jar {noformat} The file ~/hive/packaging/target/apache-hive-0.14.0-SNAPSHOT-jdbc.jar doesn't get created right now. Basically, the copy goal for the above jar needs to be revived for this command to succeed. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-8167) mvn install command broken by HIVE-8126 commit
[ https://issues.apache.org/jira/browse/HIVE-8167?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Deepesh Khandelwal updated HIVE-8167: - Attachment: HIVE-8167.1.patch Attaching a patch for review. mvn install command broken by HIVE-8126 commit -- Key: HIVE-8167 URL: https://issues.apache.org/jira/browse/HIVE-8167 Project: Hive Issue Type: Bug Components: Build Infrastructure, JDBC Affects Versions: 0.14.0 Reporter: Deepesh Khandelwal Assignee: Deepesh Khandelwal Priority: Critical Attachments: HIVE-8167.1.patch HIVE-8126 broke the mvn install command by referencing a jar that got removed as part of HIVE-8126 command. {noformat} [INFO] Installing ~/hive/packaging/target/apache-hive-0.14.0-SNAPSHOT-jdbc.jar to ~/.m2/repository/org/apache/hive/hive-packaging/0.14.0-SNAPSHOT/hive-packaging-0.14.0-SNAPSHOT-standalone.jar {noformat} The file ~/hive/packaging/target/apache-hive-0.14.0-SNAPSHOT-jdbc.jar doesn't get created right now. Basically a copy goal for the above jar needs to be revived for this command to succeed. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-8167) mvn install command broken by HIVE-8126 commit
[ https://issues.apache.org/jira/browse/HIVE-8167?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Deepesh Khandelwal updated HIVE-8167: - Fix Version/s: 0.14.0 Status: Patch Available (was: Open) mvn install command broken by HIVE-8126 commit -- Key: HIVE-8167 URL: https://issues.apache.org/jira/browse/HIVE-8167 Project: Hive Issue Type: Bug Components: Build Infrastructure, JDBC Affects Versions: 0.14.0 Reporter: Deepesh Khandelwal Assignee: Deepesh Khandelwal Priority: Critical Fix For: 0.14.0 Attachments: HIVE-8167.1.patch HIVE-8126 broke the mvn install command by referencing a jar that got removed as part of HIVE-8126 command. {noformat} [INFO] Installing ~/hive/packaging/target/apache-hive-0.14.0-SNAPSHOT-jdbc.jar to ~/.m2/repository/org/apache/hive/hive-packaging/0.14.0-SNAPSHOT/hive-packaging-0.14.0-SNAPSHOT-standalone.jar {noformat} The file ~/hive/packaging/target/apache-hive-0.14.0-SNAPSHOT-jdbc.jar doesn't get created right now. Basically a copy goal for the above jar needs to be revived for this command to succeed. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HIVE-8170) Hive Metastore schema script missing for mssql for v0.14.0
Deepesh Khandelwal created HIVE-8170: Summary: Hive Metastore schema script missing for mssql for v0.14.0 Key: HIVE-8170 URL: https://issues.apache.org/jira/browse/HIVE-8170 Project: Hive Issue Type: Bug Components: Database/Schema Affects Versions: 0.14.0 Reporter: Deepesh Khandelwal Assignee: Deepesh Khandelwal Hive Metastore schema script for Hive v0.14.0 is missing for MSSQL DB. This will break tools like schematool, which rely on it being present. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-8170) Hive Metastore schema script missing for mssql for v0.14.0
[ https://issues.apache.org/jira/browse/HIVE-8170?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Deepesh Khandelwal updated HIVE-8170: - Attachment: HIVE-8170.1.patch Attaching a patch for review. Hive Metastore schema script missing for mssql for v0.14.0 -- Key: HIVE-8170 URL: https://issues.apache.org/jira/browse/HIVE-8170 Project: Hive Issue Type: Bug Components: Database/Schema Affects Versions: 0.14.0 Reporter: Deepesh Khandelwal Assignee: Deepesh Khandelwal Attachments: HIVE-8170.1.patch Hive Metastore schema script for Hive v0.14.0 is missing for MSSQL DB. This will break tools like schematool, which rely on it being present. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HIVE-8175) Hive metastore upgrade script for Oracle is missing an upgrade step
Deepesh Khandelwal created HIVE-8175: Summary: Hive metastore upgrade script for Oracle is missing an upgrade step Key: HIVE-8175 URL: https://issues.apache.org/jira/browse/HIVE-8175 Project: Hive Issue Type: Bug Components: Database/Schema Affects Versions: 0.14.0 Reporter: Deepesh Khandelwal Assignee: Deepesh Khandelwal Hive metastore upgrade script from 0.13.0 to 0.14.0 for Oracle is missing the required upgrade step from HIVE-7118. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-8175) Hive metastore upgrade script for Oracle is missing an upgrade step
[ https://issues.apache.org/jira/browse/HIVE-8175?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Deepesh Khandelwal updated HIVE-8175: - Attachment: HIVE-8175.1.patch Attaching the patch for review. Hive metastore upgrade script for Oracle is missing an upgrade step --- Key: HIVE-8175 URL: https://issues.apache.org/jira/browse/HIVE-8175 Project: Hive Issue Type: Bug Components: Database/Schema Affects Versions: 0.14.0 Reporter: Deepesh Khandelwal Assignee: Deepesh Khandelwal Attachments: HIVE-8175.1.patch Hive metastore upgrade script from 0.13.0 to 0.14.0 for Oracle is missing the required upgrade step from HIVE-7118. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-8175) Hive metastore upgrade from v0.13.0 to v0.14.0 script for Oracle is missing an upgrade step
[ https://issues.apache.org/jira/browse/HIVE-8175?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Deepesh Khandelwal updated HIVE-8175: - Summary: Hive metastore upgrade from v0.13.0 to v0.14.0 script for Oracle is missing an upgrade step (was: Hive metastore upgrade script for Oracle is missing an upgrade step) Hive metastore upgrade from v0.13.0 to v0.14.0 script for Oracle is missing an upgrade step --- Key: HIVE-8175 URL: https://issues.apache.org/jira/browse/HIVE-8175 Project: Hive Issue Type: Bug Components: Database/Schema Affects Versions: 0.14.0 Reporter: Deepesh Khandelwal Assignee: Deepesh Khandelwal Attachments: HIVE-8175.1.patch Hive metastore upgrade script from 0.13.0 to 0.14.0 for Oracle is missing the required upgrade step from HIVE-7118. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-8126) Standalone hive-jdbc jar is not packaged in the Hive distribution
[ https://issues.apache.org/jira/browse/HIVE-8126?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Deepesh Khandelwal updated HIVE-8126: - Attachment: HIVE-8126.1.patch Attaching a patch for review. With the fix: {noformat} $ find . -name hive-jdbc-*.jar ./jdbc/target/hive-jdbc-0.14.0-SNAPSHOT-standalone.jar ./jdbc/target/hive-jdbc-0.14.0-SNAPSHOT.jar ./packaging/target/apache-hive-0.14.0-SNAPSHOT-bin/apache-hive-0.14.0-SNAPSHOT-bin/lib/hive-jdbc-0.14.0-SNAPSHOT-standalone.jar ./packaging/target/apache-hive-0.14.0-SNAPSHOT-bin/apache-hive-0.14.0-SNAPSHOT-bin/lib/hive-jdbc-0.14.0-SNAPSHOT.jar {noformat} Standalone hive-jdbc jar is not packaged in the Hive distribution - Key: HIVE-8126 URL: https://issues.apache.org/jira/browse/HIVE-8126 Project: Hive Issue Type: Bug Components: JDBC Affects Versions: 0.14.0 Reporter: Deepesh Khandelwal Assignee: Deepesh Khandelwal Fix For: 0.14.0 Attachments: HIVE-8126.1.patch With HIVE-538 we started creating the hive-jdbc-*-standalone.jar but the packaging/distribution does not contain the standalone jdbc jar. I would have expected it to be located under the lib folder of the distribution. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-8126) Standalone hive-jdbc jar is not packaged in the Hive distribution
[ https://issues.apache.org/jira/browse/HIVE-8126?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Deepesh Khandelwal updated HIVE-8126: - Status: Patch Available (was: Open) Standalone hive-jdbc jar is not packaged in the Hive distribution - Key: HIVE-8126 URL: https://issues.apache.org/jira/browse/HIVE-8126 Project: Hive Issue Type: Bug Components: JDBC Affects Versions: 0.14.0 Reporter: Deepesh Khandelwal Assignee: Deepesh Khandelwal Fix For: 0.14.0 Attachments: HIVE-8126.1.patch With HIVE-538 we started creating the hive-jdbc-*-standalone.jar but the packaging/distribution does not contain the standalone jdbc jar. I would have expected it to be located under the lib folder of the distribution. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HIVE-8062) Stats collection for columns fails on a partitioned table with null values in partitioning column
Deepesh Khandelwal created HIVE-8062: Summary: Stats collection for columns fails on a partitioned table with null values in partitioning column Key: HIVE-8062 URL: https://issues.apache.org/jira/browse/HIVE-8062 Project: Hive Issue Type: Bug Components: Statistics Affects Versions: 0.14.0 Reporter: Deepesh Khandelwal Steps to reproduce: 1. Create a data file abc.txt with the following contents: {noformat} a,1 b, {noformat} 2. Use the Hive CLI to create and load the partitioned table: {noformat} hive> create table abc(a string, b int); OK Time taken: 0.272 seconds hive> load data local inpath 'abc.txt' into table abc; Loading data to table default.abc Table default.abc stats: [numFiles=1, numRows=0, totalSize=7, rawDataSize=0] OK Time taken: 0.463 seconds hive> create table abc1(a string) partitioned by (b int); OK Time taken: 0.098 seconds hive> set hive.exec.dynamic.partition.mode=nonstrict; hive> insert overwrite table abc1 partition (b) select a, b from abc; Query ID = hrt_qa_20140911210909_1200fae7-1e18-4e0d-b74f-040453c27cff Total jobs = 1 Launching Job 1 out of 1 Status: Running (Executing on YARN cluster with App id application_1410457588978_0063) Map 1: -/- Reducer 2: 0/1 Map 1: 0/1 Reducer 2: 0/1 Map 1: 0(+1)/1 Reducer 2: 0/1 Map 1: 1/1 Reducer 2: 0(+1)/1 Map 1: 1/1 Reducer 2: 0/1 Map 1: 1/1 Reducer 2: 1/1 Status: Finished successfully Loading data to table default.abc1 partition (b=null) Loading partition {b=__HIVE_DEFAULT_PARTITION__} Partition default.abc1{b=__HIVE_DEFAULT_PARTITION__} stats: [numFiles=1, numRows=2, totalSize=7, rawDataSize=5] OK Time taken: 7.49 seconds {noformat} 3.
Now run the analyze statistics command for columns: {noformat} hive> analyze table abc1 partition (b) compute statistics for columns; Query ID = hrt_qa_20140911211010_440bdb4a-6a0d-496b-9d2e-5fc84db3d0ee Total jobs = 1 Launching Job 1 out of 1 Status: Running (Executing on YARN cluster with App id application_1410457588978_0063) Map 1: 0(+1)/1 Reducer 2: 0/1 Map 1: 1/1 Reducer 2: 0(+1)/1 Map 1: 1/1 Reducer 2: 1/1 Status: Finished successfully FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.ColumnStatsTask {noformat} The analyze statistics for columns command fails. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
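For context, the b=__HIVE_DEFAULT_PARTITION__ directory in the transcript comes from Hive's dynamic-partition naming rule: a null (or empty) partition value falls back to the name configured by hive.exec.default.partition.name. A rough Python model of that rule — partition_spec is an illustrative name, not Hive's code:

```python
# Default value of hive.exec.default.partition.name
DEFAULT_PARTITION_NAME = "__HIVE_DEFAULT_PARTITION__"

def partition_spec(col, value):
    """Name the partition directory for a dynamic-partition value;
    nulls and empty strings map to the default partition."""
    if value is None or value == "":
        value = DEFAULT_PARTITION_NAME
    return f"{col}={value}"
```

It is exactly this default partition that ColumnStatsTask then trips over when computing per-column statistics.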
[jira] [Commented] (HIVE-7947) Add message at the end of each testcase with timestamp in Webhcat system tests
[ https://issues.apache.org/jira/browse/HIVE-7947?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14119003#comment-14119003 ] Deepesh Khandelwal commented on HIVE-7947: -- Simple yet useful. +1 Add message at the end of each testcase with timestamp in Webhcat system tests -- Key: HIVE-7947 URL: https://issues.apache.org/jira/browse/HIVE-7947 Project: Hive Issue Type: Improvement Components: Tests, WebHCat Reporter: Jagruti Varia Assignee: Jagruti Varia Priority: Trivial Fix For: 0.14.0 Attachments: HIVE-7947.1.patch Currently, Webhcat e2e testsuite only prints message while starting test run: {noformat} Beginning test testcase at 1406716992 {noformat} It should also print ending message with timestamp similar to this: {noformat} Ending test testcase at 1406717992 {noformat} This change will make log collection easy for failed test cases. NO PRECOMMIT TESTS -- This message was sent by Atlassian JIRA (v6.3.4#6332)
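The marker pair is trivial to emit from any harness. A sketch of the idea in Python — the actual WebHCat e2e driver is Perl-based, and run_logged is a made-up name:

```python
import time

def run_logged(name, test_fn):
    """Print the begin/end markers described above around a test case,
    so per-test log slices can be cut out by timestamp."""
    print(f"Beginning test {name} at {int(time.time())}")
    try:
        return test_fn()
    finally:
        # Emitted even when the test fails, which is exactly the case
        # where the end marker helps log collection.
        print(f"Ending test {name} at {int(time.time())}")
```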
[jira] [Created] (HIVE-7693) Invalid column ref error in order by when using column alias in select clause and using having
Deepesh Khandelwal created HIVE-7693: Summary: Invalid column ref error in order by when using column alias in select clause and using having Key: HIVE-7693 URL: https://issues.apache.org/jira/browse/HIVE-7693 Project: Hive Issue Type: Bug Components: Query Processor Affects Versions: 0.13.0 Reporter: Deepesh Khandelwal Hive CLI session: {noformat} hive> create table abc(foo int, bar string); OK Time taken: 0.633 seconds hive> select foo as c0, count(*) as c1 from abc group by foo, bar having bar like '%abc%' order by foo; FAILED: SemanticException [Error 10004]: Line 1:93 Invalid table alias or column reference 'foo': (possible column names are: c0, c1) {noformat} Without the having clause, the query runs fine, for example: {code} select foo as c0, count(*) as c1 from abc group by foo, bar order by foo; {code} -- This message was sent by Atlassian JIRA (v6.2#6252)
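A common workaround (not from the report itself) is to order by the select-list alias, which is still visible after the group-by/having stage. Sketched here against sqlite3 purely to show the shape of the rewritten query — it is Hive's resolver, not SQLite's, that rejects the original form:

```python
import sqlite3

# Tiny in-memory table matching the schema from the report.
conn = sqlite3.connect(":memory:")
conn.execute("create table abc(foo int, bar text)")
conn.executemany("insert into abc values (?, ?)",
                 [(1, "xabcx"), (1, "xabcx"), (2, "zzz")])

# Ordering by the alias `c0` avoids referencing `foo`, which Hive can no
# longer see once having has been applied (possible columns are c0, c1).
rows = conn.execute(
    "select foo as c0, count(*) as c1 from abc "
    "group by foo, bar having bar like '%abc%' order by c0").fetchall()
# rows == [(1, 2)]: the (2, 'zzz') group is filtered out by having.
```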
[jira] [Commented] (HIVE-7054) Support ELT UDF in vectorized mode
[ https://issues.apache.org/jira/browse/HIVE-7054?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14068956#comment-14068956 ] Deepesh Khandelwal commented on HIVE-7054: -- Thanks [~jnp] for the review and [~ashutoshc] for the commit! Support ELT UDF in vectorized mode -- Key: HIVE-7054 URL: https://issues.apache.org/jira/browse/HIVE-7054 Project: Hive Issue Type: New Feature Components: Vectorization Affects Versions: 0.14.0 Reporter: Deepesh Khandelwal Assignee: Deepesh Khandelwal Fix For: 0.14.0 Attachments: HIVE-7054.2.patch, HIVE-7054.3.patch, HIVE-7054.4.patch, HIVE-7054.5.patch, HIVE-7054.patch Implement support for ELT udf in vectorized execution mode. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-7054) Support ELT UDF in vectorized mode
[ https://issues.apache.org/jira/browse/HIVE-7054?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Deepesh Khandelwal updated HIVE-7054: - Status: Open (was: Patch Available) Support ELT UDF in vectorized mode -- Key: HIVE-7054 URL: https://issues.apache.org/jira/browse/HIVE-7054 Project: Hive Issue Type: New Feature Components: Vectorization Affects Versions: 0.14.0 Reporter: Deepesh Khandelwal Assignee: Deepesh Khandelwal Fix For: 0.14.0 Attachments: HIVE-7054.2.patch, HIVE-7054.3.patch, HIVE-7054.4.patch, HIVE-7054.patch Implement support for ELT udf in vectorized execution mode. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-7054) Support ELT UDF in vectorized mode
[ https://issues.apache.org/jira/browse/HIVE-7054?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Deepesh Khandelwal updated HIVE-7054: - Attachment: HIVE-7054.5.patch Thanks Jitendra for the patch review. Attaching a new patch with the changes suggested. Support ELT UDF in vectorized mode -- Key: HIVE-7054 URL: https://issues.apache.org/jira/browse/HIVE-7054 Project: Hive Issue Type: New Feature Components: Vectorization Affects Versions: 0.14.0 Reporter: Deepesh Khandelwal Assignee: Deepesh Khandelwal Fix For: 0.14.0 Attachments: HIVE-7054.2.patch, HIVE-7054.3.patch, HIVE-7054.4.patch, HIVE-7054.5.patch, HIVE-7054.patch Implement support for ELT udf in vectorized execution mode. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-7054) Support ELT UDF in vectorized mode
[ https://issues.apache.org/jira/browse/HIVE-7054?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Deepesh Khandelwal updated HIVE-7054: - Status: Patch Available (was: Open) Support ELT UDF in vectorized mode -- Key: HIVE-7054 URL: https://issues.apache.org/jira/browse/HIVE-7054 Project: Hive Issue Type: New Feature Components: Vectorization Affects Versions: 0.14.0 Reporter: Deepesh Khandelwal Assignee: Deepesh Khandelwal Fix For: 0.14.0 Attachments: HIVE-7054.2.patch, HIVE-7054.3.patch, HIVE-7054.4.patch, HIVE-7054.5.patch, HIVE-7054.patch Implement support for ELT udf in vectorized execution mode. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HIVE-7054) Support ELT UDF in vectorized mode
[ https://issues.apache.org/jira/browse/HIVE-7054?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14060910#comment-14060910 ] Deepesh Khandelwal commented on HIVE-7054: -- The failed test doesn't seem to be related to my change. Support ELT UDF in vectorized mode -- Key: HIVE-7054 URL: https://issues.apache.org/jira/browse/HIVE-7054 Project: Hive Issue Type: New Feature Components: Vectorization Affects Versions: 0.14.0 Reporter: Deepesh Khandelwal Assignee: Deepesh Khandelwal Fix For: 0.14.0 Attachments: HIVE-7054.2.patch, HIVE-7054.3.patch, HIVE-7054.4.patch, HIVE-7054.patch Implement support for ELT udf in vectorized execution mode. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-7054) Support ELT UDF in vectorized mode
[ https://issues.apache.org/jira/browse/HIVE-7054?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Deepesh Khandelwal updated HIVE-7054: - Attachment: HIVE-7054.4.patch Support ELT UDF in vectorized mode -- Key: HIVE-7054 URL: https://issues.apache.org/jira/browse/HIVE-7054 Project: Hive Issue Type: New Feature Components: Vectorization Affects Versions: 0.14.0 Reporter: Deepesh Khandelwal Assignee: Deepesh Khandelwal Fix For: 0.14.0 Attachments: HIVE-7054.2.patch, HIVE-7054.3.patch, HIVE-7054.4.patch, HIVE-7054.patch Implement support for ELT udf in vectorized execution mode. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-7054) Support ELT UDF in vectorized mode
[ https://issues.apache.org/jira/browse/HIVE-7054?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Deepesh Khandelwal updated HIVE-7054: - Status: Open (was: Patch Available) Support ELT UDF in vectorized mode -- Key: HIVE-7054 URL: https://issues.apache.org/jira/browse/HIVE-7054 Project: Hive Issue Type: New Feature Components: Vectorization Affects Versions: 0.14.0 Reporter: Deepesh Khandelwal Assignee: Deepesh Khandelwal Fix For: 0.14.0 Attachments: HIVE-7054.2.patch, HIVE-7054.3.patch, HIVE-7054.4.patch, HIVE-7054.patch Implement support for ELT udf in vectorized execution mode. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-7054) Support ELT UDF in vectorized mode
[ https://issues.apache.org/jira/browse/HIVE-7054?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Deepesh Khandelwal updated HIVE-7054: - Status: Patch Available (was: Open) Fixed the failure in test org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_vector_elt I don't think the other two are related to my changes. Support ELT UDF in vectorized mode -- Key: HIVE-7054 URL: https://issues.apache.org/jira/browse/HIVE-7054 Project: Hive Issue Type: New Feature Components: Vectorization Affects Versions: 0.14.0 Reporter: Deepesh Khandelwal Assignee: Deepesh Khandelwal Fix For: 0.14.0 Attachments: HIVE-7054.2.patch, HIVE-7054.3.patch, HIVE-7054.4.patch, HIVE-7054.patch Implement support for ELT udf in vectorized execution mode. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HIVE-7287) hive --rcfilecat command is broken on Windows
[ https://issues.apache.org/jira/browse/HIVE-7287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14050684#comment-14050684 ] Deepesh Khandelwal commented on HIVE-7287: -- Thanks Jason for the review and commit! hive --rcfilecat command is broken on Windows - Key: HIVE-7287 URL: https://issues.apache.org/jira/browse/HIVE-7287 Project: Hive Issue Type: Bug Components: CLI, Windows Affects Versions: 0.13.0 Environment: Windows Reporter: Deepesh Khandelwal Assignee: Deepesh Khandelwal Fix For: 0.14.0 Attachments: HIVE-7287.1.patch {noformat} c:\ hive --rcfilecat --file-sizes --column-sizes-pretty /tmp/all100krc Not a valid JAR: C:\org.apache.hadoop.hive.cli.RCFileCat {noformat} NO PRECOMMIT TESTS -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HIVE-5789) WebHCat E2E tests do not launch on Windows
[ https://issues.apache.org/jira/browse/HIVE-5789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14050689#comment-14050689 ] Deepesh Khandelwal commented on HIVE-5789: -- Thanks Eugene and Thejas! WebHCat E2E tests do not launch on Windows -- Key: HIVE-5789 URL: https://issues.apache.org/jira/browse/HIVE-5789 Project: Hive Issue Type: Bug Components: Testing Infrastructure Affects Versions: 0.12.0 Environment: Windows Reporter: Deepesh Khandelwal Assignee: Deepesh Khandelwal Fix For: 0.14.0 Attachments: HIVE-5789.patch There are some assumptions in the build.xml invoking the perl script for running tests that makes them unsuitable for non-UNIX environments. NO PRECOMMIT TESTS -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Created] (HIVE-7339) hive --orcfiledump command is not supported on Windows
Deepesh Khandelwal created HIVE-7339: Summary: hive --orcfiledump command is not supported on Windows Key: HIVE-7339 URL: https://issues.apache.org/jira/browse/HIVE-7339 Project: Hive Issue Type: Bug Components: CLI, Windows Affects Versions: 0.13.0 Reporter: Deepesh Khandelwal Assignee: Deepesh Khandelwal On Linux the orcfiledump utility can be run using {noformat} hive --orcfiledump path_to_orc_file hive --service orcfiledump path_to_orc_file {noformat} The Hive CLI utility on Windows doesn't support this option. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-7339) hive --orcfiledump command is not supported on Windows
[ https://issues.apache.org/jira/browse/HIVE-7339?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Deepesh Khandelwal updated HIVE-7339: - Attachment: HIVE-7339.1.patch Attaching a patch that provides support on Windows. hive --orcfiledump command is not supported on Windows -- Key: HIVE-7339 URL: https://issues.apache.org/jira/browse/HIVE-7339 Project: Hive Issue Type: Bug Components: CLI, Windows Affects Versions: 0.13.0 Reporter: Deepesh Khandelwal Assignee: Deepesh Khandelwal Attachments: HIVE-7339.1.patch On Linux orcfiledump utility can be run using {noformat} hive --orcfiledump path_to_orc_file hive --service orcfiledump path_to_orc_file {noformat} Hive CLI utility on windows doesn't support the option. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-7339) hive --orcfiledump command is not supported on Windows
[ https://issues.apache.org/jira/browse/HIVE-7339?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Deepesh Khandelwal updated HIVE-7339: - Description: On Linux orcfiledump utility can be run using {noformat} hive --orcfiledump path_to_orc_file hive --service orcfiledump path_to_orc_file {noformat} Hive CLI utility on windows doesn't support the option. NO PRECOMMIT TESTS was: On Linux orcfiledump utility can be run using {noformat} hive --orcfiledump path_to_orc_file hive --service orcfiledump path_to_orc_file {noformat} Hive CLI utility on windows doesn't support the option. hive --orcfiledump command is not supported on Windows -- Key: HIVE-7339 URL: https://issues.apache.org/jira/browse/HIVE-7339 Project: Hive Issue Type: Bug Components: CLI, Windows Affects Versions: 0.13.0 Reporter: Deepesh Khandelwal Assignee: Deepesh Khandelwal Attachments: HIVE-7339.1.patch On Linux orcfiledump utility can be run using {noformat} hive --orcfiledump path_to_orc_file hive --service orcfiledump path_to_orc_file {noformat} Hive CLI utility on windows doesn't support the option. NO PRECOMMIT TESTS -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-7339) hive --orcfiledump command is not supported on Windows
[ https://issues.apache.org/jira/browse/HIVE-7339?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Deepesh Khandelwal updated HIVE-7339: - Status: Patch Available (was: Open) hive --orcfiledump command is not supported on Windows -- Key: HIVE-7339 URL: https://issues.apache.org/jira/browse/HIVE-7339 Project: Hive Issue Type: Bug Components: CLI, Windows Affects Versions: 0.13.0 Reporter: Deepesh Khandelwal Assignee: Deepesh Khandelwal Attachments: HIVE-7339.1.patch On Linux orcfiledump utility can be run using {noformat} hive --orcfiledump path_to_orc_file hive --service orcfiledump path_to_orc_file {noformat} Hive CLI utility on windows doesn't support the option. NO PRECOMMIT TESTS -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HIVE-7118) Oracle upgrade schema scripts do not map Java long datatype columns correctly for transaction related tables
[ https://issues.apache.org/jira/browse/HIVE-7118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14042475#comment-14042475 ] Deepesh Khandelwal commented on HIVE-7118: -- Thanks Alan for review and commit! Oracle upgrade schema scripts do not map Java long datatype columns correctly for transaction related tables Key: HIVE-7118 URL: https://issues.apache.org/jira/browse/HIVE-7118 Project: Hive Issue Type: Bug Components: Database/Schema Affects Versions: 0.14.0 Environment: Oracle DB Reporter: Deepesh Khandelwal Assignee: Deepesh Khandelwal Fix For: 0.14.0 Attachments: HIVE-7118-0.13.0.1.patch, HIVE-7118.1.patch In Transaction related tables, Java long column fields are mapped to NUMBER(10) which results in failure to persist the transaction ids which are incompatible. Following error is seen: {noformat} ORA-01438: value larger than specified precision allowed for this column {noformat} NO PRECOMMIT TESTS -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HIVE-6564) WebHCat E2E tests that launch MR jobs fail on check job completion timeout
[ https://issues.apache.org/jira/browse/HIVE-6564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14042477#comment-14042477 ] Deepesh Khandelwal commented on HIVE-6564: -- Thanks Eugene and Ashutosh! WebHCat E2E tests that launch MR jobs fail on check job completion timeout -- Key: HIVE-6564 URL: https://issues.apache.org/jira/browse/HIVE-6564 Project: Hive Issue Type: Bug Components: Tests, WebHCat Affects Versions: 0.13.0 Reporter: Deepesh Khandelwal Assignee: Deepesh Khandelwal Fix For: 0.14.0 Attachments: HIVE-6564.2.patch, HIVE-6564.patch WebHCat E2E tests that fire off an MR job are not correctly being detected as complete, so those tests time out. The problem happens because the JSON module available through CPAN returns 1 or 0 instead of true or false. NO PRECOMMIT TESTS -- This message was sent by Atlassian JIRA (v6.2#6252)
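The failure mode described above is generic: once a JSON binding surfaces booleans as 1/0, any status check that compares against the literal string "true" silently fails. A minimal Python sketch of the mismatch (the E2E harness itself is Perl; the function names here are illustrative, not the harness's code):

```python
import json

# Decode a WebHCat-style job-status document. Python's json yields True,
# but the cpan JSON module in the E2E harness surfaced booleans as 1/0;
# simulate that by coercing the decoded value.
doc = json.loads('{"completed": true}')
parser_value = int(doc["completed"])  # 1, as the Perl binding returned

def job_done_fragile(v):
    # Breaks when the parser yields 1 instead of the string "true".
    return str(v) == "true"

def job_done_robust(v):
    # Accepts both boolean spellings.
    return v in (True, 1, "true")
```

With `parser_value == 1`, the fragile check reports the job as still running forever, which is exactly the "check job completion timeout" the tests hit.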
[jira] [Created] (HIVE-7287) hive --rcfilecat command is broken on Windows
Deepesh Khandelwal created HIVE-7287: Summary: hive --rcfilecat command is broken on Windows Key: HIVE-7287 URL: https://issues.apache.org/jira/browse/HIVE-7287 Project: Hive Issue Type: Bug Components: CLI, Windows Affects Versions: 0.13.0 Environment: Windows Reporter: Deepesh Khandelwal Assignee: Deepesh Khandelwal {noformat} c:\ hive --rcfilecat --file-sizes --column-sizes-pretty /tmp/all100krc Not a valid JAR: C:\org.apache.hadoop.hive.cli.RCFileCat {noformat} -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-7287) hive --rcfilecat command is broken on Windows
[ https://issues.apache.org/jira/browse/HIVE-7287?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Deepesh Khandelwal updated HIVE-7287: - Attachment: HIVE-7287.1.patch Attaching a patch with the fix. Basically, JAR needs to be set before calling the execHiveCmd script; execHiveCmd in turn executes a hadoop jar %JAR% %CLASS% ... command. hive --rcfilecat command is broken on Windows - Key: HIVE-7287 URL: https://issues.apache.org/jira/browse/HIVE-7287 Project: Hive Issue Type: Bug Components: CLI, Windows Affects Versions: 0.13.0 Environment: Windows Reporter: Deepesh Khandelwal Assignee: Deepesh Khandelwal Attachments: HIVE-7287.1.patch {noformat} c:\ hive --rcfilecat --file-sizes --column-sizes-pretty /tmp/all100krc Not a valid JAR: C:\org.apache.hadoop.hive.cli.RCFileCat {noformat} -- This message was sent by Atlassian JIRA (v6.2#6252)
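The bug is purely an ordering issue: if JAR is still unset when execHiveCmd builds the hadoop jar command line, the class name slides into the jar-path slot and hadoop rejects it. A small Python sketch of that mechanism (the real scripts are Windows cmd files; these function names are illustrative only):

```python
def parse_hadoop_jar(argv):
    """Mimic 'hadoop jar': the first argument is taken as the jar path."""
    jar_path = argv[0]
    if not jar_path.endswith(".jar"):
        return "Not a valid JAR: " + jar_path
    return "running " + jar_path

def exec_hive_cmd(jar, cls, *args):
    """Mimic execHiveCmd handing its argument list to 'hadoop jar'.
    When jar is empty (JAR set too late), the class name becomes argv[0]."""
    argv = ([jar] if jar else []) + [cls, *args]
    return parse_hadoop_jar(argv)

# JAR unset -> reproduces the reported error message.
broken = exec_hive_cmd("", "org.apache.hadoop.hive.cli.RCFileCat")
# JAR set first (hypothetical jar name) -> the command runs.
fixed = exec_hive_cmd("hive-cli.jar", "org.apache.hadoop.hive.cli.RCFileCat",
                      "--file-sizes", "/tmp/all100krc")
```

The `broken` case yields the same "Not a valid JAR: ...RCFileCat" string seen in the issue, which is why the patch simply moves the JAR assignment ahead of the execHiveCmd call.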
[jira] [Updated] (HIVE-7287) hive --rcfilecat command is broken on Windows
[ https://issues.apache.org/jira/browse/HIVE-7287?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Deepesh Khandelwal updated HIVE-7287: - Description: {noformat} c:\ hive --rcfilecat --file-sizes --column-sizes-pretty /tmp/all100krc Not a valid JAR: C:\org.apache.hadoop.hive.cli.RCFileCat {noformat} NO PRECOMMIT TESTS was: {noformat} c:\ hive --rcfilecat --file-sizes --column-sizes-pretty /tmp/all100krc Not a valid JAR: C:\org.apache.hadoop.hive.cli.RCFileCat {noformat} hive --rcfilecat command is broken on Windows - Key: HIVE-7287 URL: https://issues.apache.org/jira/browse/HIVE-7287 Project: Hive Issue Type: Bug Components: CLI, Windows Affects Versions: 0.13.0 Environment: Windows Reporter: Deepesh Khandelwal Assignee: Deepesh Khandelwal Attachments: HIVE-7287.1.patch {noformat} c:\ hive --rcfilecat --file-sizes --column-sizes-pretty /tmp/all100krc Not a valid JAR: C:\org.apache.hadoop.hive.cli.RCFileCat {noformat} NO PRECOMMIT TESTS -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (HIVE-7118) Oracle upgrade schema scripts do not map Java long datatype columns correctly for transaction related tables
[ https://issues.apache.org/jira/browse/HIVE-7118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14039106#comment-14039106 ] Deepesh Khandelwal commented on HIVE-7118: -- It's not clear to me where the upgrade script (019-HIVE-7118.oracle.sql) would be invoked from. It may not be desirable to call it from the upgrade-0.12.0-to-0.13.0.oracle.sql script, as people already on 0.13 will miss it. What do you think? Oracle upgrade schema scripts do not map Java long datatype columns correctly for transaction related tables Key: HIVE-7118 URL: https://issues.apache.org/jira/browse/HIVE-7118 Project: Hive Issue Type: Bug Components: Database/Schema Affects Versions: 0.14.0 Environment: Oracle DB Reporter: Deepesh Khandelwal Assignee: Deepesh Khandelwal Fix For: 0.14.0 Attachments: HIVE-7118.1.patch In Transaction related tables, Java long column fields are mapped to NUMBER(10) which results in failure to persist the transaction ids which are incompatible. Following error is seen: {noformat} ORA-01438: value larger than specified precision allowed for this column {noformat} NO PRECOMMIT TESTS -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-7118) Oracle upgrade schema scripts do not map Java long datatype columns correctly for transaction related tables
[ https://issues.apache.org/jira/browse/HIVE-7118?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Deepesh Khandelwal updated HIVE-7118: - Attachment: HIVE-7118-0.13.0.1.patch I have attached a new patch which provides the following: 019-HIVE-7118.oracle.sql - Intended for users who are already on Hive 0.13.0. Will need to run the script manually against their existing hive metastore schema. hive-txn-schema-0.13.0.oracle.sql hive-schema-0.13.0.oracle.sql - For fresh installs. Oracle upgrade schema scripts do not map Java long datatype columns correctly for transaction related tables Key: HIVE-7118 URL: https://issues.apache.org/jira/browse/HIVE-7118 Project: Hive Issue Type: Bug Components: Database/Schema Affects Versions: 0.14.0 Environment: Oracle DB Reporter: Deepesh Khandelwal Assignee: Deepesh Khandelwal Fix For: 0.14.0 Attachments: HIVE-7118-0.13.0.1.patch, HIVE-7118.1.patch In Transaction related tables, Java long column fields are mapped to NUMBER(10) which results in failure to persist the transaction ids which are incompatible. Following error is seen: {noformat} ORA-01438: value larger than specified precision allowed for this column {noformat} NO PRECOMMIT TESTS -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Created] (HIVE-7268) On Windows Hive jobs in Webhcat always run on default MR mode
Deepesh Khandelwal created HIVE-7268: Summary: On Windows Hive jobs in Webhcat always run on default MR mode Key: HIVE-7268 URL: https://issues.apache.org/jira/browse/HIVE-7268 Project: Hive Issue Type: Bug Components: WebHCat Affects Versions: 0.13.0 Reporter: Deepesh Khandelwal Assignee: Deepesh Khandelwal Fix For: 0.14.0 On Windows the fix from HIVE-7065 doesn't work because the templeton.cmd script does not include the Hive configuration directory in the classpath. So when the hive.execution.engine property is set to tez in HIVE_CONF_DIR/hive-site.xml, WebHCat doesn't see it and defaults to mr. This prevents Hive jobs run from WebHCat from using the tez execution engine. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-7268) On Windows Hive jobs in Webhcat always run on default MR mode
[ https://issues.apache.org/jira/browse/HIVE-7268?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Deepesh Khandelwal updated HIVE-7268: - Attachment: HIVE-7268.1.patch Attaching a patch which adds HIVE_HOME/conf to the webhcat classpath on Windows. On Windows Hive jobs in Webhcat always run on default MR mode - Key: HIVE-7268 URL: https://issues.apache.org/jira/browse/HIVE-7268 Project: Hive Issue Type: Bug Components: WebHCat Affects Versions: 0.13.0 Reporter: Deepesh Khandelwal Assignee: Deepesh Khandelwal Fix For: 0.14.0 Attachments: HIVE-7268.1.patch On Windows fix from HIVE-7065 doesn't work as the templeton.cmd script does not include the Hive configuration directory in the classpath. So when hive.execution.engine property is set to tez in HIVE_CONF_DIR/hive-site.xml, webhcat doesn't see it and defaults it to mr. This prevents Hive jobs running from WebHCat to use the tez execution engine. -- This message was sent by Atlassian JIRA (v6.2#6252)
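The behavior boils down to a classpath-dependent config lookup: when hive-site.xml is not visible, the hive.execution.engine key is simply absent and the lookup falls back to its default of mr. A minimal Python sketch of that lookup (illustrative only, not WebHCat's actual code):

```python
def execution_engine(conf):
    """Return hive.execution.engine, falling back to the default 'mr'
    when the hive-site.xml settings were never loaded."""
    return conf.get("hive.execution.engine", "mr")

# Conf dir on the classpath: hive-site.xml is visible, so tez wins.
with_conf = execution_engine({"hive.execution.engine": "tez"})

# Conf dir missing from the classpath (the templeton.cmd bug): the key
# is absent, so every job silently runs on mr.
without_conf = execution_engine({})
```

This is why the patch only needs to add HIVE_HOME/conf to the classpath; no engine-selection logic changes at all.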
[jira] [Created] (HIVE-7118) Oracle upgrade schema scripts do not map Java long datatype columns correctly for transaction related tables
Deepesh Khandelwal created HIVE-7118: Summary: Oracle upgrade schema scripts do not map Java long datatype columns correctly for transaction related tables Key: HIVE-7118 URL: https://issues.apache.org/jira/browse/HIVE-7118 Project: Hive Issue Type: Bug Components: Database/Schema Affects Versions: 0.14.0 Environment: Oracle DB Reporter: Deepesh Khandelwal Assignee: Deepesh Khandelwal In transaction-related tables, Java long columns are mapped to NUMBER(10), which is too small to persist large transaction ids. The following error is seen: {noformat} ORA-01438: value larger than specified precision allowed for this column {noformat} -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-7118) Oracle upgrade schema scripts do not map Java long datatype columns correctly for transaction related tables
[ https://issues.apache.org/jira/browse/HIVE-7118?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Deepesh Khandelwal updated HIVE-7118: - Attachment: HIVE-7118.1.patch Attaching a patch that changes NUMBER(10) to NUMBER(19) datatype for long columns for Oracle metastore schema upgrade scripts. Oracle upgrade schema scripts do not map Java long datatype columns correctly for transaction related tables Key: HIVE-7118 URL: https://issues.apache.org/jira/browse/HIVE-7118 Project: Hive Issue Type: Bug Components: Database/Schema Affects Versions: 0.14.0 Environment: Oracle DB Reporter: Deepesh Khandelwal Assignee: Deepesh Khandelwal Attachments: HIVE-7118.1.patch In Transaction related tables, Java long column fields are mapped to NUMBER(10) which results in failure to persist the transaction ids which are incompatible. Following error is seen: {noformat} ORA-01438: value larger than specified precision allowed for this column {noformat} -- This message was sent by Atlassian JIRA (v6.2#6252)
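The underlying mismatch is one of digit counts: a Java long can reach 2^63 - 1, which needs 19 decimal digits, while Oracle NUMBER(10) stores at most 10, so large transaction ids raise ORA-01438. A quick Python check of the arithmetic (the helper name is illustrative):

```python
# Largest value a Java long can hold.
LONG_MAX = 2**63 - 1

def fits_in_number(value, precision):
    """True when value's decimal digits fit in Oracle NUMBER(precision)."""
    return len(str(abs(value))) <= precision

digits = len(str(LONG_MAX))                 # 19 digits
overflows = not fits_in_number(LONG_MAX, 10)  # NUMBER(10) -> ORA-01438
covered = fits_in_number(LONG_MAX, 19)        # NUMBER(19) spans a full long
```

That is the whole rationale for the NUMBER(10) to NUMBER(19) change in the patch.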
[jira] [Updated] (HIVE-7118) Oracle upgrade schema scripts do not map Java long datatype columns correctly for transaction related tables
[ https://issues.apache.org/jira/browse/HIVE-7118?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Deepesh Khandelwal updated HIVE-7118: - Fix Version/s: 0.14.0 Status: Patch Available (was: Open) Oracle upgrade schema scripts do not map Java long datatype columns correctly for transaction related tables Key: HIVE-7118 URL: https://issues.apache.org/jira/browse/HIVE-7118 Project: Hive Issue Type: Bug Components: Database/Schema Affects Versions: 0.14.0 Environment: Oracle DB Reporter: Deepesh Khandelwal Assignee: Deepesh Khandelwal Fix For: 0.14.0 Attachments: HIVE-7118.1.patch In Transaction related tables, Java long column fields are mapped to NUMBER(10) which results in failure to persist the transaction ids which are incompatible. Following error is seen: {noformat} ORA-01438: value larger than specified precision allowed for this column {noformat} NO PRECOMMIT TESTS -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-7054) Support ELT UDF in vectorized mode
[ https://issues.apache.org/jira/browse/HIVE-7054?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Deepesh Khandelwal updated HIVE-7054: - Status: Open (was: Patch Available) Support ELT UDF in vectorized mode -- Key: HIVE-7054 URL: https://issues.apache.org/jira/browse/HIVE-7054 Project: Hive Issue Type: New Feature Components: Vectorization Affects Versions: 0.14.0 Reporter: Deepesh Khandelwal Assignee: Deepesh Khandelwal Fix For: 0.14.0 Attachments: HIVE-7054.2.patch, HIVE-7054.patch Implement support for ELT udf in vectorized execution mode. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-7054) Support ELT UDF in vectorized mode
[ https://issues.apache.org/jira/browse/HIVE-7054?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Deepesh Khandelwal updated HIVE-7054: - Attachment: HIVE-7054.3.patch Attaching a patch that fixes the vectorized decimal cast and math regression. Support ELT UDF in vectorized mode -- Key: HIVE-7054 URL: https://issues.apache.org/jira/browse/HIVE-7054 Project: Hive Issue Type: New Feature Components: Vectorization Affects Versions: 0.14.0 Reporter: Deepesh Khandelwal Assignee: Deepesh Khandelwal Fix For: 0.14.0 Attachments: HIVE-7054.2.patch, HIVE-7054.3.patch, HIVE-7054.patch Implement support for ELT udf in vectorized execution mode. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-7054) Support ELT UDF in vectorized mode
[ https://issues.apache.org/jira/browse/HIVE-7054?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Deepesh Khandelwal updated HIVE-7054: - Status: Patch Available (was: Open) Support ELT UDF in vectorized mode -- Key: HIVE-7054 URL: https://issues.apache.org/jira/browse/HIVE-7054 Project: Hive Issue Type: New Feature Components: Vectorization Affects Versions: 0.14.0 Reporter: Deepesh Khandelwal Assignee: Deepesh Khandelwal Fix For: 0.14.0 Attachments: HIVE-7054.2.patch, HIVE-7054.3.patch, HIVE-7054.patch Implement support for ELT udf in vectorized execution mode. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-7054) Support ELT UDF in vectorized mode
[ https://issues.apache.org/jira/browse/HIVE-7054?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Deepesh Khandelwal updated HIVE-7054: - Attachment: HIVE-7054.2.patch Support ELT UDF in vectorized mode -- Key: HIVE-7054 URL: https://issues.apache.org/jira/browse/HIVE-7054 Project: Hive Issue Type: New Feature Components: Vectorization Affects Versions: 0.14.0 Reporter: Deepesh Khandelwal Assignee: Deepesh Khandelwal Fix For: 0.14.0 Attachments: HIVE-7054.2.patch, HIVE-7054.patch Implement support for ELT udf in vectorized execution mode. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-7054) Support ELT UDF in vectorized mode
[ https://issues.apache.org/jira/browse/HIVE-7054?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Deepesh Khandelwal updated HIVE-7054: - Status: Patch Available (was: Open) Support ELT UDF in vectorized mode -- Key: HIVE-7054 URL: https://issues.apache.org/jira/browse/HIVE-7054 Project: Hive Issue Type: New Feature Components: Vectorization Affects Versions: 0.14.0 Reporter: Deepesh Khandelwal Assignee: Deepesh Khandelwal Fix For: 0.14.0 Attachments: HIVE-7054.2.patch, HIVE-7054.patch Implement support for ELT udf in vectorized execution mode. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-7054) Support ELT UDF in vectorized mode
[ https://issues.apache.org/jira/browse/HIVE-7054?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Deepesh Khandelwal updated HIVE-7054: - Attachment: (was: HIVE-7054.2.patch) Support ELT UDF in vectorized mode -- Key: HIVE-7054 URL: https://issues.apache.org/jira/browse/HIVE-7054 Project: Hive Issue Type: New Feature Components: Vectorization Affects Versions: 0.14.0 Reporter: Deepesh Khandelwal Assignee: Deepesh Khandelwal Fix For: 0.14.0 Attachments: HIVE-7054.2.patch, HIVE-7054.patch Implement support for ELT udf in vectorized execution mode. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-7054) Support ELT UDF in vectorized mode
[ https://issues.apache.org/jira/browse/HIVE-7054?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Deepesh Khandelwal updated HIVE-7054: - Status: Open (was: Patch Available) Support ELT UDF in vectorized mode -- Key: HIVE-7054 URL: https://issues.apache.org/jira/browse/HIVE-7054 Project: Hive Issue Type: New Feature Components: Vectorization Affects Versions: 0.14.0 Reporter: Deepesh Khandelwal Assignee: Deepesh Khandelwal Fix For: 0.14.0 Attachments: HIVE-7054.2.patch, HIVE-7054.patch Implement support for ELT udf in vectorized execution mode. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-7054) Support ELT UDF in vectorized mode
[ https://issues.apache.org/jira/browse/HIVE-7054?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Deepesh Khandelwal updated HIVE-7054: - Attachment: HIVE-7054.2.patch Support ELT UDF in vectorized mode -- Key: HIVE-7054 URL: https://issues.apache.org/jira/browse/HIVE-7054 Project: Hive Issue Type: New Feature Components: Vectorization Affects Versions: 0.14.0 Reporter: Deepesh Khandelwal Assignee: Deepesh Khandelwal Fix For: 0.14.0 Attachments: HIVE-7054.2.patch, HIVE-7054.patch Implement support for ELT udf in vectorized execution mode. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-7054) Support ELT UDF in vectorized mode
[ https://issues.apache.org/jira/browse/HIVE-7054?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Deepesh Khandelwal updated HIVE-7054: - Status: Patch Available (was: Open) Thanks [~rusanu] for quick review on the review board! Attached patch attempts to incorporate that. Also minor update to the qtest results output file. Support ELT UDF in vectorized mode -- Key: HIVE-7054 URL: https://issues.apache.org/jira/browse/HIVE-7054 Project: Hive Issue Type: New Feature Components: Vectorization Affects Versions: 0.14.0 Reporter: Deepesh Khandelwal Assignee: Deepesh Khandelwal Fix For: 0.14.0 Attachments: HIVE-7054.2.patch, HIVE-7054.patch Implement support for ELT udf in vectorized execution mode. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-7054) Support ELT UDF in vectorized mode
[ https://issues.apache.org/jira/browse/HIVE-7054?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Deepesh Khandelwal updated HIVE-7054: - Status: Patch Available (was: Open) Support ELT UDF in vectorized mode -- Key: HIVE-7054 URL: https://issues.apache.org/jira/browse/HIVE-7054 Project: Hive Issue Type: New Feature Components: Vectorization Affects Versions: 0.14.0 Reporter: Deepesh Khandelwal Assignee: Deepesh Khandelwal Fix For: 0.14.0 Attachments: HIVE-7054.patch Implement support for ELT udf in vectorized execution mode. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Created] (HIVE-7054) Support ELT UDF in vectorized mode
Deepesh Khandelwal created HIVE-7054: Summary: Support ELT UDF in vectorized mode Key: HIVE-7054 URL: https://issues.apache.org/jira/browse/HIVE-7054 Project: Hive Issue Type: New Feature Components: Vectorization Affects Versions: 0.14.0 Reporter: Deepesh Khandelwal Assignee: Deepesh Khandelwal Fix For: 0.14.0 Implement support for ELT udf in vectorized execution mode. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-7054) Support ELT UDF in vectorized mode
[ https://issues.apache.org/jira/browse/HIVE-7054?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Deepesh Khandelwal updated HIVE-7054: - Attachment: HIVE-7054.patch Here is the review board entry: https://reviews.apache.org/r/21416/ Please review. Support ELT UDF in vectorized mode -- Key: HIVE-7054 URL: https://issues.apache.org/jira/browse/HIVE-7054 Project: Hive Issue Type: New Feature Components: Vectorization Affects Versions: 0.14.0 Reporter: Deepesh Khandelwal Assignee: Deepesh Khandelwal Fix For: 0.14.0 Attachments: HIVE-7054.patch Implement support for ELT udf in vectorized execution mode. -- This message was sent by Atlassian JIRA (v6.2#6252)
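For context on the UDF being vectorized: ELT(n, s1, s2, ...) returns the n-th string argument (1-based), or NULL when n is out of range, and the vectorized version applies this per row over column vectors. A minimal Python sketch of those semantics (names are illustrative, not Hive's Java API):

```python
def elt(n, *strings):
    """Return the n-th string (1-based), or None when n is NULL or
    out of range, mirroring Hive's ELT() semantics."""
    if n is None or n < 1 or n > len(strings):
        return None
    return strings[n - 1]

def elt_column(n_col, *string_cols):
    """Row-at-a-time view of the vectorized evaluation: apply ELT
    across parallel column vectors."""
    return [elt(n, *row) for n, row in zip(n_col, zip(*string_cols))]
```

The actual vectorized expression operates on Hive's BytesColumnVector batches rather than Python lists, but the per-row result is the same.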
[jira] [Created] (HIVE-6927) Add support for MSSQL in schematool
Deepesh Khandelwal created HIVE-6927: Summary: Add support for MSSQL in schematool Key: HIVE-6927 URL: https://issues.apache.org/jira/browse/HIVE-6927 Project: Hive Issue Type: Bug Components: Metastore Affects Versions: 0.13.0 Reporter: Deepesh Khandelwal Assignee: Deepesh Khandelwal Schematool is the preferred way of initializing the schema for Hive. Since HIVE-6862 provided the script for MSSQL, it would be nice to add support for it in schematool. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-6927) Add support for MSSQL in schematool
[ https://issues.apache.org/jira/browse/HIVE-6927?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Deepesh Khandelwal updated HIVE-6927: - Attachment: HIVE-6927.patch Attaching the patch for review. Add support for MSSQL in schematool --- Key: HIVE-6927 URL: https://issues.apache.org/jira/browse/HIVE-6927 Project: Hive Issue Type: Bug Components: Metastore Affects Versions: 0.13.0 Reporter: Deepesh Khandelwal Assignee: Deepesh Khandelwal Attachments: HIVE-6927.patch Schematool is the preferred way of initializing schema for Hive. Since HIVE-6862 provided the script for MSSQL it would be nice to add the support for it in schematool. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Created] (HIVE-6915) Hive Hbase queries fail on secure Tez cluster
Deepesh Khandelwal created HIVE-6915: Summary: Hive Hbase queries fail on secure Tez cluster Key: HIVE-6915 URL: https://issues.apache.org/jira/browse/HIVE-6915 Project: Hive Issue Type: Bug Components: Tez Affects Versions: 0.13.0 Environment: Kerberos secure Tez cluster Reporter: Deepesh Khandelwal Hive queries reading and writing to HBase are currently failing with the following exception in a secure Tez cluster:
{noformat}
2014-04-14 13:47:05,644 FATAL [InputInitializer [Map 1] #0] org.apache.hadoop.ipc.RpcClient: SASL authentication failed. The most likely cause is missing or invalid credentials. Consider 'kinit'.
javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
	at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:212)
	at org.apache.hadoop.hbase.security.HBaseSaslRpcClient.saslConnect(HBaseSaslRpcClient.java:152)
	at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupSaslConnection(RpcClient.java:792)
	at org.apache.hadoop.hbase.ipc.RpcClient$Connection.access$800(RpcClient.java:349)
	at org.apache.hadoop.hbase.ipc.RpcClient$Connection$2.run(RpcClient.java:918)
	at org.apache.hadoop.hbase.ipc.RpcClient$Connection$2.run(RpcClient.java:915)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1557)
	at org.apache.hadoop.hbase.ipc.RpcClient$Connection.setupIOstreams(RpcClient.java:915)
	at org.apache.hadoop.hbase.ipc.RpcClient$Connection.writeRequest(RpcClient.java:1065)
	at org.apache.hadoop.hbase.ipc.RpcClient$Connection.tracedWriteRequest(RpcClient.java:1032)
	at org.apache.hadoop.hbase.ipc.RpcClient.call(RpcClient.java:1474)
	at org.apache.hadoop.hbase.ipc.RpcClient.callBlockingMethod(RpcClient.java:1684)
	at org.apache.hadoop.hbase.ipc.RpcClient$BlockingRpcChannelImplementation.callBlockingMethod(RpcClient.java:1737)
	at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.execService(ClientProtos.java:29288)
	at org.apache.hadoop.hbase.protobuf.ProtobufUtil.execService(ProtobufUtil.java:1562)
	at org.apache.hadoop.hbase.ipc.RegionCoprocessorRpcChannel$1.call(RegionCoprocessorRpcChannel.java:87)
	at org.apache.hadoop.hbase.ipc.RegionCoprocessorRpcChannel$1.call(RegionCoprocessorRpcChannel.java:84)
	at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:121)
	at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:97)
	at org.apache.hadoop.hbase.ipc.RegionCoprocessorRpcChannel.callExecService(RegionCoprocessorRpcChannel.java:90)
	at org.apache.hadoop.hbase.ipc.CoprocessorRpcChannel.callBlockingMethod(CoprocessorRpcChannel.java:67)
	at org.apache.hadoop.hbase.protobuf.generated.AuthenticationProtos$AuthenticationService$BlockingStub.getAuthenticationToken(AuthenticationProtos.java:4512)
	at org.apache.hadoop.hbase.security.token.TokenUtil.obtainToken(TokenUtil.java:60)
	at org.apache.hadoop.hbase.security.token.TokenUtil$3.run(TokenUtil.java:174)
	at org.apache.hadoop.hbase.security.token.TokenUtil$3.run(TokenUtil.java:172)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1557)
	at org.apache.hadoop.hbase.security.token.TokenUtil.obtainTokenForJob(TokenUtil.java:171)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:601)
	at org.apache.hadoop.hbase.util.Methods.call(Methods.java:39)
	at org.apache.hadoop.hbase.security.User$SecureHadoopUser.obtainAuthTokenForJob(User.java:334)
	at org.apache.hadoop.hbase.mapred.TableMapReduceUtil.initCredentials(TableMapReduceUtil.java:201)
	at org.apache.hadoop.hive.hbase.HiveHBaseTableInputFormat.getSplits(HiveHBaseTableInputFormat.java:415)
	at org.apache.hadoop.hive.ql.io.HiveInputFormat.addSplitsForGroup(HiveInputFormat.java:291)
	at org.apache.hadoop.hive.ql.io.HiveInputFormat.getSplits(HiveInputFormat.java:372)
	at org.apache.hadoop.mapred.split.TezGroupedSplitsInputFormat.getSplits(TezGroupedSplitsInputFormat.java:68)
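The trace shows the failure occurring while `HiveHBaseTableInputFormat.getSplits` runs inside the Tez AM's input initializer: `TokenUtil.obtainToken` tries to authenticate to HBase there, where no Kerberos TGT is available. One plausible direction (a sketch only, not the committed fix) is to fetch the HBase delegation token on the client, where the user's TGT from `kinit` exists, and ship it with the job credentials. The class and method names below are taken from the stack trace, but exact signatures differ across HBase versions, and the snippet needs HBase client jars plus a live secure cluster to run:

```
// Illustrative sketch only: assumes HBase client jars on the classpath and a
// secure cluster; signatures vary by HBase version.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.security.token.TokenUtil;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.security.UserGroupInformation;

public class ObtainHBaseTokenSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        Job job = Job.getInstance(conf);

        // On the client, a valid TGT (from kinit) is present, so the HBase
        // delegation token can be obtained here and placed in the job's
        // credentials. Inside the Tez AM's input initializer there is no TGT,
        // which is why the same call fails there with
        // "Failed to find any Kerberos tgt".
        UserGroupInformation ugi = UserGroupInformation.getCurrentUser();
        TokenUtil.obtainTokenForJob(conf, ugi, job);
    }
}
```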
[jira] [Created] (HIVE-6771) Update WebHCat E2E tests now that comments is reported correctly in describe table output
Deepesh Khandelwal created HIVE-6771: Summary: Update WebHCat E2E tests now that comments is reported correctly in describe table output Key: HIVE-6771 URL: https://issues.apache.org/jira/browse/HIVE-6771 Project: Hive Issue Type: Bug Components: Tests Affects Versions: 0.13.0 Reporter: Deepesh Khandelwal Assignee: Deepesh Khandelwal HIVE-6681 corrected the comments in the describe table output; earlier it would show "from deserializer" as the comment. Some WebHCat E2E tests still check for the string "from deserializer", overshadowing the actual comments. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-6771) Update WebHCat E2E tests now that comments is reported correctly in describe table output
[ https://issues.apache.org/jira/browse/HIVE-6771?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Deepesh Khandelwal updated HIVE-6771: - Attachment: HIVE-6771.patch Attaching a patch to correct the behavior in the 7 impacted tests. Update WebHCat E2E tests now that comments is reported correctly in describe table output --- Key: HIVE-6771 URL: https://issues.apache.org/jira/browse/HIVE-6771 Project: Hive Issue Type: Bug Components: Tests Affects Versions: 0.13.0 Reporter: Deepesh Khandelwal Assignee: Deepesh Khandelwal Attachments: HIVE-6771.patch HIVE-6681 corrected the comments in the describe table output; earlier it would show "from deserializer" as the comment. Some WebHCat E2E tests still check for the string "from deserializer", overshadowing the actual comments.
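The kind of expectation change the patched tests make can be illustrated with a small check of a describe-output row. The row values and expected patterns below are hypothetical, not taken from the actual E2E test data:

```python
import re

# Hypothetical describe-table rows: before HIVE-6681 the column comment was
# reported as the placeholder "from deserializer"; afterwards the real
# comment is returned.
old_row = {"name": "id", "type": "int", "comment": "from deserializer"}
new_row = {"name": "id", "type": "int", "comment": "employee id"}

def comment_matches(row, expected_pattern):
    """Check a describe-output row's comment against an expected regex."""
    return re.search(expected_pattern, row["comment"]) is not None

# The old tests asserted the placeholder string; after the fix they must
# assert the actual comment instead.
assert comment_matches(old_row, r"from deserializer")
assert comment_matches(new_row, r"employee id")
assert not comment_matches(new_row, r"from deserializer")
```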