[jira] [Commented] (HIVE-8324) Shim KerberosName (causes build failure on hadoop-1)
[ https://issues.apache.org/jira/browse/HIVE-8324?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14157741#comment-14157741 ] Hive QA commented on HIVE-8324: --- {color:red}Overall{color}: -1 at least one tests failed Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12672611/HIVE-8324.1.patch {color:red}ERROR:{color} -1 due to 14 failed/errored test(s), 6541 tests executed *Failed tests:* {noformat} org.apache.hive.hcatalog.pig.TestHCatLoader.testColumnarStorePushdown[5] org.apache.hive.hcatalog.pig.TestHCatLoader.testConvertBooleanToInt[5] org.apache.hive.hcatalog.pig.TestHCatLoader.testGetInputBytes[5] org.apache.hive.hcatalog.pig.TestHCatLoader.testProjectionsBasic[5] org.apache.hive.hcatalog.pig.TestHCatLoader.testReadDataBasic[5] org.apache.hive.hcatalog.pig.TestHCatLoader.testReadDataPrimitiveTypes[5] org.apache.hive.hcatalog.pig.TestHCatLoader.testReadPartitionedBasic[5] org.apache.hive.hcatalog.pig.TestHCatLoader.testSchemaLoadBasic[5] org.apache.hive.hcatalog.pig.TestHCatLoader.testSchemaLoadComplex[5] org.apache.hive.hcatalog.pig.TestHCatLoader.testSchemaLoadPrimitiveTypes[5] org.apache.hive.hcatalog.pig.TestHCatStorer.testDynamicPartitioningMultiPartColsNoDataInDataNoSpec[5] org.apache.hive.hcatalog.pig.TestHCatStorer.testEmptyStore[5] org.apache.hive.hcatalog.pig.TestHCatStorer.testNoAlias[5] org.apache.hive.hcatalog.pig.TestHCatStorer.testPartitionPublish[5] {noformat} Test results: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/1095/testReport Console output: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/1095/console Test logs: http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-1095/ Messages: {noformat} Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing 
org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 14 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12672611 Shim KerberosName (causes build failure on hadoop-1) Key: HIVE-8324 URL: https://issues.apache.org/jira/browse/HIVE-8324 Project: Hive Issue Type: Bug Affects Versions: 0.14.0 Reporter: Szehon Ho Assignee: Vaibhav Gumashta Priority: Blocker Fix For: 0.14.0 Attachments: HIVE-8324.1.patch Unfortunately even after HIVE-8265, there are still more compile failures. {code} [ERROR] Failed to execute goal org.apache.maven.plugins:maven-compiler-plugin:3.1:compile (default-compile) on project hive-service: Compilation failure: Compilation failure: [ERROR] /Users/szehon/svn-repos/trunk/service/src/java/org/apache/hive/service/cli/thrift/ThriftHttpServlet.java:[35,54] cannot find symbol [ERROR] symbol: class KerberosName [ERROR] location: package org.apache.hadoop.security.authentication.util [ERROR] /Users/szehon/svn-repos/trunk/service/src/java/org/apache/hive/service/cli/thrift/ThriftHttpServlet.java:[241,7] cannot find symbol [ERROR] symbol: class KerberosName [ERROR] location: class org.apache.hive.service.cli.thrift.ThriftHttpServlet.HttpKerberosServerAction [ERROR] /Users/szehon/svn-repos/trunk/service/src/java/org/apache/hive/service/cli/thrift/ThriftHttpServlet.java:[241,43] cannot find symbol [ERROR] symbol: class KerberosName [ERROR] location: class org.apache.hive.service.cli.thrift.ThriftHttpServlet.HttpKerberosServerAction [ERROR] /Users/szehon/svn-repos/trunk/service/src/java/org/apache/hive/service/cli/thrift/ThriftHttpServlet.java:[252,7] cannot find symbol [ERROR] symbol: class KerberosName [ERROR] location: class org.apache.hive.service.cli.thrift.ThriftHttpServlet.HttpKerberosServerAction [ERROR] /Users/szehon/svn-repos/trunk/service/src/java/org/apache/hive/service/cli/thrift/ThriftHttpServlet.java:[252,43] cannot find symbol [ERROR] symbol: class KerberosName [ERROR] location: 
class org.apache.hive.service.cli.thrift.ThriftHttpServlet.HttpKerberosServerAction {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
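The compile failure above arises because KerberosName lives in different packages across Hadoop versions (it moved into the hadoop-auth module in Hadoop 2), so a direct import cannot compile against both hadoop-1 and hadoop-2. A shim typically resolves such a class at runtime instead of at compile time. The sketch below is a hypothetical illustration of that reflective-fallback pattern, not Hive's actual shim code; the class and candidate names are assumptions:

```java
// Hedged sketch of a shim-style reflective lookup (not the HIVE-8324 patch):
// try each known location of a class that moved between Hadoop versions.
public class ShimResolver {
    static Class<?> resolveFirst(String... candidates) throws ClassNotFoundException {
        for (String name : candidates) {
            try {
                return Class.forName(name); // found in this Hadoop version
            } catch (ClassNotFoundException ignored) {
                // not here; try the next candidate location
            }
        }
        throw new ClassNotFoundException(String.join(", ", candidates));
    }

    public static void main(String[] args) throws Exception {
        // Demonstrate the fallback with a JDK class standing in for the
        // hadoop-1 location of KerberosName.
        Class<?> c = resolveFirst(
            "org.apache.hadoop.security.authentication.util.KerberosName", // hadoop-2 location
            "java.lang.String");                                           // stand-in fallback
        System.out.println(c.getName());
    }
}
```

Callers then work against the resolved `Class<?>` (or a shim interface wrapping it) rather than importing the version-specific type directly.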
[jira] [Commented] (HIVE-8324) Shim KerberosName (causes build failure on hadoop-1)
[ https://issues.apache.org/jira/browse/HIVE-8324?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14157749#comment-14157749 ] Vaibhav Gumashta commented on HIVE-8324: I'll rebase this since HIVE-6799 is committed. Shim KerberosName (causes build failure on hadoop-1) Key: HIVE-8324 URL: https://issues.apache.org/jira/browse/HIVE-8324 Project: Hive Issue Type: Bug Affects Versions: 0.14.0 Reporter: Szehon Ho Assignee: Vaibhav Gumashta Priority: Blocker Fix For: 0.14.0 Attachments: HIVE-8324.1.patch -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-7787) Reading Parquet file with enum in Thrift Encoding throws NoSuchFieldError
[ https://issues.apache.org/jira/browse/HIVE-7787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Arup Malakar updated HIVE-7787: --- Attachment: HIVE-7787.trunk.1.patch Looks like {{ArrayWritableGroupConverter}} enforces that the struct should have either 1 or 2 elements. I am not sure of the rationale behind this, since a struct may have more than two elements. I did a quick patch to omit the check and handle any number of fields. I have tested it and it seems to be working for me for the schema in the description. Given there were explicit checks for the field count to be either 1 or 2, I am not sure if it is the right approach. Please take a look. Reading Parquet file with enum in Thrift Encoding throws NoSuchFieldError - Key: HIVE-7787 URL: https://issues.apache.org/jira/browse/HIVE-7787 Project: Hive Issue Type: Bug Components: Database/Schema, Thrift API Affects Versions: 0.12.0, 0.13.0, 0.12.1, 0.14.0, 0.13.1 Environment: Hive 0.12 CDH 5.1.0, Hadoop 2.3.0 CDH 5.1.0 Reporter: Raymond Lau Priority: Minor Attachments: HIVE-7787.trunk.1.patch When reading a Parquet file where the original Thrift schema contains a struct with an enum, the following error occurs (full stack trace below): {code} java.lang.NoSuchFieldError: DECIMAL. 
{code} Example Thrift Schema: {code} enum MyEnumType { EnumOne, EnumTwo, EnumThree } struct MyStruct { 1: optional MyEnumType myEnumType; 2: optional string field2; 3: optional string field3; } struct outerStruct { 1: optional list<MyStruct> myStructs } {code} Hive Table: {code} CREATE EXTERNAL TABLE mytable ( mystructs array<struct<myenumtype: string, field2: string, field3: string>> ) ROW FORMAT SERDE 'parquet.hive.serde.ParquetHiveSerDe' STORED AS INPUTFORMAT 'parquet.hive.DeprecatedParquetInputFormat' OUTPUTFORMAT 'parquet.hive.DeprecatedParquetOutputFormat' ; {code} Error Stack trace: {code} Java stack trace for Hive 0.12: Caused by: java.lang.NoSuchFieldError: DECIMAL at org.apache.hadoop.hive.ql.io.parquet.convert.ETypeConverter.getNewConverter(ETypeConverter.java:146) at org.apache.hadoop.hive.ql.io.parquet.convert.HiveGroupConverter.getConverterFromDescription(HiveGroupConverter.java:31) at org.apache.hadoop.hive.ql.io.parquet.convert.ArrayWritableGroupConverter.init(ArrayWritableGroupConverter.java:45) at org.apache.hadoop.hive.ql.io.parquet.convert.HiveGroupConverter.getConverterFromDescription(HiveGroupConverter.java:34) at org.apache.hadoop.hive.ql.io.parquet.convert.DataWritableGroupConverter.init(DataWritableGroupConverter.java:64) at org.apache.hadoop.hive.ql.io.parquet.convert.DataWritableGroupConverter.init(DataWritableGroupConverter.java:47) at org.apache.hadoop.hive.ql.io.parquet.convert.HiveGroupConverter.getConverterFromDescription(HiveGroupConverter.java:36) at org.apache.hadoop.hive.ql.io.parquet.convert.DataWritableGroupConverter.init(DataWritableGroupConverter.java:64) at org.apache.hadoop.hive.ql.io.parquet.convert.DataWritableGroupConverter.init(DataWritableGroupConverter.java:40) at org.apache.hadoop.hive.ql.io.parquet.convert.DataWritableRecordConverter.init(DataWritableRecordConverter.java:32) at org.apache.hadoop.hive.ql.io.parquet.read.DataWritableReadSupport.prepareForRead(DataWritableReadSupport.java:128) at 
parquet.hadoop.InternalParquetRecordReader.initialize(InternalParquetRecordReader.java:142) at parquet.hadoop.ParquetRecordReader.initializeInternalReader(ParquetRecordReader.java:118) at parquet.hadoop.ParquetRecordReader.initialize(ParquetRecordReader.java:107) at org.apache.hadoop.hive.ql.io.parquet.read.ParquetRecordReaderWrapper.init(ParquetRecordReaderWrapper.java:92) at org.apache.hadoop.hive.ql.io.parquet.read.ParquetRecordReaderWrapper.init(ParquetRecordReaderWrapper.java:66) at org.apache.hadoop.hive.ql.io.parquet.MapredParquetInputFormat.getRecordReader(MapredParquetInputFormat.java:51) at org.apache.hadoop.hive.ql.io.CombineHiveRecordReader.init(CombineHiveRecordReader.java:65) ... 16 more {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
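The patch described in the comment above removes the assumption that a struct has exactly 1 or 2 fields. As a hedged illustration of that idea only (hypothetical names; this is not Hive's actual ArrayWritableGroupConverter), the change amounts to sizing the per-field converter array from the schema instead of rejecting other arities:

```java
// Hypothetical sketch of the patch's idea: build one converter per struct
// field for any field count, rather than asserting the count is 1 or 2.
public class FieldCountDemo {
    interface Converter { String convert(String raw); }

    static Converter[] buildConverters(String[] fieldNames) {
        // A buggy version (assumed) would do:
        //   if (fieldNames.length < 1 || fieldNames.length > 2) throw ...
        Converter[] converters = new Converter[fieldNames.length]; // any arity
        for (int i = 0; i < fieldNames.length; i++) {
            final String name = fieldNames[i];
            converters[i] = raw -> name + "=" + raw; // stand-in per-field conversion
        }
        return converters;
    }

    public static void main(String[] args) {
        // A 3-field struct like MyStruct(myEnumType, field2, field3) now works.
        Converter[] cs = buildConverters(new String[]{"myEnumType", "field2", "field3"});
        System.out.println(cs.length);
    }
}
```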
[jira] [Updated] (HIVE-7787) Reading Parquet file with enum in Thrift Encoding throws NoSuchFieldError
[ https://issues.apache.org/jira/browse/HIVE-7787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Arup Malakar updated HIVE-7787: --- Fix Version/s: 0.14.0 Assignee: Arup Malakar Status: Patch Available (was: Open) Reading Parquet file with enum in Thrift Encoding throws NoSuchFieldError - Key: HIVE-7787 URL: https://issues.apache.org/jira/browse/HIVE-7787 Project: Hive Issue Type: Bug Components: Database/Schema, Thrift API Affects Versions: 0.13.1, 0.13.0, 0.12.0, 0.12.1, 0.14.0 Environment: Hive 0.12 CDH 5.1.0, Hadoop 2.3.0 CDH 5.1.0 Reporter: Raymond Lau Assignee: Arup Malakar Priority: Minor Fix For: 0.14.0 Attachments: HIVE-7787.trunk.1.patch -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-8225) CBO trunk merge: union11 test fails due to incorrect plan
[ https://issues.apache.org/jira/browse/HIVE-8225?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14157791#comment-14157791 ] Hive QA commented on HIVE-8225: --- {color:red}Overall{color}: -1 at least one tests failed Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12672621/HIVE-8225.4.patch {color:red}ERROR:{color} -1 due to 14 failed/errored test(s), 6541 tests executed *Failed tests:* {noformat} org.apache.hive.hcatalog.pig.TestHCatLoader.testColumnarStorePushdown[5] org.apache.hive.hcatalog.pig.TestHCatLoader.testConvertBooleanToInt[5] org.apache.hive.hcatalog.pig.TestHCatLoader.testGetInputBytes[5] org.apache.hive.hcatalog.pig.TestHCatLoader.testProjectionsBasic[5] org.apache.hive.hcatalog.pig.TestHCatLoader.testReadDataBasic[5] org.apache.hive.hcatalog.pig.TestHCatLoader.testReadDataPrimitiveTypes[5] org.apache.hive.hcatalog.pig.TestHCatLoader.testReadPartitionedBasic[5] org.apache.hive.hcatalog.pig.TestHCatLoader.testSchemaLoadBasic[5] org.apache.hive.hcatalog.pig.TestHCatLoader.testSchemaLoadComplex[5] org.apache.hive.hcatalog.pig.TestHCatLoader.testSchemaLoadPrimitiveTypes[5] org.apache.hive.hcatalog.pig.TestHCatStorer.testDynamicPartitioningMultiPartColsNoDataInDataNoSpec[5] org.apache.hive.hcatalog.pig.TestHCatStorer.testEmptyStore[5] org.apache.hive.hcatalog.pig.TestHCatStorer.testNoAlias[5] org.apache.hive.hcatalog.pig.TestHCatStorer.testPartitionPublish[5] {noformat} Test results: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/1096/testReport Console output: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/1096/console Test logs: http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-1096/ Messages: {noformat} Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing 
org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 14 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12672621 CBO trunk merge: union11 test fails due to incorrect plan - Key: HIVE-8225 URL: https://issues.apache.org/jira/browse/HIVE-8225 Project: Hive Issue Type: Bug Reporter: Sergey Shelukhin Assignee: Sergey Shelukhin Priority: Critical Fix For: 0.14.0 Attachments: HIVE-8225.1.patch, HIVE-8225.2.patch, HIVE-8225.3.patch, HIVE-8225.4.patch, HIVE-8225.inprogress.patch, HIVE-8225.inprogress.patch, HIVE-8225.patch The result changes as if the union didn't have count() inside. The issue can be fixed by using srcunion.value in the count outside the subquery (replacing count(1) with count(srcunion.value)). Otherwise, it looks like the count(1) node from the union-ed queries is not present in the AST at all, which might cause this result. -Interestingly, adding a group by to each query in the union produces a completely weird result (count(1) is 309 for each key, whereas it should be 1, and the logically incorrect value if the internal count is lost is 500)- Never mind, that groups by a table column called key, which is weird but is what Hive does -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-8193) Hook HiveServer2 dynamic service discovery with session time out
[ https://issues.apache.org/jira/browse/HIVE-8193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14157844#comment-14157844 ] Hive QA commented on HIVE-8193: --- {color:red}Overall{color}: -1 at least one tests failed Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12672639/HIVE-8193.1.patch {color:red}ERROR:{color} -1 due to 14 failed/errored test(s), 6541 tests executed *Failed tests:* {noformat} org.apache.hive.hcatalog.pig.TestHCatLoader.testColumnarStorePushdown[5] org.apache.hive.hcatalog.pig.TestHCatLoader.testConvertBooleanToInt[5] org.apache.hive.hcatalog.pig.TestHCatLoader.testGetInputBytes[5] org.apache.hive.hcatalog.pig.TestHCatLoader.testProjectionsBasic[5] org.apache.hive.hcatalog.pig.TestHCatLoader.testReadDataBasic[5] org.apache.hive.hcatalog.pig.TestHCatLoader.testReadDataPrimitiveTypes[5] org.apache.hive.hcatalog.pig.TestHCatLoader.testReadPartitionedBasic[5] org.apache.hive.hcatalog.pig.TestHCatLoader.testSchemaLoadBasic[5] org.apache.hive.hcatalog.pig.TestHCatLoader.testSchemaLoadComplex[5] org.apache.hive.hcatalog.pig.TestHCatLoader.testSchemaLoadPrimitiveTypes[5] org.apache.hive.hcatalog.pig.TestHCatStorer.testDynamicPartitioningMultiPartColsNoDataInDataNoSpec[5] org.apache.hive.hcatalog.pig.TestHCatStorer.testEmptyStore[5] org.apache.hive.hcatalog.pig.TestHCatStorer.testNoAlias[5] org.apache.hive.hcatalog.pig.TestHCatStorer.testPartitionPublish[5] {noformat} Test results: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/1097/testReport Console output: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/1097/console Test logs: http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-1097/ Messages: {noformat} Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing 
org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 14 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12672639 Hook HiveServer2 dynamic service discovery with session time out Key: HIVE-8193 URL: https://issues.apache.org/jira/browse/HIVE-8193 Project: Hive Issue Type: Bug Components: HiveServer2 Affects Versions: 0.14.0 Reporter: Vaibhav Gumashta Assignee: Vaibhav Gumashta Priority: Critical Fix For: 0.14.0 Attachments: HIVE-8193.1.patch For dynamic service discovery, if the HiveServer2 instance is removed from ZooKeeper, currently, on the last client close, the server shuts down. However, we need to ensure that this also happens when a session is closed on timeout and no current sessions exist on this instance of HiveServer2. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
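The description above boils down to one invariant: once the instance is deregistered from ZooKeeper, the server should stop when its open-session count reaches zero, whether the last session was closed by a client or by a timeout. A minimal sketch of that condition check, with hypothetical names (these are not HiveServer2's real fields or methods):

```java
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.atomic.AtomicInteger;

// Hedged sketch: route both the client-close and session-timeout paths
// through the same "deregistered and no sessions left" shutdown check.
public class DiscoveryShutdownDemo {
    final AtomicInteger openSessions = new AtomicInteger();
    final AtomicBoolean deregisteredFromZooKeeper = new AtomicBoolean();
    volatile boolean stopped;

    void closeSession()   { maybeStop(openSessions.decrementAndGet()); }
    void timeOutSession() { maybeStop(openSessions.decrementAndGet()); } // same path

    private void maybeStop(int remaining) {
        if (remaining == 0 && deregisteredFromZooKeeper.get()) {
            stopped = true; // stand-in for the real server shutdown
        }
    }

    public static void main(String[] args) {
        DiscoveryShutdownDemo s = new DiscoveryShutdownDemo();
        s.openSessions.set(2);
        s.deregisteredFromZooKeeper.set(true);
        s.closeSession();    // one session left: keep running
        s.timeOutSession();  // last session timed out: now shut down
        System.out.println(s.stopped);
    }
}
```

The bug class described in the issue is a timeout path that decrements the count without running `maybeStop`.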
[jira] [Created] (HIVE-8339) Job status not found after 100% succeeded mapreduce
Valera Chevtaev created HIVE-8339: - Summary: Job status not found after 100% succeeded mapreduce Key: HIVE-8339 URL: https://issues.apache.org/jira/browse/HIVE-8339 Project: Hive Issue Type: Bug Affects Versions: 0.13.1 Environment: Hadoop 2.4.0, Hive 0.13.1. Amazon EMR cluster of 9 i2.4xlarge nodes. 800+GB of data in HDFS. Reporter: Valera Chevtaev According to the logs, the job succeeded 100% for both map and reduce, but Hive was then unable to get the status of the job from the job history server. Hive logs: 2014-10-03 07:57:26,593 INFO [main]: exec.Task (SessionState.java:printInfo(536)) - 2014-10-03 07:57:26,593 Stage-1 map = 100%, reduce = 99%, Cumulative CPU 872541.02 sec 2014-10-03 07:57:47,447 INFO [main]: exec.Task (SessionState.java:printInfo(536)) - 2014-10-03 07:57:47,446 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 872566.55 sec 2014-10-03 07:57:48,710 INFO [main]: mapred.ClientServiceDelegate (ClientServiceDelegate.java:getProxy(273)) - Application state is completed. FinalApplicationStatus=SUCCEEDED. 
Redirecting to job history server 2014-10-03 07:57:48,716 ERROR [main]: exec.Task (SessionState.java:printError(545)) - Ended Job = job_1412263771568_0002 with exception 'java.io.IOException(Could not find status of job:job_1412263771568_0002)' java.io.IOException: Could not find status of job:job_1412263771568_0002 at org.apache.hadoop.hive.ql.exec.mr.HadoopJobExecHelper.progress(HadoopJobExecHelper.java:294) at org.apache.hadoop.hive.ql.exec.mr.HadoopJobExecHelper.progress(HadoopJobExecHelper.java:547) at org.apache.hadoop.hive.ql.exec.mr.ExecDriver.execute(ExecDriver.java:426) at org.apache.hadoop.hive.ql.exec.mr.MapRedTask.execute(MapRedTask.java:136) at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:153) at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:85) at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1503) at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1270) at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1088) at org.apache.hadoop.hive.ql.Driver.run(Driver.java:911) at org.apache.hadoop.hive.ql.Driver.run(Driver.java:901) at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:275) at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:227) at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:430) at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:366) at org.apache.hadoop.hive.cli.CliDriver.processReader(CliDriver.java:463) at org.apache.hadoop.hive.cli.CliDriver.processFile(CliDriver.java:479) at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:759) at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:697) at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:636) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at 
java.lang.reflect.Method.invoke(Method.java:606) at org.apache.hadoop.util.RunJar.main(RunJar.java:212) 2014-10-03 07:57:48,763 ERROR [main]: ql.Driver (SessionState.java:printError(545)) - FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask -- This message was sent by Atlassian JIRA (v6.3.4#6332)
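The stack trace above shows the lookup failing right after the redirect to the job history server, which can race with the job's records actually landing there. One mitigation (a hedged sketch only; the helper below is hypothetical and not what `HadoopJobExecHelper.progress` does today) is to retry the status lookup with a short backoff before declaring the job lost:

```java
import java.io.IOException;
import java.util.function.Supplier;

// Hedged sketch: retry a job-status lookup a few times, because the history
// server may briefly not know about a job that just completed.
public class JobStatusRetryDemo {
    static String fetchWithRetry(Supplier<String> lookup, int attempts) throws IOException {
        for (int i = 0; i < attempts; i++) {
            String status = lookup.get();       // stand-in for the client lookup
            if (status != null) return status;  // history server has the job
            try {
                Thread.sleep(100L * (i + 1));   // simple linear backoff
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                break;
            }
        }
        throw new IOException("Could not find status of job");
    }

    public static void main(String[] args) throws IOException {
        // Simulate the job appearing in the history server on the third attempt.
        int[] calls = {0};
        String s = fetchWithRetry(() -> ++calls[0] < 3 ? null : "SUCCEEDED", 5);
        System.out.println(s);
    }
}
```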
[jira] [Commented] (HIVE-5865) AvroDeserializer incorrectly assumes keys to Maps will always be of type 'org.apache.avro.util.Utf8'
[ https://issues.apache.org/jira/browse/HIVE-5865?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14157885#comment-14157885 ] Hive QA commented on HIVE-5865: --- {color:red}Overall{color}: -1 at least one tests failed Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12672645/HIVE-5865.2.patch {color:red}ERROR:{color} -1 due to 14 failed/errored test(s), 6542 tests executed *Failed tests:* {noformat} org.apache.hive.hcatalog.pig.TestHCatLoader.testColumnarStorePushdown[5] org.apache.hive.hcatalog.pig.TestHCatLoader.testConvertBooleanToInt[5] org.apache.hive.hcatalog.pig.TestHCatLoader.testGetInputBytes[5] org.apache.hive.hcatalog.pig.TestHCatLoader.testProjectionsBasic[5] org.apache.hive.hcatalog.pig.TestHCatLoader.testReadDataBasic[5] org.apache.hive.hcatalog.pig.TestHCatLoader.testReadDataPrimitiveTypes[5] org.apache.hive.hcatalog.pig.TestHCatLoader.testReadPartitionedBasic[5] org.apache.hive.hcatalog.pig.TestHCatLoader.testSchemaLoadBasic[5] org.apache.hive.hcatalog.pig.TestHCatLoader.testSchemaLoadComplex[5] org.apache.hive.hcatalog.pig.TestHCatLoader.testSchemaLoadPrimitiveTypes[5] org.apache.hive.hcatalog.pig.TestHCatStorer.testDynamicPartitioningMultiPartColsNoDataInDataNoSpec[5] org.apache.hive.hcatalog.pig.TestHCatStorer.testEmptyStore[5] org.apache.hive.hcatalog.pig.TestHCatStorer.testNoAlias[5] org.apache.hive.hcatalog.pig.TestHCatStorer.testPartitionPublish[5] {noformat} Test results: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/1099/testReport Console output: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/1099/console Test logs: http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-1099/ Messages: {noformat} Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing 
org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 14 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12672645 AvroDeserializer incorrectly assumes keys to Maps will always be of type 'org.apache.avro.util.Utf8' Key: HIVE-5865 URL: https://issues.apache.org/jira/browse/HIVE-5865 Project: Hive Issue Type: Bug Affects Versions: 0.11.0, 0.12.0, 0.13.0 Reporter: Ben Roling Attachments: HIVE-5865-v2.patch, HIVE-5865.2.patch, HIVE-5865.2.patch, HIVE-5865.patch AvroDeserializer.deserializeMap() incorrectly assumes the type of the keys will always be 'org.apache.avro.util.Utf8'. If the reader schema defines avro.java.string=String, this assumption does not hold, resulting in a ClassCastException. I think a simple fix would be to define 'mapDatum' with type Map<CharSequence,Object> instead of Map<Utf8,Object>. Assuming the key has the more general type of 'CharSequence' avoids the need to make an assumption of either String or Utf8. I discovered the issue when using Hive 0.11.0. Looking at the tags, it is also there in 0.12.0 and trunk: https://github.com/apache/hive/blob/99f5bfcdf64330d062a30c0c9d83be1fbee54c34/serde/src/java/org/apache/hadoop/hive/serde2/avro/AvroDeserializer.java#L313 The reason I saw this issue was because I pointed my Hive table to a schema file I populated based on pulling the schema from the SCHEMA$ attribute of an Avro generated Java class, and I used stringType=String in the configuration of the avro-maven-plugin when generating my Java classes. If I alter the schema my Hive table points to such that it doesn't have the avro.java.string attribute on my map type objects, then queries work fine; but if I leave those in, I get the ClassCastException any time I try to query the table. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
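The fix proposed in the report above hinges on treating map keys as the general `CharSequence` type. A small sketch of why that works (hypothetical names; this is not Hive's actual deserializeMap code): `CharSequence.toString()` is safe for both `Utf8` and `String` keys, so no downcast to either concrete type is needed.

```java
import java.util.HashMap;
import java.util.Map;

// Hedged sketch of the key-type issue: copy entries using Map<CharSequence,
// Object>, so both Utf8-style and String keys are handled uniformly.
public class MapKeyDemo {
    // Stand-in for a deserializeMap()-style copy using the general key type.
    static Map<String, Object> normalize(Map<CharSequence, Object> datum) {
        Map<String, Object> out = new HashMap<>();
        for (Map.Entry<CharSequence, Object> e : datum.entrySet()) {
            out.put(e.getKey().toString(), e.getValue()); // safe for Utf8 or String
        }
        return out;
    }

    public static void main(String[] args) {
        Map<CharSequence, Object> m = new HashMap<>();
        m.put("k1", 1);                    // String key (avro.java.string=String)
        m.put(new StringBuilder("k2"), 2); // non-String CharSequence, like Utf8
        System.out.println(normalize(m).get("k2"));
    }
}
```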
[jira] [Commented] (HIVE-8310) RetryingHMSHandler is not used when kerberos auth enabled
[ https://issues.apache.org/jira/browse/HIVE-8310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14157950#comment-14157950 ] Hive QA commented on HIVE-8310: --- {color:red}Overall{color}: -1 at least one tests failed Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12672666/HIVE-8310.1.patch {color:red}ERROR:{color} -1 due to 15 failed/errored test(s), 6540 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_schemeAuthority org.apache.hive.hcatalog.pig.TestHCatLoader.testColumnarStorePushdown[5] org.apache.hive.hcatalog.pig.TestHCatLoader.testConvertBooleanToInt[5] org.apache.hive.hcatalog.pig.TestHCatLoader.testGetInputBytes[5] org.apache.hive.hcatalog.pig.TestHCatLoader.testProjectionsBasic[5] org.apache.hive.hcatalog.pig.TestHCatLoader.testReadDataBasic[5] org.apache.hive.hcatalog.pig.TestHCatLoader.testReadDataPrimitiveTypes[5] org.apache.hive.hcatalog.pig.TestHCatLoader.testReadPartitionedBasic[5] org.apache.hive.hcatalog.pig.TestHCatLoader.testSchemaLoadBasic[5] org.apache.hive.hcatalog.pig.TestHCatLoader.testSchemaLoadComplex[5] org.apache.hive.hcatalog.pig.TestHCatLoader.testSchemaLoadPrimitiveTypes[5] org.apache.hive.hcatalog.pig.TestHCatStorer.testDynamicPartitioningMultiPartColsNoDataInDataNoSpec[5] org.apache.hive.hcatalog.pig.TestHCatStorer.testEmptyStore[5] org.apache.hive.hcatalog.pig.TestHCatStorer.testNoAlias[5] org.apache.hive.hcatalog.pig.TestHCatStorer.testPartitionPublish[5] {noformat} Test results: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/1100/testReport Console output: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/1100/console Test logs: http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-1100/ Messages: {noformat} Executing org.apache.hive.ptest.execution.PrepPhase Executing 
org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 15 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12672666 RetryingHMSHandler is not used when kerberos auth enabled - Key: HIVE-8310 URL: https://issues.apache.org/jira/browse/HIVE-8310 Project: Hive Issue Type: Bug Affects Versions: 0.13.0 Reporter: Thejas M Nair Assignee: Thejas M Nair Priority: Blocker Fix For: 0.14.0 Attachments: HIVE-8310.1.patch RetryingHMSHandler is not being used when kerberos auth is enabled, after the changes in HIVE-3255. The changes in HIVE-4996 also removed the lower-level retrying layer, RetryingRawStore. This means that in kerberos mode, retries are not done for database query failures. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
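For context on what goes missing here: a retrying handler in this spirit wraps the metastore handler so transient failures (for example, database hiccups) are retried transparently. The sketch below is a simplified, hypothetical illustration of that wrapper pattern using a JDK dynamic proxy; it is not RetryingHMSHandler's actual code, and the `Handler` interface is invented for the demo:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

// Hedged sketch: wrap a handler in a dynamic proxy that retries failed calls.
public class RetryingProxyDemo {
    interface Handler { String getTable(String name) throws Exception; }

    static Handler withRetries(Handler base, int maxAttempts) {
        InvocationHandler h = (proxy, method, args) -> {
            Exception last = null;
            for (int i = 0; i < maxAttempts; i++) {
                try {
                    return method.invoke(base, args);
                } catch (Exception e) {
                    last = e; // e.g. a transient database failure; retry
                }
            }
            throw last; // out of attempts: surface the last failure
        };
        return (Handler) Proxy.newProxyInstance(
                Handler.class.getClassLoader(), new Class<?>[]{Handler.class}, h);
    }

    public static void main(String[] args) throws Exception {
        int[] failures = {2}; // fail twice, then succeed
        Handler flaky = name -> {
            if (failures[0]-- > 0) throw new RuntimeException("transient");
            return "table:" + name;
        };
        System.out.println(withRetries(flaky, 3).getTable("t1"));
    }
}
```

The issue is that the kerberos code path constructed the raw handler directly, bypassing a wrapper like this, so every transient failure became a hard error.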
[jira] [Commented] (HIVE-5536) Incorrect Operation Name is passed to hookcontext
[ https://issues.apache.org/jira/browse/HIVE-5536?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14158034#comment-14158034 ] Hive QA commented on HIVE-5536: --- {color:red}Overall{color}: -1 at least one tests failed Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12672700/HIVE-5536.patch {color:red}ERROR:{color} -1 due to 14 failed/errored test(s), 6541 tests executed *Failed tests:* {noformat} org.apache.hive.hcatalog.pig.TestHCatLoader.testColumnarStorePushdown[5] org.apache.hive.hcatalog.pig.TestHCatLoader.testConvertBooleanToInt[5] org.apache.hive.hcatalog.pig.TestHCatLoader.testGetInputBytes[5] org.apache.hive.hcatalog.pig.TestHCatLoader.testProjectionsBasic[5] org.apache.hive.hcatalog.pig.TestHCatLoader.testReadDataBasic[5] org.apache.hive.hcatalog.pig.TestHCatLoader.testReadDataPrimitiveTypes[5] org.apache.hive.hcatalog.pig.TestHCatLoader.testReadPartitionedBasic[5] org.apache.hive.hcatalog.pig.TestHCatLoader.testSchemaLoadBasic[5] org.apache.hive.hcatalog.pig.TestHCatLoader.testSchemaLoadComplex[5] org.apache.hive.hcatalog.pig.TestHCatLoader.testSchemaLoadPrimitiveTypes[5] org.apache.hive.hcatalog.pig.TestHCatStorer.testDynamicPartitioningMultiPartColsNoDataInDataNoSpec[5] org.apache.hive.hcatalog.pig.TestHCatStorer.testEmptyStore[5] org.apache.hive.hcatalog.pig.TestHCatStorer.testNoAlias[5] org.apache.hive.hcatalog.pig.TestHCatStorer.testPartitionPublish[5] {noformat} Test results: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/1102/testReport Console output: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/1102/console Test logs: http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-1102/ Messages: {noformat} Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing 
org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 14 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12672700 Incorrect Operation Name is passed to hookcontext - Key: HIVE-5536 URL: https://issues.apache.org/jira/browse/HIVE-5536 Project: Hive Issue Type: Bug Components: HiveServer2 Affects Versions: 0.11.0, 0.12.0 Reporter: Shreepadma Venugopalan Assignee: Brock Noland Attachments: HIVE-5536.patch HS2 passes incorrect operation name to hookcontext. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-7789) Documentation for AccumuloStorageHandler
[ https://issues.apache.org/jira/browse/HIVE-7789?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14158075#comment-14158075 ] Josh Elser commented on HIVE-7789: -- Thanks, [~leftylev]! That's great. Documentation for AccumuloStorageHandler Key: HIVE-7789 URL: https://issues.apache.org/jira/browse/HIVE-7789 Project: Hive Issue Type: Task Components: Documentation Reporter: Josh Elser Assignee: Josh Elser Fix For: 0.14.0 HIVE-7068 introduces an AccumuloStorageHandler. We need to add documentation on its usage. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-6994) parquet-hive createArray strips null elements
[ https://issues.apache.org/jira/browse/HIVE-6994?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14158087#comment-14158087 ] Mickael Lacour commented on HIVE-6994: -- I'm working on this one. We have three kinds of input data that can throw an exception (for different reasons): * array with its first value null (NullPointerException): fixed * array with all empty fields (ParquetEncodingException: empty fields are illegal, the field should be omitted completely instead): still in progress, talking with the Parquet team about it * array with a few empty fields (no exception, just missing data): related to the previous one. I'll keep you posted. parquet-hive createArray strips null elements - Key: HIVE-6994 URL: https://issues.apache.org/jira/browse/HIVE-6994 Project: Hive Issue Type: Bug Affects Versions: 0.13.0, 0.14.0 Reporter: Justin Coffey Assignee: Justin Coffey Fix For: 0.14.0 Attachments: HIVE-6994-1.patch, HIVE-6994.2.patch, HIVE-6994.3.patch, HIVE-6994.3.patch, HIVE-6994.patch The createArray method in ParquetHiveSerDe strips null values from the resultant ArrayWritables. Tracked here as well: https://github.com/Parquet/parquet-mr/issues/377 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
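The difference between the buggy behavior (nulls silently dropped) and the fixed behavior (nulls kept in place) can be shown with a small stand-alone sketch; `createArrayStripping` and `createArrayPreserving` are hypothetical names for illustration, not the actual ParquetHiveSerDe code:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class NullArraySketch {

    // Buggy behavior: null elements are silently dropped, so readers see a
    // shorter array and cannot tell which positions held missing values.
    static List<Object> createArrayStripping(List<Object> src) {
        List<Object> out = new ArrayList<>();
        for (Object o : src) {
            if (o != null) {
                out.add(o);
            }
        }
        return out;
    }

    // Fixed behavior: nulls are kept in place, preserving both the length of
    // the array and the positions of the missing elements.
    static List<Object> createArrayPreserving(List<Object> src) {
        return new ArrayList<>(src);
    }

    public static void main(String[] args) {
        List<Object> data = Arrays.<Object>asList(1, null, 3);
        System.out.println(createArrayStripping(data));  // [1, 3]
        System.out.println(createArrayPreserving(data)); // [1, null, 3]
    }
}
```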
[jira] [Commented] (HIVE-5865) AvroDeserializer incorrectly assumes keys to Maps will always be of type 'org.apache.avro.util.Utf8'
[ https://issues.apache.org/jira/browse/HIVE-5865?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14158095#comment-14158095 ] Ben Roling commented on HIVE-5865: -- Hey [~brocknoland] - the HCatLoader tests failed again but, as I stated previously, those tests are failing without any of the changes from this JIRA. Is there anything more you want me to do on this? Are those test failures something someone else is already looking at? I would have to assume they are being seen in other builds as well. AvroDeserializer incorrectly assumes keys to Maps will always be of type 'org.apache.avro.util.Utf8' Key: HIVE-5865 URL: https://issues.apache.org/jira/browse/HIVE-5865 Project: Hive Issue Type: Bug Affects Versions: 0.11.0, 0.12.0, 0.13.0 Reporter: Ben Roling Attachments: HIVE-5865-v2.patch, HIVE-5865.2.patch, HIVE-5865.2.patch, HIVE-5865.patch AvroDeserializer.deserializeMap() incorrectly assumes the type of the keys will always be 'org.apache.avro.util.Utf8'. If the reader schema defines avro.java.string=String, this assumption does not hold, resulting in a ClassCastException. I think a simple fix would be to define 'mapDatum' with type Map<CharSequence, Object> instead of Map<Utf8, Object>. Assuming the key has the more general type of 'CharSequence' avoids the need to make an assumption of either String or Utf8. I discovered the issue when using Hive 0.11.0. Looking at the tags, it is also there in 0.12.0 and trunk: https://github.com/apache/hive/blob/99f5bfcdf64330d062a30c0c9d83be1fbee54c34/serde/src/java/org/apache/hadoop/hive/serde2/avro/AvroDeserializer.java#L313 The reason I saw this issue was because I pointed my Hive table to a schema file I populated based on pulling the schema from the SCHEMA$ attribute of an Avro-generated Java class, and I used stringType=String in the configuration of the avro-maven-plugin when generating my Java classes. 
If I alter the schema my Hive table points to such that it doesn't have the avro.java.string attribute on my map type objects then queries work fine but if I leave those in there I get the ClassCastException anytime I try to query the table. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
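The suggested `Map<CharSequence, Object>` generalization can be illustrated with a self-contained sketch. `FakeUtf8` here is a toy stand-in for `org.apache.avro.util.Utf8` (a CharSequence that is not a `java.lang.String`), and `deserializeMap` is a simplified illustration, not the actual AvroDeserializer code:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class MapKeySketch {

    // Toy stand-in for org.apache.avro.util.Utf8: a CharSequence that is not
    // a java.lang.String, mirroring what Avro returns by default.
    static final class FakeUtf8 implements CharSequence {
        private final String s;
        FakeUtf8(String s) { this.s = s; }
        public int length() { return s.length(); }
        public char charAt(int i) { return s.charAt(i); }
        public CharSequence subSequence(int a, int b) { return s.subSequence(a, b); }
        public String toString() { return s; }
    }

    // Declaring the map with CharSequence keys accepts both String keys
    // (produced when avro.java.string=String is in the reader schema) and
    // Utf8-style keys, which is the generalization proposed in this issue.
    static Map<CharSequence, Object> deserializeMap(Map<CharSequence, Object> datum) {
        Map<CharSequence, Object> out = new LinkedHashMap<>();
        for (Map.Entry<CharSequence, Object> e : datum.entrySet()) {
            // Normalize whatever CharSequence we got into a plain String key.
            out.put(e.getKey().toString(), e.getValue());
        }
        return out;
    }

    public static void main(String[] args) {
        Map<CharSequence, Object> mixed = new LinkedHashMap<>();
        mixed.put(new FakeUtf8("a"), 1);
        mixed.put("b", 2); // a String key would break a Map<Utf8, Object> cast
        System.out.println(deserializeMap(mixed)); // {a=1, b=2}
    }
}
```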
[jira] [Commented] (HIVE-8337) Change default of hive.warehouse.subdir.inherit.perms to true
[ https://issues.apache.org/jira/browse/HIVE-8337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14158128#comment-14158128 ] Hive QA commented on HIVE-8337: --- {color:red}Overall{color}: -1 at least one tests failed Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12672702/HIVE-8337.patch {color:red}ERROR:{color} -1 due to 14 failed/errored test(s), 6541 tests executed *Failed tests:* {noformat} org.apache.hive.hcatalog.pig.TestHCatLoader.testColumnarStorePushdown[5] org.apache.hive.hcatalog.pig.TestHCatLoader.testConvertBooleanToInt[5] org.apache.hive.hcatalog.pig.TestHCatLoader.testGetInputBytes[5] org.apache.hive.hcatalog.pig.TestHCatLoader.testProjectionsBasic[5] org.apache.hive.hcatalog.pig.TestHCatLoader.testReadDataBasic[5] org.apache.hive.hcatalog.pig.TestHCatLoader.testReadDataPrimitiveTypes[5] org.apache.hive.hcatalog.pig.TestHCatLoader.testReadPartitionedBasic[5] org.apache.hive.hcatalog.pig.TestHCatLoader.testSchemaLoadBasic[5] org.apache.hive.hcatalog.pig.TestHCatLoader.testSchemaLoadComplex[5] org.apache.hive.hcatalog.pig.TestHCatLoader.testSchemaLoadPrimitiveTypes[5] org.apache.hive.hcatalog.pig.TestHCatStorer.testDynamicPartitioningMultiPartColsNoDataInDataNoSpec[5] org.apache.hive.hcatalog.pig.TestHCatStorer.testEmptyStore[5] org.apache.hive.hcatalog.pig.TestHCatStorer.testNoAlias[5] org.apache.hive.hcatalog.pig.TestHCatStorer.testPartitionPublish[5] {noformat} Test results: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/1103/testReport Console output: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/1103/console Test logs: http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-1103/ Messages: {noformat} Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing 
org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 14 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12672702 Change default of hive.warehouse.subdir.inherit.perms to true - Key: HIVE-8337 URL: https://issues.apache.org/jira/browse/HIVE-8337 Project: Hive Issue Type: Improvement Affects Versions: 0.14.0 Reporter: Brock Noland Assignee: Brock Noland Attachments: HIVE-8337.patch In my experience users want {{hive.warehouse.subdir.inherit.perms}} set to true since they want permissions to be inherited from the parent directory. Let's set the default value to true. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
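For reference, the setting this issue proposes to default to true is controlled per-deployment with the standard hive-site.xml property syntax (a sketch of the configuration, not part of the patch):

```xml
<property>
  <!-- When true, subdirectories created under the warehouse directory
       inherit the permissions of their parent directory. -->
  <name>hive.warehouse.subdir.inherit.perms</name>
  <value>true</value>
</property>
```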
[jira] [Updated] (HIVE-8330) HiveResultSet.findColumn() parameters are case sensitive
[ https://issues.apache.org/jira/browse/HIVE-8330?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergio Peña updated HIVE-8330: -- Attachment: HIVE-8330.3.patch Attached a new patch with small changes to TestJdbcDriver2.java. HiveResultSet.findColumn() parameters are case sensitive Key: HIVE-8330 URL: https://issues.apache.org/jira/browse/HIVE-8330 Project: Hive Issue Type: Bug Affects Versions: 0.13.1 Reporter: Sergio Peña Assignee: Sergio Peña Attachments: HIVE-8330.1.patch, HIVE-8330.2.patch, HIVE-8330.3.patch Look at the following code: {noformat}
Class.forName("org.apache.hive.jdbc.HiveDriver");
Connection db = null;
Statement stmt = null;
ResultSet rs = null;
try {
    db = DriverManager.getConnection("jdbc:hive2://localhost:1/default", "hive", "");
    stmt = db.createStatement();
    rs = stmt.executeQuery("SELECT * FROM sample_07 limit 1");
    ResultSetMetaData metaData = rs.getMetaData();
    for (int i = 1; i <= metaData.getColumnCount(); i++) {
        System.out.println("Column " + i + ": " + metaData.getColumnName(i));
    }
    while (rs.next()) {
        System.out.println(rs.findColumn("code"));
    }
} finally {
    DbUtils.closeQuietly(db, stmt, rs);
}
{noformat} The above program generates the following result on my cluster: {noformat}
Column 1: code
Column 2: description
Column 3: total_emp
Column 4: salary
1
{noformat} However, if the last print statement is changed as follows (using uppercase characters): {noformat}
System.out.println(rs.findColumn("Code"));
{noformat} the program will fail at exactly that line. The same happens if the column name is changed to "CODE". Based on the JDBC ResultSet documentation, this method should be case insensitive: "Column names used as input to getter methods are case insensitive" http://docs.oracle.com/javase/7/docs/api/java/sql/ResultSet.html -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-8330) HiveResultSet.findColumn() parameters are case sensitive
[ https://issues.apache.org/jira/browse/HIVE-8330?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergio Peña updated HIVE-8330: -- Status: Patch Available (was: Open) HiveResultSet.findColumn() parameters are case sensitive Key: HIVE-8330 URL: https://issues.apache.org/jira/browse/HIVE-8330 Project: Hive Issue Type: Bug Affects Versions: 0.13.1 Reporter: Sergio Peña Assignee: Sergio Peña Attachments: HIVE-8330.1.patch, HIVE-8330.2.patch, HIVE-8330.3.patch -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-8330) HiveResultSet.findColumn() parameters are case sensitive
[ https://issues.apache.org/jira/browse/HIVE-8330?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergio Peña updated HIVE-8330: -- Status: Open (was: Patch Available) HiveResultSet.findColumn() parameters are case sensitive Key: HIVE-8330 URL: https://issues.apache.org/jira/browse/HIVE-8330 Project: Hive Issue Type: Bug Affects Versions: 0.13.1 Reporter: Sergio Peña Assignee: Sergio Peña Attachments: HIVE-8330.1.patch, HIVE-8330.2.patch, HIVE-8330.3.patch -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-8330) HiveResultSet.findColumn() parameters are case sensitive
[ https://issues.apache.org/jira/browse/HIVE-8330?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14158171#comment-14158171 ] Brock Noland commented on HIVE-8330: This looks great! Just one minor issue and I am +1 pending tests. This does not log the stack trace: {noformat}
String msg = "Unexpected exception: " + e;
LOG.info(msg);
{noformat} we should change it to: {noformat}
String msg = "Unexpected exception: " + e;
LOG.info(msg, e);
{noformat} HiveResultSet.findColumn() parameters are case sensitive Key: HIVE-8330 URL: https://issues.apache.org/jira/browse/HIVE-8330 Project: Hive Issue Type: Bug Affects Versions: 0.13.1 Reporter: Sergio Peña Assignee: Sergio Peña Attachments: HIVE-8330.1.patch, HIVE-8330.2.patch, HIVE-8330.3.patch -- This message was sent by Atlassian JIRA (v6.3.4#6332)
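A common way to implement the case-insensitive lookup that java.sql.ResultSet requires is a linear scan with equalsIgnoreCase; the sketch below illustrates the idea only and is not necessarily the approach taken in the attached patches (`FindColumnSketch` is a hypothetical class):

```java
import java.sql.SQLException;
import java.util.Arrays;
import java.util.List;

public class FindColumnSketch {
    private final List<String> columnNames;

    FindColumnSketch(List<String> columnNames) {
        this.columnNames = columnNames;
    }

    // JDBC column indexes are 1-based; names are compared case-insensitively,
    // as the java.sql.ResultSet contract requires for column labels.
    int findColumn(String columnLabel) throws SQLException {
        for (int i = 0; i < columnNames.size(); i++) {
            if (columnNames.get(i).equalsIgnoreCase(columnLabel)) {
                return i + 1;
            }
        }
        throw new SQLException("Could not find " + columnLabel + " in the result set");
    }

    public static void main(String[] args) throws SQLException {
        FindColumnSketch rs = new FindColumnSketch(
            Arrays.asList("code", "description", "total_emp", "salary"));
        System.out.println(rs.findColumn("code")); // 1
        System.out.println(rs.findColumn("Code")); // 1
        System.out.println(rs.findColumn("CODE")); // 1
    }
}
```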
[jira] [Updated] (HIVE-8330) HiveResultSet.findColumn() parameters are case sensitive
[ https://issues.apache.org/jira/browse/HIVE-8330?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergio Peña updated HIVE-8330: -- Status: Open (was: Patch Available) HiveResultSet.findColumn() parameters are case sensitive Key: HIVE-8330 URL: https://issues.apache.org/jira/browse/HIVE-8330 Project: Hive Issue Type: Bug Affects Versions: 0.13.1 Reporter: Sergio Peña Assignee: Sergio Peña Attachments: HIVE-8330.1.patch, HIVE-8330.2.patch, HIVE-8330.3.patch -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-5865) AvroDeserializer incorrectly assumes keys to Maps will always be of type 'org.apache.avro.util.Utf8'
[ https://issues.apache.org/jira/browse/HIVE-5865?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14158173#comment-14158173 ] Brock Noland commented on HIVE-5865: Those tests are failing due to: https://issues.apache.org/jira/browse/HIVE-8271?focusedCommentId=14157581&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14157581 I will commit this today. Thanks!! AvroDeserializer incorrectly assumes keys to Maps will always be of type 'org.apache.avro.util.Utf8' Key: HIVE-5865 URL: https://issues.apache.org/jira/browse/HIVE-5865 Project: Hive Issue Type: Bug Affects Versions: 0.11.0, 0.12.0, 0.13.0 Reporter: Ben Roling Attachments: HIVE-5865-v2.patch, HIVE-5865.2.patch, HIVE-5865.2.patch, HIVE-5865.patch -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-8330) HiveResultSet.findColumn() parameters are case sensitive
[ https://issues.apache.org/jira/browse/HIVE-8330?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergio Peña updated HIVE-8330: -- Attachment: HIVE-8330.4.patch HiveResultSet.findColumn() parameters are case sensitive Key: HIVE-8330 URL: https://issues.apache.org/jira/browse/HIVE-8330 Project: Hive Issue Type: Bug Affects Versions: 0.13.1 Reporter: Sergio Peña Assignee: Sergio Peña Attachments: HIVE-8330.1.patch, HIVE-8330.2.patch, HIVE-8330.3.patch, HIVE-8330.4.patch -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-8330) HiveResultSet.findColumn() parameters are case sensitive
[ https://issues.apache.org/jira/browse/HIVE-8330?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergio Peña updated HIVE-8330: -- Status: Patch Available (was: Open) HiveResultSet.findColumn() parameters are case sensitive Key: HIVE-8330 URL: https://issues.apache.org/jira/browse/HIVE-8330 Project: Hive Issue Type: Bug Affects Versions: 0.13.1 Reporter: Sergio Peña Assignee: Sergio Peña Attachments: HIVE-8330.1.patch, HIVE-8330.2.patch, HIVE-8330.3.patch, HIVE-8330.4.patch -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-7954) Investigate query failures (3)
[ https://issues.apache.org/jira/browse/HIVE-7954?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brock Noland updated HIVE-7954: --- Environment: (was: I ran all q-file tests and the following failed with an exception: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/HIVE-SPARK-ALL-TESTS-Build/lastCompletedBuild/testReport/ we don't necessary want to run all these tests as part of the spark tests, but we should understand why they failed with an exception. This JIRA is to look into these failures and document them with one of: * New JIRA * Covered under existing JIRA * More investigation required Tests: {noformat} org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_root_dir_external_table 0.28 sec2 org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_union_view 12 sec 2 org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_groupby_complex_types 1.5 sec 2 org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_groupby_multi_insert_common_distinct 3.9 sec 2 org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_authorization_admin_almighty2 2.6 sec 2 org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_quotedid_smb 3.2 sec 2 org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_input201.5 sec 2 org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_dbtxnmgr_showlocks 0.23 sec2 org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_bucketsortoptimize_insert_5 9.9 sec 2 org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_schemeAuthority 0.54 sec2 org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_bucket51.9 sec 2 org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_load_fs2 0.83 sec2 org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_lock4 4.3 sec 2 org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_exim_14_managed_location_over_existing 1 sec 2 org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_udf_in_file 0.73 sec2 
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_lock1 0.92 sec2 org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_mi 1.9 sec 2 org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_nullformatdir 1 sec 2 org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_exim_13_managed_location 3.4 sec 2 org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_import_exported_table 2.6 sec 2 org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_correlationoptimizer8 10 sec 2 org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_authorization_create_macro1 2.5 sec 2 org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_stats4 2.5 sec 2 org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_exim_11_managed_external 0.99 sec2 org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_groupby_complex_types_multi_single_reducer 8.2 sec 2 org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_nullgroup5 1.2 sec 2 org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_auto_sortmerge_join_5 9.9 sec 2 org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_lock3 4.2 sec 2 org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_ppd_union_view 4.1 sec 2 org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_sample10 2.5 sec 2 org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_rename_external_partition_location 2 sec 2 org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_remote_script 0.35 sec2 org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_exim_12_external_location 1 sec 2 org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_load_dyn_part1 6.4 sec 2 org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_authorization_insert 3.6 sec 2 org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_newline4.2 sec 2 org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_file_with_header_footer 2.7 sec 2 
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_smb_mapjoin_17 10 sec 2 org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_table_access_keys_stats 6.2 sec 2 org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_multi_insert_lateral_view {noformat}) Investigate query failures (3) -- Key: HIVE-7954 URL: https://issues.apache.org/jira/browse/HIVE-7954 Project: Hive Issue Type: Sub-task Components: Spark Reporter: Brock Noland -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-7954) Investigate query failures (3)
[ https://issues.apache.org/jira/browse/HIVE-7954?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brock Noland updated HIVE-7954: --- Description: I ran all q-file tests and the following failed with an exception: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/HIVE-SPARK-ALL-TESTS-Build/lastCompletedBuild/testReport/ We don't necessarily want to run all these tests as part of the Spark tests, but we should understand why they failed with an exception. This JIRA is to look into these failures and document them with one of: * New JIRA * Covered under existing JIRA * More investigation required Tests: the same list of TestSparkCliDriver tests given in the Environment field above. Investigate query failures (3) -- Key: HIVE-7954 URL: https://issues.apache.org/jira/browse/HIVE-7954 Project: Hive Issue Type: Sub-task Components: Spark Reporter: Brock Noland I ran all q-file tests and the following failed with an exception:
[jira] [Updated] (HIVE-7955) Investigate query failures (4)
[ https://issues.apache.org/jira/browse/HIVE-7955?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brock Noland updated HIVE-7955: --- Description: I ran all q-file tests and the following failed with an exception: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/HIVE-SPARK-ALL-TESTS-Build/lastCompletedBuild/testReport/ We don't necessarily want to run all these tests as part of the spark tests, but we should understand why they failed with an exception. This JIRA is to look into these failures and document them with one of: * New JIRA * Covered under existing JIRA * More investigation required Tests: {noformat}
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_dynpart_sort_optimization 12 sec 2
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_schemeAuthority2 0.23 sec 2
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_load_dyn_part8 10 sec 2
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_auto_sortmerge_join_4 11 sec 2
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_orc_analyze 8 sec 2
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_tez_join_hash 0.98 sec 2
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_hook_context_cs 2.1 sec 2
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_insert_overwrite_local_directory_1 3.7 sec 2
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_archive_excludeHadoop20 27 sec 2
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_auto_sortmerge_join_9 8.2 sec 2
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_limit_partition_metadataonly 0.77 sec 2
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_bucket_num_reducers2 7 sec 2
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_groupby_bigdata 0.6 sec 2
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_bucketsortoptimize_insert_6 6.6 sec 2
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_smb_mapjoin_25 2.6 sec 2
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_dbtxnmgr_query3 0.48 sec 2
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_auto_sortmerge_join_16 8.5 sec 2
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_empty_dir_in_table 2.6 sec 2
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_input33 1.3 sec 2
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_authorization_admin_almighty1 2.8 sec 2
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_udf_context_aware 0.23 sec 2
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_authorization_view_sqlstd 4.1 sec 2
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_smb_mapjoin_12 {noformat} Investigate query failures (4) -- Key: HIVE-7955 URL: https://issues.apache.org/jira/browse/HIVE-7955 Project: Hive Issue Type: Sub-task Components: Spark Reporter: Brock Noland I ran all q-file tests and the following failed with an exception: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/HIVE-SPARK-ALL-TESTS-Build/lastCompletedBuild/testReport/ We don't necessarily want to run all these tests as part of the spark tests, but we should understand why they failed with an exception.
This JIRA is to look into these failures and document them with one of: * New JIRA * Covered under existing JIRA * More investigation required Tests: {noformat}
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_dynpart_sort_optimization 12 sec 2
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_schemeAuthority2 0.23 sec 2
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_load_dyn_part8 10 sec 2
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_auto_sortmerge_join_4 11 sec 2
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_orc_analyze 8 sec 2
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_tez_join_hash 0.98 sec 2
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_hook_context_cs 2.1 sec 2
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_insert_overwrite_local_directory_1 3.7 sec 2
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_archive_excludeHadoop20 27 sec 2
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_auto_sortmerge_join_9 8.2 sec 2
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_limit_partition_metadataonly 0.77 sec 2
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_bucket_num_reducers2 7 sec 2
[jira] [Updated] (HIVE-7955) Investigate query failures (4)
[ https://issues.apache.org/jira/browse/HIVE-7955?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brock Noland updated HIVE-7955: --- Environment: (was: I ran all q-file tests and the following failed with an exception: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/HIVE-SPARK-ALL-TESTS-Build/lastCompletedBuild/testReport/ We don't necessarily want to run all these tests as part of the spark tests, but we should understand why they failed with an exception. This JIRA is to look into these failures and document them with one of: * New JIRA * Covered under existing JIRA * More investigation required Tests: {noformat}
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_dynpart_sort_optimization 12 sec 2
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_schemeAuthority2 0.23 sec 2
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_load_dyn_part8 10 sec 2
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_auto_sortmerge_join_4 11 sec 2
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_orc_analyze 8 sec 2
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_tez_join_hash 0.98 sec 2
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_hook_context_cs 2.1 sec 2
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_insert_overwrite_local_directory_1 3.7 sec 2
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_archive_excludeHadoop20 27 sec 2
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_auto_sortmerge_join_9 8.2 sec 2
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_limit_partition_metadataonly 0.77 sec 2
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_bucket_num_reducers2 7 sec 2
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_groupby_bigdata 0.6 sec 2
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_bucketsortoptimize_insert_6 6.6 sec 2
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_smb_mapjoin_25 2.6 sec 2
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_dbtxnmgr_query3 0.48 sec 2
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_auto_sortmerge_join_16 8.5 sec 2
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_empty_dir_in_table 2.6 sec 2
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_input33 1.3 sec 2
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_authorization_admin_almighty1 2.8 sec 2
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_udf_context_aware 0.23 sec 2
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_authorization_view_sqlstd 4.1 sec 2
org.apache.hadoop.hive.cli.TestSparkCliDriver.testCliDriver_smb_mapjoin_12 {noformat}) Investigate query failures (4) -- Key: HIVE-7955 URL: https://issues.apache.org/jira/browse/HIVE-7955 Project: Hive Issue Type: Sub-task Components: Spark Reporter: Brock Noland -- This message was sent by Atlassian JIRA (v6.3.4#6332)
Re: Review Request 26299: HIVE-5536 - Incorrect Operation Name is passed to hookcontext
--- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/26299/ --- (Updated Oct. 3, 2014, 5:24 p.m.) Review request for hive and Mohit Sabharwal. Changes --- Updated based on feedback. Repository: hive-git Description --- Adds the operation name to the Query Plan, with tests to ensure it's correct. I updated the test a little as well since it was quite hard to debug a failure. Diffs (updated) - itests/hive-minikdc/src/test/java/org/apache/hive/minikdc/TestHs2HooksWithMiniKdc.java 99026b0 itests/hive-unit/src/test/java/org/apache/hadoop/hive/hooks/TestHs2Hooks.java 49b9994 ql/src/java/org/apache/hadoop/hive/ql/Driver.java 5b36f71 ql/src/java/org/apache/hadoop/hive/ql/QueryPlan.java 85d599a ql/src/java/org/apache/hadoop/hive/ql/hooks/HookContext.java 49c095a ql/src/test/org/apache/hadoop/hive/ql/parse/TestUpdateDeleteSemanticAnalyzer.java 01e3635 Diff: https://reviews.apache.org/r/26299/diff/ Testing --- Thanks, Brock Noland
[jira] [Resolved] (HIVE-8271) Jackson incompatibility between hadoop-2.4 and hive-14
[ https://issues.apache.org/jira/browse/HIVE-8271?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gopal V resolved HIVE-8271. --- Resolution: Won't Fix Hadoop Flags: Incompatible change Reverted on trunk and the hive-14 branch. There is no way to fix this problem as it exists today, because Jackson is used in public API signatures within hive-exec.jar. Jackson incompatibility between hadoop-2.4 and hive-14 -- Key: HIVE-8271 URL: https://issues.apache.org/jira/browse/HIVE-8271 Project: Hive Issue Type: Bug Components: UDF Affects Versions: 0.14.0 Reporter: Gopal V Assignee: Gopal V Priority: Blocker Fix For: 0.14.0 Attachments: HIVE-8271.1.patch jackson-1.8 is not API compatible with jackson-1.9 (abstract classes). {code}
threw an Error. Shutting down now...
java.lang.AbstractMethodError: org.codehaus.jackson.map.AnnotationIntrospector.findSerializer(Lorg/codehaus/jackson/map/introspect/Annotated;)Ljava/lang/Object;
{code}
hadoop-common (2.4) depends on jackson-1.8 and hive-14 depends on jackson-1.9. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
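An AbstractMethodError like the one above only surfaces when the missing 1.9 method is first invoked, which can be deep inside query execution. A minimal sketch (hypothetical, not Hive code) of a startup probe that uses reflection to detect this kind of binary incompatibility early; the `BinaryCompatProbe` class name is illustrative, and JDK classes are used as probe targets so the sketch is self-contained:

```java
import java.lang.reflect.Method;

// Hypothetical startup guard (not Hive code): fail fast with a clear message
// when a method the code was compiled against is missing at runtime, instead
// of dying later with an opaque AbstractMethodError mid-query.
public class BinaryCompatProbe {
    /** Returns true if className is loadable and declares a public method with the given name. */
    static boolean hasMethod(String className, String methodName) {
        try {
            for (Method m : Class.forName(className).getMethods()) {
                if (m.getName().equals(methodName)) {
                    return true;
                }
            }
            return false;
        } catch (ClassNotFoundException e) {
            // Class itself is absent from the classpath.
            return false;
        }
    }

    public static void main(String[] args) {
        // With Jackson on the classpath one would probe the method named in
        // the stack trace, e.g.:
        //   hasMethod("org.codehaus.jackson.map.AnnotationIntrospector", "findSerializer")
        // Demonstrated here with JDK classes:
        System.out.println(hasMethod("java.lang.String", "isEmpty"));
        System.out.println(hasMethod("java.lang.String", "noSuchMethod"));
    }
}
```

Such a probe cannot fix the incompatibility (the real resolution here was reverting the Jackson upgrade), but it turns a mid-query crash into an actionable classpath error at startup.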
[jira] [Commented] (HIVE-7957) Revisit event version handling in dynamic partition pruning on Tez
[ https://issues.apache.org/jira/browse/HIVE-7957?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14158220#comment-14158220 ] Hive QA commented on HIVE-7957: --- {color:red}Overall{color}: -1 at least one test failed Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12672718/HIVE-7957.3.patch {color:red}ERROR:{color} -1 due to 14 failed/errored test(s), 6541 tests executed *Failed tests:* {noformat}
org.apache.hive.hcatalog.pig.TestHCatLoader.testColumnarStorePushdown[5]
org.apache.hive.hcatalog.pig.TestHCatLoader.testConvertBooleanToInt[5]
org.apache.hive.hcatalog.pig.TestHCatLoader.testGetInputBytes[5]
org.apache.hive.hcatalog.pig.TestHCatLoader.testProjectionsBasic[5]
org.apache.hive.hcatalog.pig.TestHCatLoader.testReadDataBasic[5]
org.apache.hive.hcatalog.pig.TestHCatLoader.testReadDataPrimitiveTypes[5]
org.apache.hive.hcatalog.pig.TestHCatLoader.testReadPartitionedBasic[5]
org.apache.hive.hcatalog.pig.TestHCatLoader.testSchemaLoadBasic[5]
org.apache.hive.hcatalog.pig.TestHCatLoader.testSchemaLoadComplex[5]
org.apache.hive.hcatalog.pig.TestHCatLoader.testSchemaLoadPrimitiveTypes[5]
org.apache.hive.hcatalog.pig.TestHCatStorer.testDynamicPartitioningMultiPartColsNoDataInDataNoSpec[5]
org.apache.hive.hcatalog.pig.TestHCatStorer.testEmptyStore[5]
org.apache.hive.hcatalog.pig.TestHCatStorer.testNoAlias[5]
org.apache.hive.hcatalog.pig.TestHCatStorer.testPartitionPublish[5]
{noformat} Test results: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/1104/testReport Console output: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/1104/console Test logs: http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-1104/ Messages: {noformat} Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing
org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 14 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12672718 Revisit event version handling in dynamic partition pruning on Tez -- Key: HIVE-7957 URL: https://issues.apache.org/jira/browse/HIVE-7957 Project: Hive Issue Type: Bug Reporter: Gunther Hagleitner Assignee: Gunther Hagleitner Priority: Critical Fix For: 0.14.0 Attachments: HIVE-7957.1.patch, HIVE-7957.2.patch, HIVE-7957.3.patch Once TEZ-1447 is resolved, we should be able to simplify the handling of event versions. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-8335) TestHCatLoader/TestHCatStorer failures on pre-commit tests
[ https://issues.apache.org/jira/browse/HIVE-8335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14158228#comment-14158228 ] Gopal V commented on HIVE-8335: --- HIVE-8271 reverted. Sorry about that - the timing between the two commits makes me worry whether I'm checking in anything else with conflicts. TestHCatLoader/TestHCatStorer failures on pre-commit tests -- Key: HIVE-8335 URL: https://issues.apache.org/jira/browse/HIVE-8335 Project: Hive Issue Type: Bug Components: HCatalog, Tests Reporter: Jason Dere Looks like a number of Hive pre-commit tests have been failing with the following failures: {noformat}
org.apache.hive.hcatalog.pig.TestHCatLoader.testSchemaLoadBasic[5]
org.apache.hive.hcatalog.pig.TestHCatLoader.testConvertBooleanToInt[5]
org.apache.hive.hcatalog.pig.TestHCatLoader.testReadDataPrimitiveTypes[5]
org.apache.hive.hcatalog.pig.TestHCatLoader.testSchemaLoadComplex[5]
org.apache.hive.hcatalog.pig.TestHCatLoader.testColumnarStorePushdown[5]
org.apache.hive.hcatalog.pig.TestHCatLoader.testGetInputBytes[5]
org.apache.hive.hcatalog.pig.TestHCatStorer.testNoAlias[5]
org.apache.hive.hcatalog.pig.TestHCatStorer.testEmptyStore[5]
org.apache.hive.hcatalog.pig.TestHCatStorer.testDynamicPartitioningMultiPartColsNoDataInDataNoSpec[5]
org.apache.hive.hcatalog.pig.TestHCatStorer.testPartitionPublish[5]
org.apache.hive.hcatalog.pig.TestHCatLoader.testSchemaLoadPrimitiveTypes[5]
org.apache.hive.hcatalog.pig.TestHCatLoader.testReadDataBasic[5]
org.apache.hive.hcatalog.pig.TestHCatLoader.testReadPartitionedBasic[5]
org.apache.hive.hcatalog.pig.TestHCatLoader.testProjectionsBasic[5]
{noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-8322) VectorReduceSinkOperator: ClassCastException: ~StandardUnionObjectInspector$StandardUnion cannot be cast to ~IntWritable
[ https://issues.apache.org/jira/browse/HIVE-8322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14158232#comment-14158232 ] Vikram Dixit K commented on HIVE-8322: -- +1 for 0.14 VectorReduceSinkOperator: ClassCastException: ~StandardUnionObjectInspector$StandardUnion cannot be cast to ~IntWritable Key: HIVE-8322 URL: https://issues.apache.org/jira/browse/HIVE-8322 Project: Hive Issue Type: Bug Components: Tez, Vectorization Affects Versions: 0.14.0 Reporter: Matt McCline Assignee: Matt McCline Priority: Critical Fix For: 0.14.0 Attachments: HIVE-8322.01.patch, HIVE-8322.02.patch, HIVE-8322.03.patch, HIVE-8322.04.patch Some queries with count(distinct(..)) fail now in VectorReduceSinkOperator. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-5536) Incorrect Operation Name is passed to hookcontext
[ https://issues.apache.org/jira/browse/HIVE-5536?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brock Noland updated HIVE-5536: --- Attachment: HIVE-5536.2.patch Incorrect Operation Name is passed to hookcontext - Key: HIVE-5536 URL: https://issues.apache.org/jira/browse/HIVE-5536 Project: Hive Issue Type: Bug Components: HiveServer2 Affects Versions: 0.11.0, 0.12.0 Reporter: Shreepadma Venugopalan Assignee: Brock Noland Attachments: HIVE-5536.2.patch, HIVE-5536.patch HS2 passes incorrect operation name to hookcontext. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-8337) Change default of hive.warehouse.subdir.inherit.perms to true
[ https://issues.apache.org/jira/browse/HIVE-8337?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brock Noland updated HIVE-8337: --- Attachment: HIVE-8337.2.patch Thank you [~leftylev]!! I have updated the patch. Your feedback is always appreciated! Change default of hive.warehouse.subdir.inherit.perms to true - Key: HIVE-8337 URL: https://issues.apache.org/jira/browse/HIVE-8337 Project: Hive Issue Type: Improvement Affects Versions: 0.14.0 Reporter: Brock Noland Assignee: Brock Noland Attachments: HIVE-8337.2.patch, HIVE-8337.patch In my experience users want {{hive.warehouse.subdir.inherit.perms}} set to true since they want permissions to be inherited from the parent directory. Let's set the default value to true. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HIVE-8340) HiveServer2 service doesn't stop backend jvm process, which prevents follow-up service start.
Xiaobing Zhou created HIVE-8340: --- Summary: HiveServer2 service doesn't stop backend jvm process, which prevents follow-up service start. Key: HIVE-8340 URL: https://issues.apache.org/jira/browse/HIVE-8340 Project: Hive Issue Type: Bug Components: HiveServer2 Affects Versions: 0.14.0 Environment: Windows Reporter: Xiaobing Zhou Assignee: Xiaobing Zhou On stopping the HS2 service from the services tab, it only kills the root process and does not kill the child java process. As a result resources are not freed and this throws an error on restarting from command line. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
Re: Review Request 26299: HIVE-5536 - Incorrect Operation Name is passed to hookcontext
--- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/26299/#review55366 --- Ship it! Ship It! - Mohit Sabharwal On Oct. 3, 2014, 5:24 p.m., Brock Noland wrote: --- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/26299/ --- (Updated Oct. 3, 2014, 5:24 p.m.) Review request for hive and Mohit Sabharwal. Repository: hive-git Description --- Adds the operation name to the Query Plan, with tests to ensure it's correct. I updated the test a little as well since it was quite hard to debug a failure. Diffs - itests/hive-minikdc/src/test/java/org/apache/hive/minikdc/TestHs2HooksWithMiniKdc.java 99026b0 itests/hive-unit/src/test/java/org/apache/hadoop/hive/hooks/TestHs2Hooks.java 49b9994 ql/src/java/org/apache/hadoop/hive/ql/Driver.java 5b36f71 ql/src/java/org/apache/hadoop/hive/ql/QueryPlan.java 85d599a ql/src/java/org/apache/hadoop/hive/ql/hooks/HookContext.java 49c095a ql/src/test/org/apache/hadoop/hive/ql/parse/TestUpdateDeleteSemanticAnalyzer.java 01e3635 Diff: https://reviews.apache.org/r/26299/diff/ Testing --- Thanks, Brock Noland
[jira] [Commented] (HIVE-7733) Ambiguous column reference error on query
[ https://issues.apache.org/jira/browse/HIVE-7733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14158250#comment-14158250 ] Ashutosh Chauhan commented on HIVE-7733: I agree. ambiguous_col.q should fail under the uniqueness assumption, which I think is a valid assumption to have. I also tested the queries listed in that test case on MySQL and they failed with an "ambiguous column" error. Also, given that the changes introduced in HIVE-2723 (where this test case was added) were not consistent with Hive itself (HIVE-3882), I think we should not allow such ambiguity in queries. Let's move the ambiguous_col.q test case to the negative tests. [~navis] Would you like to rebase this patch? Let's get this one in. Ambiguous column reference error on query - Key: HIVE-7733 URL: https://issues.apache.org/jira/browse/HIVE-7733 Project: Hive Issue Type: Bug Affects Versions: 0.13.0 Reporter: Jason Dere Assignee: Navis Attachments: HIVE-7733.1.patch.txt, HIVE-7733.2.patch.txt, HIVE-7733.3.patch.txt, HIVE-7733.4.patch.txt {noformat}
CREATE TABLE agg1 ( col0 INT, col1 STRING, col2 DOUBLE );
explain SELECT single_use_subq11.a1 AS a1, single_use_subq11.a2 AS a2
FROM (SELECT Sum(agg1.col2) AS a1 FROM agg1 GROUP BY agg1.col0) single_use_subq12
JOIN (SELECT alias.a2 AS a0, alias.a1 AS a1, alias.a1 AS a2
      FROM (SELECT agg1.col1 AS a0, '42' AS a1, agg1.col0 AS a2 FROM agg1
            UNION ALL
            SELECT agg1.col1 AS a0, '41' AS a1, agg1.col0 AS a2 FROM agg1) alias
      GROUP BY alias.a2, alias.a1) single_use_subq11
ON ( single_use_subq11.a0 = single_use_subq11.a0 );
{noformat} Gets the following error: FAILED: SemanticException [Error 10007]: Ambiguous column reference a2 Looks like this query had been working in 0.12 but started failing with this error in 0.13 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-8322) VectorReduceSinkOperator: ClassCastException: ~StandardUnionObjectInspector$StandardUnion cannot be cast to ~IntWritable
[ https://issues.apache.org/jira/browse/HIVE-8322?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14158256#comment-14158256 ] Prasanth J commented on HIVE-8322: -- Committed to branch 0.14. Thanks [~vikram.dixit] and [~mmccline]. VectorReduceSinkOperator: ClassCastException: ~StandardUnionObjectInspector$StandardUnion cannot be cast to ~IntWritable Key: HIVE-8322 URL: https://issues.apache.org/jira/browse/HIVE-8322 Project: Hive Issue Type: Bug Components: Tez, Vectorization Affects Versions: 0.14.0 Reporter: Matt McCline Assignee: Matt McCline Priority: Critical Fix For: 0.14.0 Attachments: HIVE-8322.01.patch, HIVE-8322.02.patch, HIVE-8322.03.patch, HIVE-8322.04.patch Some queries with count(distinct(..)) fail now in VectorReduceSinkOperator. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-8322) VectorReduceSinkOperator: ClassCastException: ~StandardUnionObjectInspector$StandardUnion cannot be cast to ~IntWritable
[ https://issues.apache.org/jira/browse/HIVE-8322?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Prasanth J updated HIVE-8322: - Resolution: Fixed Status: Resolved (was: Patch Available) VectorReduceSinkOperator: ClassCastException: ~StandardUnionObjectInspector$StandardUnion cannot be cast to ~IntWritable Key: HIVE-8322 URL: https://issues.apache.org/jira/browse/HIVE-8322 Project: Hive Issue Type: Bug Components: Tez, Vectorization Affects Versions: 0.14.0 Reporter: Matt McCline Assignee: Matt McCline Priority: Critical Fix For: 0.14.0 Attachments: HIVE-8322.01.patch, HIVE-8322.02.patch, HIVE-8322.03.patch, HIVE-8322.04.patch Some queries with count(distinct(..)) fail now in VectorReduceSinkOperator. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-7957) Revisit event version handling in dynamic partition pruning on Tez
[ https://issues.apache.org/jira/browse/HIVE-7957?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gunther Hagleitner updated HIVE-7957: - Resolution: Fixed Status: Resolved (was: Patch Available) Committed to trunk and hive 0.14 branch. Revisit event version handling in dynamic partition pruning on Tez -- Key: HIVE-7957 URL: https://issues.apache.org/jira/browse/HIVE-7957 Project: Hive Issue Type: Bug Reporter: Gunther Hagleitner Assignee: Gunther Hagleitner Priority: Critical Fix For: 0.14.0 Attachments: HIVE-7957.1.patch, HIVE-7957.2.patch, HIVE-7957.3.patch Once TEZ-1447 is resolved, we should be able to simplify the handling of event versions. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-8114) Type resolution for udf arguments of Decimal Type results in error
[ https://issues.apache.org/jira/browse/HIVE-8114?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jason Dere updated HIVE-8114: - Release Note: Removes mixed double/decimal versions of log(). log() will be resolved to one of the following versions, using type conversion where necessary: double log(double base, double a) double log(decimal base, decimal a) Type resolution for udf arguments of Decimal Type results in error -- Key: HIVE-8114 URL: https://issues.apache.org/jira/browse/HIVE-8114 Project: Hive Issue Type: Bug Components: Query Processor, Types Affects Versions: 0.13.0, 0.13.1 Reporter: Ashutosh Chauhan Assignee: Jason Dere Labels: TODOC14 Fix For: 0.14.0 Attachments: HIVE-8114.1.patch {code} select log (2, 10.5BD) from src; {code} results in exception. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
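The release note above describes the resolution rule: mixed double/decimal calls to log() resolve to either the all-double or the all-decimal overload, converting arguments where necessary. A simplified sketch of that rule in plain Java (not Hive's actual FunctionRegistry code; the class and method names are illustrative, and double is used for the decimal overload's result to keep the sketch short):

```java
import java.math.BigDecimal;

// Illustration of the overload-resolution rule from the release note above
// (not Hive code): if either argument is a decimal, dispatch to the decimal
// overload; otherwise dispatch to the double overload, converting as needed.
public class LogResolver {
    static double logDouble(double base, double a) {
        return Math.log(a) / Math.log(base);
    }

    static double logDecimal(BigDecimal base, BigDecimal a) {
        // Hive's decimal version would return a decimal; simplified here.
        return Math.log(a.doubleValue()) / Math.log(base.doubleValue());
    }

    /** Dispatch on argument types, widening mixed double/decimal calls. */
    static double log(Object base, Object a) {
        if (base instanceof BigDecimal || a instanceof BigDecimal) {
            return logDecimal(toDecimal(base), toDecimal(a));
        }
        return logDouble(((Number) base).doubleValue(), ((Number) a).doubleValue());
    }

    static BigDecimal toDecimal(Object o) {
        return (o instanceof BigDecimal) ? (BigDecimal) o
                                         : BigDecimal.valueOf(((Number) o).doubleValue());
    }

    public static void main(String[] args) {
        // The failing query's call shape from the description: log(2, 10.5BD)
        System.out.println(LogResolver.log(2.0, new BigDecimal("10.5")));
    }
}
```

The point of the rule is that a mixed `log(2, 10.5BD)` no longer falls into an ambiguous double/decimal overload set; it is widened to a single well-defined signature.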
[jira] [Updated] (HIVE-8331) HIVE-8303 followup, investigate result diff [Spark Branch]
[ https://issues.apache.org/jira/browse/HIVE-8331?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chao updated HIVE-8331: --- Attachment: HIVE-8331.2-spark.patch I cannot find a good and clean solution for this. This patch requires a small change on the Tez end. HIVE-8303 followup, investigate result diff [Spark Branch] -- Key: HIVE-8331 URL: https://issues.apache.org/jira/browse/HIVE-8331 Project: Hive Issue Type: Task Components: Spark Reporter: Xuefu Zhang Assignee: Chao Attachments: HIVE-8331.1-spark.patch, HIVE-8331.2-spark.patch The HIVE-8303 patch introduced some result diffs for some spark tests. We need to investigate those, including parallel_join0.q, union22.q, vectorized_shufflejoin.q, union_remove_18.q, and maybe more. The investigation also includes the test failures related to spark. Specifically, union_remove_18.q demonstrated random order. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-8340) HiveServer2 service doesn't stop backend jvm process, which prevents follow-up service start.
[ https://issues.apache.org/jira/browse/HIVE-8340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14158281#comment-14158281 ] Xiaobing Zhou commented on HIVE-8340: - Here are comments from years ago explaining why the grand-children processes are not killed (http://bugs.java.com/bugdatabase/view_bug.do?bug_id=4770092), but I guess there should be some fixes; I will dig more. The following excellent SDN comment explains what's going on: - The fundamental problem here is that, unlike Unix, Windows does not maintain parent-child relationships between processes. A process can kill its own immediate children, but unless you make other arrangements to obtain the information, it can't kill any 'grand-children' because it has no way of finding them. Ctrl-C typed at a command prompt is just a character that the command processor interprets, not a signal sent from outside. When you 'destroy' a child command script, that process does not get the opportunity to terminate any child processes it may know about. Recent versions of Windows (2000 or later) do provide a Job concept which acts as a container for processes. Killing a Job does terminate all processes associated with that job. However Jobs do not contain other jobs, so fully emulating the Unix behaviour is probably impossible. - Note that Unix emulation environments on Windows, like Cygwin, suffer from the same problem. Any fix would be difficult. Even if we could figure out how to fix this, we might choose not to do so for the usual reason – compatibility. 2005-09-27 HiveServer2 service doesn't stop backend jvm process, which prevents follow-up service start.
- Key: HIVE-8340 URL: https://issues.apache.org/jira/browse/HIVE-8340 Project: Hive Issue Type: Bug Components: HiveServer2 Affects Versions: 0.14.0 Environment: Windows Reporter: Xiaobing Zhou Assignee: Xiaobing Zhou On stopping the HS2 service from the services tab, it only kills the root process and does not kill the child java process. As a result resources are not freed and this throws an error on restarting from command line. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
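The quoted comment predates Java 9; on modern JVMs, `ProcessHandle` exposes the process tree portably on both Unix and Windows, so a known child's descendants can be enumerated and destroyed. A sketch of that approach (illustrative only, not the HIVE-8340 fix, which instead removed the cmd.exe wrapper); the `sleep` child stands in for the backend JVM, and on Windows a different long-running command would be needed:

```java
import java.util.concurrent.TimeUnit;

// Sketch (Java 9+): kill a child process and all of its currently known
// descendants via ProcessHandle. This is the modern, portable answer to the
// "no way of finding grand-children" limitation described in the bug comment.
public class KillTree {
    /** Forcibly destroys the process and every descendant it currently has. */
    static void destroyTree(ProcessHandle root) {
        // Kill descendants first so they cannot outlive (or be orphaned by)
        // their parent's termination.
        root.descendants().forEach(ProcessHandle::destroyForcibly);
        root.destroyForcibly();
    }

    public static void main(String[] args) throws Exception {
        // A long-running child standing in for the HS2 backend JVM
        // ("sleep" is Unix-only; substitute a suitable command on Windows).
        Process child = new ProcessBuilder("sleep", "60").start();
        System.out.println("alive before: " + child.isAlive());
        destroyTree(child.toHandle());
        child.waitFor(10, TimeUnit.SECONDS);
        System.out.println("alive after: " + child.isAlive());
    }
}
```

For JVMs stuck on Java 8, the practical alternatives remain platform-specific: Windows Job objects (what the quoted comment alludes to) or avoiding intermediate wrapper processes entirely, which is what the attached patch does.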
[jira] [Commented] (HIVE-8340) HiveServer2 service doesn't stop backend jvm process, which prevents follow-up service start.
[ https://issues.apache.org/jira/browse/HIVE-8340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14158286#comment-14158286 ] Xiaobing Zhou commented on HIVE-8340: - Removed the cmd.exe process that wrapped the JVM process. This fixed the issue. HiveServer2 service doesn't stop backend jvm process, which prevents follow-up service start. - Key: HIVE-8340 URL: https://issues.apache.org/jira/browse/HIVE-8340 Project: Hive Issue Type: Bug Components: HiveServer2 Affects Versions: 0.14.0 Environment: Windows Reporter: Xiaobing Zhou Assignee: Xiaobing Zhou On stopping the HS2 service from the services tab, it only kills the root process and does not kill the child java process. As a result resources are not freed and this throws an error on restarting from command line. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-8340) HiveServer2 service doesn't stop backend jvm process, which prevents follow-up service start.
[ https://issues.apache.org/jira/browse/HIVE-8340?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaobing Zhou updated HIVE-8340: Attachment: HIVE-8340.1.patch Made a patch. Can anyone in the watcher list review it? Thanks! HiveServer2 service doesn't stop backend jvm process, which prevents follow-up service start. - Key: HIVE-8340 URL: https://issues.apache.org/jira/browse/HIVE-8340 Project: Hive Issue Type: Bug Components: HiveServer2 Affects Versions: 0.14.0 Environment: Windows Reporter: Xiaobing Zhou Assignee: Xiaobing Zhou Attachments: HIVE-8340.1.patch On stopping the HS2 service from the services tab, it only kills the root process and does not kill the child java process. As a result resources are not freed and this throws an error on restarting from command line. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
Review Request 26321: HiveServer2 service doesn't stop backend jvm process, which prevents follow-up service start.
--- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/26321/ --- Review request for hive. Repository: hive-git Description --- On stopping the HS2 service from the services tab, it only kills the root process and does not kill the child java process. As a result resources are not freed and this throws an error on restarting from command line. Diffs - bin/ext/hiveserver2.cmd a5f3bb5 bin/hive.cmd c2e9853 common/src/java/org/apache/hadoop/hive/conf/HiveConf.java 3fe67b2 ql/src/java/org/apache/hadoop/hive/ql/exec/mr/MapredLocalTask.java 79da5a0 Diff: https://reviews.apache.org/r/26321/diff/ Testing --- Thanks, XIAOBING ZHOU
[jira] [Updated] (HIVE-8340) HiveServer2 service doesn't stop backend jvm process, which prevents follow-up service start.
[ https://issues.apache.org/jira/browse/HIVE-8340?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vaibhav Gumashta updated HIVE-8340: --- Fix Version/s: 0.14.0 HiveServer2 service doesn't stop backend jvm process, which prevents follow-up service start. - Key: HIVE-8340 URL: https://issues.apache.org/jira/browse/HIVE-8340 Project: Hive Issue Type: Bug Components: HiveServer2 Affects Versions: 0.14.0 Environment: Windows Reporter: Xiaobing Zhou Assignee: Xiaobing Zhou Fix For: 0.14.0 Attachments: HIVE-8340.1.patch On stopping the HS2 service from the services tab, it only kills the root process and does not kill the child java process. As a result resources are not freed and this throws an error on restarting from command line. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-8340) HiveServer2 service doesn't stop backend jvm process, which prevents follow-up service start.
[ https://issues.apache.org/jira/browse/HIVE-8340?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vaibhav Gumashta updated HIVE-8340: --- Priority: Critical (was: Major) HiveServer2 service doesn't stop backend jvm process, which prevents follow-up service start. - Key: HIVE-8340 URL: https://issues.apache.org/jira/browse/HIVE-8340 Project: Hive Issue Type: Bug Components: HiveServer2 Affects Versions: 0.14.0 Environment: Windows Reporter: Xiaobing Zhou Assignee: Xiaobing Zhou Priority: Critical Fix For: 0.14.0 Attachments: HIVE-8340.1.patch On stopping the HS2 service from the services tab, it only kills the root process and does not kill the child java process. As a result resources are not freed and this throws an error on restarting from command line. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-8340) HiveServer2 service doesn't stop backend jvm process, which prevents follow-up service start.
[ https://issues.apache.org/jira/browse/HIVE-8340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14158322#comment-14158322 ] Vaibhav Gumashta commented on HIVE-8340: Making this critical since it is required for Windows.
[jira] [Updated] (HIVE-8324) Shim KerberosName (causes build failure on hadoop-1)
[ https://issues.apache.org/jira/browse/HIVE-8324?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vaibhav Gumashta updated HIVE-8324: --- Attachment: HIVE-8324.2.patch Patch with shimming done post HIVE-6799. Shim KerberosName (causes build failure on hadoop-1) Key: HIVE-8324 URL: https://issues.apache.org/jira/browse/HIVE-8324 Project: Hive Issue Type: Bug Affects Versions: 0.14.0 Reporter: Szehon Ho Assignee: Vaibhav Gumashta Priority: Blocker Fix For: 0.14.0 Attachments: HIVE-8324.1.patch, HIVE-8324.2.patch Unfortunately even after HIVE-8265, there are still more compile failures. {code} [ERROR] Failed to execute goal org.apache.maven.plugins:maven-compiler-plugin:3.1:compile (default-compile) on project hive-service: Compilation failure: Compilation failure: [ERROR] /Users/szehon/svn-repos/trunk/service/src/java/org/apache/hive/service/cli/thrift/ThriftHttpServlet.java:[35,54] cannot find symbol [ERROR] symbol: class KerberosName [ERROR] location: package org.apache.hadoop.security.authentication.util [ERROR] /Users/szehon/svn-repos/trunk/service/src/java/org/apache/hive/service/cli/thrift/ThriftHttpServlet.java:[241,7] cannot find symbol [ERROR] symbol: class KerberosName [ERROR] location: class org.apache.hive.service.cli.thrift.ThriftHttpServlet.HttpKerberosServerAction [ERROR] /Users/szehon/svn-repos/trunk/service/src/java/org/apache/hive/service/cli/thrift/ThriftHttpServlet.java:[241,43] cannot find symbol [ERROR] symbol: class KerberosName [ERROR] location: class org.apache.hive.service.cli.thrift.ThriftHttpServlet.HttpKerberosServerAction [ERROR] /Users/szehon/svn-repos/trunk/service/src/java/org/apache/hive/service/cli/thrift/ThriftHttpServlet.java:[252,7] cannot find symbol [ERROR] symbol: class KerberosName [ERROR] location: class org.apache.hive.service.cli.thrift.ThriftHttpServlet.HttpKerberosServerAction [ERROR] /Users/szehon/svn-repos/trunk/service/src/java/org/apache/hive/service/cli/thrift/ThriftHttpServlet.java:[252,43] 
cannot find symbol [ERROR] symbol: class KerberosName [ERROR] location: class org.apache.hive.service.cli.thrift.ThriftHttpServlet.HttpKerberosServerAction {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
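To make the shimming concrete: the pattern HIVE-8324 applies is to hide the version-specific KerberosName class behind a version-neutral interface that each Hadoop shim implements, so service code compiles against both hadoop-1 and hadoop-2. The sketch below is a hypothetical stand-in, not the actual HadoopShims API; the interface and class names are invented for illustration.

```java
// Hypothetical sketch of the shim pattern: callers depend only on a
// version-neutral interface; each per-Hadoop-version shim wraps whatever
// KerberosName class that Hadoop line ships.
interface KerberosNameShim {
    String getShortName() throws Exception;
}

// A real hadoop-2 shim would delegate to
// org.apache.hadoop.security.authentication.util.KerberosName; this stand-in
// splits the principal itself so the example is self-contained.
class StandInKerberosNameShim implements KerberosNameShim {
    private final String principal;

    StandInKerberosNameShim(String principal) {
        this.principal = principal;
    }

    @Override
    public String getShortName() {
        // "user/host@REALM" -> "user"
        return principal.split("[/@]")[0];
    }
}

public class ShimDemo {
    public static void main(String[] args) throws Exception {
        // Callers such as ThriftHttpServlet only ever see the interface, so
        // the "cannot find symbol" failures above go away on hadoop-1.
        KerberosNameShim name = new StandInKerberosNameShim("hive/host1@EXAMPLE.COM");
        System.out.println(name.getShortName()); // prints "hive"
    }
}
```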
Re: Review Request 26277: Shim KerberosName (causes build failure on hadoop-1)
--- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/26277/ --- (Updated Oct. 3, 2014, 6:39 p.m.) Review request for hive, Szehon Ho and Thejas Nair. Bugs: HIVE-8324 https://issues.apache.org/jira/browse/HIVE-8324 Repository: hive-git Description --- https://issues.apache.org/jira/browse/HIVE-8324 Diffs (updated) - service/src/java/org/apache/hive/service/auth/HiveAuthFactory.java 83dd2e6 service/src/java/org/apache/hive/service/cli/thrift/ThriftHttpServlet.java 312d05e shims/0.20/src/main/java/org/apache/hadoop/hive/shims/Hadoop20Shims.java a353a46 shims/0.20S/src/main/java/org/apache/hadoop/hive/shims/Hadoop20SShims.java 030cb75 shims/0.23/src/main/java/org/apache/hadoop/hive/shims/Hadoop23Shims.java 0731108 shims/common/src/main/java/org/apache/hadoop/hive/shims/HadoopShims.java 4fcaa1e Diff: https://reviews.apache.org/r/26277/diff/ Testing --- Thanks, Vaibhav Gumashta
Re: Review Request 26277: Shim KerberosName (causes build failure on hadoop-1)
--- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/26277/ --- (Updated Oct. 3, 2014, 6:39 p.m.) Review request for hive, dilli dorai, Szehon Ho, and Thejas Nair. Bugs: HIVE-8324 https://issues.apache.org/jira/browse/HIVE-8324 Repository: hive-git Description --- https://issues.apache.org/jira/browse/HIVE-8324 Diffs - service/src/java/org/apache/hive/service/auth/HiveAuthFactory.java 83dd2e6 service/src/java/org/apache/hive/service/cli/thrift/ThriftHttpServlet.java 312d05e shims/0.20/src/main/java/org/apache/hadoop/hive/shims/Hadoop20Shims.java a353a46 shims/0.20S/src/main/java/org/apache/hadoop/hive/shims/Hadoop20SShims.java 030cb75 shims/0.23/src/main/java/org/apache/hadoop/hive/shims/Hadoop23Shims.java 0731108 shims/common/src/main/java/org/apache/hadoop/hive/shims/HadoopShims.java 4fcaa1e Diff: https://reviews.apache.org/r/26277/diff/ Testing --- Thanks, Vaibhav Gumashta
[jira] [Commented] (HIVE-8324) Shim KerberosName (causes build failure on hadoop-1)
[ https://issues.apache.org/jira/browse/HIVE-8324?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14158332#comment-14158332 ] Vaibhav Gumashta commented on HIVE-8324: [~szehon] I've updated the rb with the new patch. No major changes - if your +1 stands, I'll commit this on precommit pass.
[jira] [Commented] (HIVE-8324) Shim KerberosName (causes build failure on hadoop-1)
[ https://issues.apache.org/jira/browse/HIVE-8324?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14158336#comment-14158336 ] Szehon Ho commented on HIVE-8324: - Yep, I took a look, +1
[jira] [Commented] (HIVE-5536) Incorrect Operation Name is passed to hookcontext
[ https://issues.apache.org/jira/browse/HIVE-5536?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14158349#comment-14158349 ] Szehon Ho commented on HIVE-5536: - Hi Brock, I was just curious, can HookContext just directly return SessionState.get().getCommandType() for that method? Incorrect Operation Name is passed to hookcontext - Key: HIVE-5536 URL: https://issues.apache.org/jira/browse/HIVE-5536 Project: Hive Issue Type: Bug Components: HiveServer2 Affects Versions: 0.11.0, 0.12.0 Reporter: Shreepadma Venugopalan Assignee: Brock Noland Attachments: HIVE-5536.2.patch, HIVE-5536.patch HS2 passes incorrect operation name to hookcontext. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-8337) Change default of hive.warehouse.subdir.inherit.perms to true
[ https://issues.apache.org/jira/browse/HIVE-8337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14158351#comment-14158351 ] Szehon Ho commented on HIVE-8337: - Seems fine with me, +1 Change default of hive.warehouse.subdir.inherit.perms to true - Key: HIVE-8337 URL: https://issues.apache.org/jira/browse/HIVE-8337 Project: Hive Issue Type: Improvement Affects Versions: 0.14.0 Reporter: Brock Noland Assignee: Brock Noland Attachments: HIVE-8337.2.patch, HIVE-8337.patch In my experience users want {{hive.warehouse.subdir.inherit.perms}} set to true since they want permissions to be inherited from the parent directory. Let's set the default value to true. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
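The inheritance semantics behind {{hive.warehouse.subdir.inherit.perms}} amount to copying the parent directory's permission bits onto newly created warehouse subdirectories. The sketch below is a hypothetical illustration using java.nio.file (POSIX systems only) so it runs standalone; Hive itself does this through Hadoop's FileSystem API, and the method names here are invented.

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.attribute.PosixFilePermission;
import java.util.Set;

// Hypothetical illustration of warehouse permission inheritance, using
// java.nio.file rather than Hadoop's FileSystem API.
public class InheritPerms {

    // Create a child directory and copy the parent's permission bits onto it,
    // which is what subdirectory inheritance amounts to.
    static Path createInheriting(Path parent, String name) throws Exception {
        Path child = Files.createDirectory(parent.resolve(name));
        Set<PosixFilePermission> parentPerms = Files.getPosixFilePermissions(parent);
        Files.setPosixFilePermissions(child, parentPerms);
        return child;
    }

    public static void main(String[] args) throws Exception {
        Path parent = Files.createTempDirectory("warehouse");
        Path child = createInheriting(parent, "tbl");
        // The child now carries exactly the parent's permission bits.
        System.out.println(Files.getPosixFilePermissions(child)
                .equals(Files.getPosixFilePermissions(parent)));
    }
}
```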
[jira] [Commented] (HIVE-7960) Upgrade to Hadoop 2.5
[ https://issues.apache.org/jira/browse/HIVE-7960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14158369#comment-14158369 ] Vikram Dixit K commented on HIVE-7960: -- +1 for 0.14 Upgrade to Hadoop 2.5 - Key: HIVE-7960 URL: https://issues.apache.org/jira/browse/HIVE-7960 Project: Hive Issue Type: Task Reporter: Brock Noland Assignee: Gunther Hagleitner Attachments: HIVE-7960.1.patch Tracking JIRA for upgrading to 2.5 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
Re: Review Request 26321: HiveServer2 service doesn't stop backend jvm process, which prevents follow-up service start.
--- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/26321/#review55378 --- bin/ext/hiveserver2.cmd https://reviews.apache.org/r/26321/#comment95742 I don't think the -hiveconf parameters should be part of the cmd. I see that you are trying to override the default value of some of these, but I think it is more appropriate to do that at the admin level (i.e. whichever admin service is managing the startup of Hive). common/src/java/org/apache/hadoop/hive/conf/HiveConf.java https://reviews.apache.org/r/26321/#comment95740 Can you add a description for this new param (explaining why it is required)? ql/src/java/org/apache/hadoop/hive/ql/exec/mr/MapredLocalTask.java https://reviews.apache.org/r/26321/#comment95741 Can you add a comment here explaining the intent of using HIVE_HADOOP_CLASSPATH? - Vaibhav Gumashta
[jira] [Commented] (HIVE-8340) HiveServer2 service doesn't stop backend jvm process, which prevents follow-up service start.
[ https://issues.apache.org/jira/browse/HIVE-8340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14158384#comment-14158384 ] Vaibhav Gumashta commented on HIVE-8340: [~xiaobingo] Posted some comments on rb. Thanks for the patch.
[jira] [Commented] (HIVE-7960) Upgrade to Hadoop 2.5
[ https://issues.apache.org/jira/browse/HIVE-7960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14158386#comment-14158386 ] Hive QA commented on HIVE-7960: --- {color:red}Overall{color}: -1 at least one tests failed Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12672720/HIVE-7960.1.patch {color:red}ERROR:{color} -1 due to 1 failed/errored test(s), 6537 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestMinimrCliDriver.testCliDriver_schemeAuthority2 {noformat} Test results: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/1105/testReport Console output: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/1105/console Test logs: http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-1105/ Messages: {noformat} Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 1 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12672720
Review Request 26325: HiveServer2 dynamic service discovery should let the JDBC client use default ZooKeeper namespace
--- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/26325/ --- Review request for hive and Thejas Nair. Bugs: HIVE-8172 https://issues.apache.org/jira/browse/HIVE-8172 Repository: hive-git Description --- https://issues.apache.org/jira/browse/HIVE-8172 Diffs - jdbc/src/java/org/apache/hive/jdbc/Utils.java e6b1a36 jdbc/src/java/org/apache/hive/jdbc/ZooKeeperHiveClientHelper.java 06795a5 Diff: https://reviews.apache.org/r/26325/diff/ Testing --- Thanks, Vaibhav Gumashta
[jira] [Updated] (HIVE-8172) HiveServer2 dynamic service discovery should let the JDBC client use default ZooKeeper namespace
[ https://issues.apache.org/jira/browse/HIVE-8172?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vaibhav Gumashta updated HIVE-8172: --- Attachment: HIVE-8172.1.patch HiveServer2 dynamic service discovery should let the JDBC client use default ZooKeeper namespace Key: HIVE-8172 URL: https://issues.apache.org/jira/browse/HIVE-8172 Project: Hive Issue Type: Bug Components: HiveServer2, JDBC Affects Versions: 0.14.0 Reporter: Vaibhav Gumashta Assignee: Vaibhav Gumashta Priority: Critical Labels: TODOC14 Fix For: 0.14.0 Attachments: HIVE-8172.1.patch Currently the client provides a url like: jdbc:hive2://vgumashta.local:2181,vgumashta.local:2182,vgumashta.local:2183/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2. The zooKeeperNamespace param when not provided should use the default value. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-8172) HiveServer2 dynamic service discovery should let the JDBC client use default ZooKeeper namespace
[ https://issues.apache.org/jira/browse/HIVE-8172?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vaibhav Gumashta updated HIVE-8172: --- Status: Patch Available (was: Open)
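The defaulting behavior HIVE-8172 asks for can be sketched as follows: when the JDBC URL's session variables omit {{zooKeeperNamespace}}, fall back to a default instead of failing. This is a hypothetical illustration, not the actual Utils.java code; the default value "hiveserver2" and the parsing helper are assumptions for the example.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of defaulting the zooKeeperNamespace JDBC URL param.
public class ZkNamespaceDefault {
    // Assumed default; the real default lives in the JDBC driver / HiveConf.
    static final String DEFAULT_NAMESPACE = "hiveserver2";

    // Parse ";key=value" session variables and return the namespace,
    // falling back to the default when the param is absent.
    static String namespaceFrom(String sessionVars) {
        Map<String, String> params = new HashMap<>();
        for (String kv : sessionVars.split(";")) {
            String[] parts = kv.split("=", 2);
            if (parts.length == 2) params.put(parts[0], parts[1]);
        }
        return params.getOrDefault("zooKeeperNamespace", DEFAULT_NAMESPACE);
    }

    public static void main(String[] args) {
        // An explicit namespace wins; otherwise the default is used.
        System.out.println(namespaceFrom("serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hs2-prod"));
        System.out.println(namespaceFrom("serviceDiscoveryMode=zooKeeper"));
    }
}
```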
[jira] [Commented] (HIVE-8331) HIVE-8303 followup, investigate result diff [Spark Branch]
[ https://issues.apache.org/jira/browse/HIVE-8331?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14158420#comment-14158420 ] Hive QA commented on HIVE-8331: --- {color:red}Overall{color}: -1 at least one tests failed Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12672806/HIVE-8331.2-spark.patch {color:red}ERROR:{color} -1 due to 2 failed/errored test(s), 6585 tests executed *Failed tests:* {noformat} org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_sample_islocalmode_hook org.apache.hadoop.hive.cli.TestNegativeCliDriver.testNegativeCliDriver_fs_default_name2 {noformat} Test results: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-SPARK-Build/190/testReport Console output: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-SPARK-Build/190/console Test logs: http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-SPARK-Build-190/ Messages: {noformat} Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase Tests exited with: TestsFailedException: 2 tests failed {noformat} This message is automatically generated. ATTACHMENT ID: 12672806 HIVE-8303 followup, investigate result diff [Spark Branch] -- Key: HIVE-8331 URL: https://issues.apache.org/jira/browse/HIVE-8331 Project: Hive Issue Type: Task Components: Spark Reporter: Xuefu Zhang Assignee: Chao Attachments: HIVE-8331.1-spark.patch, HIVE-8331.2-spark.patch HIVE-8303 patch introduced some result diffs for some spark tests. We need to investigate those, including parallel_join0.q, union22.q, vectorized_shufflejoin.q, union_remove_18.q, and maybe more. Also the investigation includes the test failures related to spark. Specifically, union_remove_18.q demonstrated random order. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-8340) HiveServer2 service doesn't stop backend jvm process, which prevents follow-up service start.
[ https://issues.apache.org/jira/browse/HIVE-8340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14158440#comment-14158440 ] Xiaobing Zhou commented on HIVE-8340: - [~hsubramaniyan] I think you have more knowledge to clear [~vgumashta]'s doubts on the review board. Can you comment on this? Thanks!
[jira] [Commented] (HIVE-5536) Incorrect Operation Name is passed to hookcontext
[ https://issues.apache.org/jira/browse/HIVE-5536?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14158449#comment-14158449 ] Brock Noland commented on HIVE-5536: [~szehon] the reason is that SessionState is not correct when the hooks run due to HIVE-2286. I think changing that behavior could have big implications...
[jira] [Commented] (HIVE-5536) Incorrect Operation Name is passed to hookcontext
[ https://issues.apache.org/jira/browse/HIVE-5536?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14158450#comment-14158450 ] Szehon Ho commented on HIVE-5536: - Thanks for the clarification, +1
[jira] [Commented] (HIVE-7960) Upgrade to Hadoop 2.5
[ https://issues.apache.org/jira/browse/HIVE-7960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14158469#comment-14158469 ] Gunther Hagleitner commented on HIVE-7960: -- I think this failure is unrelated. I've seen it happen before and I've run the test multiple times locally w/o failure.
[jira] [Updated] (HIVE-7960) Upgrade to Hadoop 2.5
[ https://issues.apache.org/jira/browse/HIVE-7960?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gunther Hagleitner updated HIVE-7960: - Resolution: Fixed Fix Version/s: 0.14.0 Status: Resolved (was: Patch Available) Committed to trunk and .14
[jira] [Resolved] (HIVE-7184) TestHadoop20SAuthBridge no longer compiles after HADOOP-10448
[ https://issues.apache.org/jira/browse/HIVE-7184?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gunther Hagleitner resolved HIVE-7184. -- Resolution: Fixed TestHadoop20SAuthBridge no longer compiles after HADOOP-10448 - Key: HIVE-7184 URL: https://issues.apache.org/jira/browse/HIVE-7184 Project: Hive Issue Type: Sub-task Components: Tests Affects Versions: 0.14.0 Reporter: Jason Dere Assignee: Jason Dere Attachments: HIVE-7184.1.patch, HIVE-7184.2.patch HADOOP-10448 moves a couple of methods which were being used by the TestHadoop20SAuthBridge test. If/when Hive build uses Hadoop 2.5 as a dependency, this will cause compilation errors. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-7184) TestHadoop20SAuthBridge no longer compiles after HADOOP-10448
[ https://issues.apache.org/jira/browse/HIVE-7184?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gunther Hagleitner updated HIVE-7184: - Fix Version/s: 0.14.0
[jira] [Commented] (HIVE-7873) Re-enable lazy HiveBaseFunctionResultList
[ https://issues.apache.org/jira/browse/HIVE-7873?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14158489#comment-14158489 ] Brock Noland commented on HIVE-7873: I talked to [~jxiang] offline and he said he was interested in this one. Re-enable lazy HiveBaseFunctionResultList - Key: HIVE-7873 URL: https://issues.apache.org/jira/browse/HIVE-7873 Project: Hive Issue Type: Sub-task Reporter: Brock Noland Assignee: Jimmy Xiang Labels: Spark-M4, spark We removed this optimization in HIVE-7799. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-7873) Re-enable lazy HiveBaseFunctionResultList
[ https://issues.apache.org/jira/browse/HIVE-7873?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brock Noland updated HIVE-7873: --- Assignee: Jimmy Xiang
[jira] [Commented] (HIVE-8340) HiveServer2 service doesn't stop backend jvm process, which prevents follow-up service start.
[ https://issues.apache.org/jira/browse/HIVE-8340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14158506#comment-14158506 ] Hari Sankar Sivarama Subramaniyan commented on HIVE-8340: - [~vgumashta] The hiveserver2.cmd changes that [~xiaobingo] made are used to generate the hiveserver2.xml file, which the service host bin uses as its input config file to start up the HiveServer2 service on Windows. Since the changes in hiveserver2.cmd are specific to HiveServer2 and cannot be added to hive-site.xml (since that would affect queries run via the CLI), I believe the changes made by [~xiaobingo] should be fine. Thanks Hari
[jira] [Commented] (HIVE-8340) HiveServer2 service doesn't stop backend jvm process, which prevents follow-up service start.
[ https://issues.apache.org/jira/browse/HIVE-8340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14158513#comment-14158513 ] Vaibhav Gumashta commented on HIVE-8340: [~hsubramaniyan] Cool. I'll mark that comment as resolved.
Re: Review Request 26321: HiveServer2 service doesn't stop backend jvm process, which prevents follow-up service start.
On Oct. 3, 2014, 7:06 p.m., Vaibhav Gumashta wrote: bin/ext/hiveserver2.cmd, line 82 https://reviews.apache.org/r/26321/diff/1/?file=713542#file713542line82 I don't think the -hiveconf parameters should be part of the cmd. I see that you are trying to override the default value of some of these, but I think it is more appropriate to do that at the admin level (i.e. whichever admin service is managing the startup of Hive). Explained by Hari here: https://issues.apache.org/jira/browse/HIVE-8340?focusedCommentId=14158506page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14158506 - Vaibhav --- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/26321/#review55378 ---
[jira] [Assigned] (HIVE-8121) Create micro-benchmarks for ParquetSerde and evaluate performance
[ https://issues.apache.org/jira/browse/HIVE-8121?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sergio Peña reassigned HIVE-8121: - Assignee: Sergio Peña Create micro-benchmarks for ParquetSerde and evaluate performance - Key: HIVE-8121 URL: https://issues.apache.org/jira/browse/HIVE-8121 Project: Hive Issue Type: Sub-task Reporter: Brock Noland Assignee: Sergio Peña These benchmarks should not execute queries but test only the ParquetSerde code to ensure we are as efficient as possible. The output of this JIRA is: 1) Benchmark tool exists 2) We create new tasks under HIVE-8120 to track the improvements required -- This message was sent by Atlassian JIRA (v6.3.4#6332)
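The serde-only microbenchmark described above (no query execution, just the hot code path) can be sketched without any framework. The harness below is a minimal, hypothetical illustration — the class and method names are not from Hive, and a real tool would likely use JMH — but it shows the core shape: a warm-up pass so the JIT compiles the hot path, then a timed loop.

```java
public class MicroBench {
    // Returns average nanoseconds per operation, after a warm-up pass
    // so the JIT has compiled the hot path before measurement begins.
    static long benchNanosPerOp(Runnable op, int warmup, int iters) {
        for (int i = 0; i < warmup; i++) {
            op.run();
        }
        long start = System.nanoTime();
        for (int i = 0; i < iters; i++) {
            op.run();
        }
        return (System.nanoTime() - start) / iters;
    }

    public static void main(String[] args) {
        // Stand-in workload; a real benchmark would invoke the
        // ParquetSerde serialize/deserialize methods here instead.
        long perOp = benchNanosPerOp(() -> {
            StringBuilder sb = new StringBuilder();
            for (int i = 0; i < 100; i++) sb.append(i);
            sb.toString();
        }, 10_000, 100_000);
        System.out.println(perOp + " ns/op");
    }
}
```

A hand-rolled harness like this is only indicative — dead-code elimination and on-stack replacement can distort results, which is why JMH exists.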
[jira] [Commented] (HIVE-3781) Index related events should be delivered to metastore event listener
[ https://issues.apache.org/jira/browse/HIVE-3781?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14158582#comment-14158582 ] Hive QA commented on HIVE-3781: --- {color:green}Overall{color}: +1 all checks pass Here are the results of testing the latest attachment: https://issues.apache.org/jira/secure/attachment/12672726/HIVE-3781.7.patch.txt {color:green}SUCCESS:{color} +1 6522 tests passed Test results: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/1106/testReport Console output: http://ec2-174-129-184-35.compute-1.amazonaws.com/jenkins/job/PreCommit-HIVE-TRUNK-Build/1106/console Test logs: http://ec2-174-129-184-35.compute-1.amazonaws.com/logs/PreCommit-HIVE-TRUNK-Build-1106/ Messages: {noformat} Executing org.apache.hive.ptest.execution.PrepPhase Executing org.apache.hive.ptest.execution.ExecutionPhase Executing org.apache.hive.ptest.execution.ReportingPhase {noformat} This message is automatically generated. ATTACHMENT ID: 12672726 Index related events should be delivered to metastore event listener Key: HIVE-3781 URL: https://issues.apache.org/jira/browse/HIVE-3781 Project: Hive Issue Type: Bug Components: Metastore Affects Versions: 0.9.0 Reporter: Sudhanshu Arora Assignee: Navis Attachments: HIVE-3781.5.patch.txt, HIVE-3781.6.patch.txt, HIVE-3781.7.patch.txt, HIVE-3781.D7731.1.patch, HIVE-3781.D7731.2.patch, HIVE-3781.D7731.3.patch, HIVE-3781.D7731.4.patch, hive.3781.3.patch, hive.3781.4.patch An event listener must be called for any DDL activity. For example, create_index and drop_index today do not call the metastore event listener. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-7800) Parquet Column Index Access Schema Size Checking
[ https://issues.apache.org/jira/browse/HIVE-7800?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Daniel Weeks updated HIVE-7800: --- Attachment: HIVE-7800.3.patch Updated patch that resolves index access issues. Parquet Column Index Access Schema Size Checking Key: HIVE-7800 URL: https://issues.apache.org/jira/browse/HIVE-7800 Project: Hive Issue Type: Bug Affects Versions: 0.14.0 Reporter: Daniel Weeks Assignee: Daniel Weeks Priority: Critical Fix For: 0.14.0 Attachments: HIVE-7800.1.patch, HIVE-7800.2.patch, HIVE-7800.3.patch In the case that a parquet formatted table has partitions where the files have different size schema, using column index access can result in an index out of bounds exception. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HIVE-8341) Transaction information in config file can grow excessively large
Alan Gates created HIVE-8341: Summary: Transaction information in config file can grow excessively large Key: HIVE-8341 URL: https://issues.apache.org/jira/browse/HIVE-8341 Project: Hive Issue Type: Bug Components: Transactions Affects Versions: 0.14.0 Reporter: Alan Gates Assignee: Alan Gates Priority: Critical In our testing we have seen cases where the transaction list grows very large. We need a more efficient way of communicating the list. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
Re: Review Request 26321: HiveServer2 service doesn't stop backend jvm process, which prevents follow-up service start.
On Oct. 3, 2014, 7:06 p.m., Vaibhav Gumashta wrote: common/src/java/org/apache/hadoop/hive/conf/HiveConf.java, line 1726 https://reviews.apache.org/r/26321/diff/1/?file=713544#file713544line1726 Can you add a description for this new param (for why this is required) For Windows OS, we need to pass HIVE_HADOOP_CLASSPATH java parameter while starting hiveserver2 using -hiveconf hive.hadoop.classpath=%HIVE_LIB%. This is where it's defined. On Oct. 3, 2014, 7:06 p.m., Vaibhav Gumashta wrote: ql/src/java/org/apache/hadoop/hive/ql/exec/mr/MapredLocalTask.java, line 241 https://reviews.apache.org/r/26321/diff/1/?file=713545#file713545line241 Can you add a comment here explaining the intent of using HIVE_HADOOP_CLASSPATH? For Windows OS, we need to pass HIVE_HADOOP_CLASSPATH java parameter while starting hiveserver2 using -hiveconf hive.hadoop.classpath=%HIVE_LIB%. This is how it's used. - XIAOBING --- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/26321/#review55378 --- On Oct. 3, 2014, 6:27 p.m., XIAOBING ZHOU wrote: --- This is an automatically generated e-mail. To reply, visit: https://reviews.apache.org/r/26321/ --- (Updated Oct. 3, 2014, 6:27 p.m.) Review request for hive. Repository: hive-git Description --- On stopping the HS2 service from the services tab, it only kills the root process and does not kill the child java process. As a result resources are not freed and this throws an error on restarting from command line. Diffs - bin/ext/hiveserver2.cmd a5f3bb5 bin/hive.cmd c2e9853 common/src/java/org/apache/hadoop/hive/conf/HiveConf.java 3fe67b2 ql/src/java/org/apache/hadoop/hive/ql/exec/mr/MapredLocalTask.java 79da5a0 Diff: https://reviews.apache.org/r/26321/diff/ Testing --- Thanks, XIAOBING ZHOU
[jira] [Commented] (HIVE-7800) Parquet Column Index Access Schema Size Checking
[ https://issues.apache.org/jira/browse/HIVE-7800?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14158594#comment-14158594 ] Daniel Weeks commented on HIVE-7800: This patch actually resolves a few different issues: 1) If the file schema size and table schema size differ across partitions, it no longer throws an index out of bounds exception. 2) There was an odd case where if the calculated input splits resulted in a mapper not processing the first split (due to the row group boundary checking), the array writable used to back the materialized rows would be initialized to the full table length as opposed to the projected column length. This caused problems for column index access because that case was not handled. 3) There was a check included previously that didn't allow the file schema to vary from the table schema (i.e. could not request a column that doesn't exist in the underlying file). This doesn't allow for schema evolution and was removed. Columns missing from the file schema should be null padded in the final result. Parquet Column Index Access Schema Size Checking Key: HIVE-7800 URL: https://issues.apache.org/jira/browse/HIVE-7800 Project: Hive Issue Type: Bug Affects Versions: 0.14.0 Reporter: Daniel Weeks Assignee: Daniel Weeks Priority: Critical Fix For: 0.14.0 Attachments: HIVE-7800.1.patch, HIVE-7800.2.patch, HIVE-7800.3.patch In the case that a parquet formatted table has partitions where the files have different size schema, using column index access can result in an index out of bounds exception. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-8169) Windows: alter table ..set location from hcatalog failed with NullPointerException
[ https://issues.apache.org/jira/browse/HIVE-8169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14158593#comment-14158593 ] Alan Gates commented on HIVE-8169: -- [~vikram.dixit], I'd like to commit this to 0.14 as it causes an NPE. Windows: alter table ..set location from hcatalog failed with NullPointerException -- Key: HIVE-8169 URL: https://issues.apache.org/jira/browse/HIVE-8169 Project: Hive Issue Type: Bug Components: HCatalog Affects Versions: 0.14.0 Environment: Windows Server 2008 R2 Reporter: Xiaobing Zhou Assignee: Xiaobing Zhou Priority: Critical Fix For: 0.14.0 Attachments: HIVE-8169.1.patch, HIVE-8169.2.patch -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-7800) Parquet Column Index Access Schema Size Checking
[ https://issues.apache.org/jira/browse/HIVE-7800?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14158599#comment-14158599 ] Daniel Weeks commented on HIVE-7800: One more in the list: 4) Certain operations (group by + order by) lose the hive schema in the configuration, so the table information isn't available in 'prepareForRead' and column index access resolution didn't work. Parquet Column Index Access Schema Size Checking Key: HIVE-7800 URL: https://issues.apache.org/jira/browse/HIVE-7800 Project: Hive Issue Type: Bug Affects Versions: 0.14.0 Reporter: Daniel Weeks Assignee: Daniel Weeks Priority: Critical Fix For: 0.14.0 Attachments: HIVE-7800.1.patch, HIVE-7800.2.patch, HIVE-7800.3.patch In the case that a parquet formatted table has partitions where the files have different size schema, using column index access can result in an index out of bounds exception. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-8341) Transaction information in config file can grow excessively large
[ https://issues.apache.org/jira/browse/HIVE-8341?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alan Gates updated HIVE-8341: - Attachment: HIVE-8341.patch This patch changes ValidTxnListImpl to compress the string it produces if the string is over 256 bytes. This is not a panacea, but it should reduce the frequency of the issue. [~hagleitn] and [~mmokhtar], you may want to review this since you brought the problem to my attention. Transaction information in config file can grow excessively large - Key: HIVE-8341 URL: https://issues.apache.org/jira/browse/HIVE-8341 Project: Hive Issue Type: Bug Components: Transactions Affects Versions: 0.14.0 Reporter: Alan Gates Assignee: Alan Gates Priority: Critical Attachments: HIVE-8341.patch In our testing we have seen cases where the transaction list grows very large. We need a more efficient way of communicating the list. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
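The approach in the patch note — compress the serialized transaction list once it crosses a size threshold — can be sketched roughly as follows. The class name, the exact 256-byte cutoff handling, and the `plain:`/`deflate:` prefixes here are illustrative assumptions, not the actual ValidTxnListImpl code.

```java
import java.io.ByteArrayOutputStream;
import java.util.Base64;
import java.util.zip.DataFormatException;
import java.util.zip.Deflater;
import java.util.zip.Inflater;

public class TxnListCodec {
    // Hypothetical threshold mirroring the 256-byte cutoff in the patch note.
    static final int THRESHOLD = 256;

    // Serialize, compressing only when the plain form exceeds the threshold.
    // A prefix records which form was written so readers can decode it.
    static String write(String txnList) {
        byte[] plain = txnList.getBytes();
        if (plain.length <= THRESHOLD) {
            return "plain:" + txnList;
        }
        Deflater deflater = new Deflater();
        deflater.setInput(plain);
        deflater.finish();
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buf = new byte[1024];
        while (!deflater.finished()) {
            out.write(buf, 0, deflater.deflate(buf));
        }
        deflater.end();
        return "deflate:" + Base64.getEncoder().encodeToString(out.toByteArray());
    }

    static String read(String encoded) throws DataFormatException {
        if (encoded.startsWith("plain:")) {
            return encoded.substring("plain:".length());
        }
        byte[] compressed =
            Base64.getDecoder().decode(encoded.substring("deflate:".length()));
        Inflater inflater = new Inflater();
        inflater.setInput(compressed);
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buf = new byte[1024];
        while (!inflater.finished()) {
            out.write(buf, 0, inflater.inflate(buf));
        }
        inflater.end();
        return out.toString();
    }
}
```

Long transaction-ID lists are highly repetitive (delimiter-separated, mostly increasing numbers), so deflate typically shrinks them substantially, which is why compression helps even though it is not a complete fix.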
[jira] [Updated] (HIVE-8341) Transaction information in config file can grow excessively large
[ https://issues.apache.org/jira/browse/HIVE-8341?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alan Gates updated HIVE-8341: - Status: Patch Available (was: Open) Transaction information in config file can grow excessively large - Key: HIVE-8341 URL: https://issues.apache.org/jira/browse/HIVE-8341 Project: Hive Issue Type: Bug Components: Transactions Affects Versions: 0.14.0 Reporter: Alan Gates Assignee: Alan Gates Priority: Critical Attachments: HIVE-8341.patch In our testing we have seen cases where the transaction list grows very large. We need a more efficient way of communicating the list. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-7800) Parquet Column Index Access Schema Size Checking
[ https://issues.apache.org/jira/browse/HIVE-7800?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Daniel Weeks updated HIVE-7800: --- Status: Patch Available (was: Open) Parquet Column Index Access Schema Size Checking Key: HIVE-7800 URL: https://issues.apache.org/jira/browse/HIVE-7800 Project: Hive Issue Type: Bug Affects Versions: 0.14.0 Reporter: Daniel Weeks Assignee: Daniel Weeks Priority: Critical Fix For: 0.14.0 Attachments: HIVE-7800.1.patch, HIVE-7800.2.patch, HIVE-7800.3.patch In the case that a parquet formatted table has partitions where the files have different size schema, using column index access can result in an index out of bounds exception. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HIVE-8342) Potential null dereference in ColumnTruncateMapper#jobClose()
Ted Yu created HIVE-8342: Summary: Potential null dereference in ColumnTruncateMapper#jobClose() Key: HIVE-8342 URL: https://issues.apache.org/jira/browse/HIVE-8342 Project: Hive Issue Type: Bug Reporter: Ted Yu Priority: Minor {code} Utilities.mvFileToFinalPath(outputPath, job, success, LOG, dynPartCtx, null, reporter); {code} Utilities.mvFileToFinalPath() calls createEmptyBuckets() where conf is dereferenced: {code} boolean isCompressed = conf.getCompressed(); TableDesc tableInfo = conf.getTableInfo(); {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
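The hazard described above is the classic one: a helper dereferences a parameter that at least one caller passes as null. A minimal stand-alone illustration of the defensive shape of a fix — the `Conf` class below is a stub, not Hive's actual descriptor class:

```java
public class NullConfGuard {
    static class Conf {
        boolean compressed = true;
    }

    // Mirrors the reported shape: the helper reads fields off conf,
    // but one call site (per the JIRA) passes conf as null.
    static boolean isCompressed(Conf conf) {
        if (conf == null) {
            // Guard: skip the conf-dependent work instead of throwing NPE.
            return false;
        }
        return conf.compressed;
    }

    public static void main(String[] args) {
        System.out.println(isCompressed(null));       // false, no NPE
        System.out.println(isCompressed(new Conf())); // true
    }
}
```

Whether the right fix is a guard in the helper or a non-null argument at the call site depends on whether the empty-bucket logic is meaningful for this caller; the sketch only shows the null-safety pattern.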
[jira] [Created] (HIVE-8343) Return value from BlockingQueue.offer() is not checked in DynamicPartitionPruner
Ted Yu created HIVE-8343: Summary: Return value from BlockingQueue.offer() is not checked in DynamicPartitionPruner Key: HIVE-8343 URL: https://issues.apache.org/jira/browse/HIVE-8343 Project: Hive Issue Type: Bug Reporter: Ted Yu Priority: Minor In addEvent() and processVertex(), there are calls such as the following: {code} queue.offer(event); {code} The return value should be checked. If false is returned, the event would not have been queued. Take a look at line 328 in: http://fuseyism.com/classpath/doc/java/util/concurrent/LinkedBlockingQueue-source.html -- This message was sent by Atlassian JIRA (v6.3.4#6332)
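The difference between the silently-dropping call and a checked one is easy to demonstrate on a bounded queue. This is a generic sketch of the pattern, not the DynamicPartitionPruner code itself:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class OfferCheck {
    public static void main(String[] args) throws InterruptedException {
        // Capacity 1 so the second offer() fails.
        BlockingQueue<String> queue = new ArrayBlockingQueue<>(1);

        queue.offer("event-1");                    // returns true: accepted
        boolean accepted = queue.offer("event-2"); // returns false: queue full
        System.out.println(accepted);              // false — event-2 was dropped

        // Checked alternative: fall back to the blocking put(), which
        // cannot lose the element (it waits for space instead).
        if (!queue.offer("event-2")) {
            queue.take();          // free a slot for this demo
            queue.put("event-2");  // blocks until space is available
        }
        System.out.println(queue.peek());          // event-2
    }
}
```

For a LinkedBlockingQueue constructed without a capacity the bound is Integer.MAX_VALUE, so offer() almost never fails in practice — but checking the return value (or using put()) makes the no-loss intent explicit either way.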
[jira] [Created] (HIVE-8344) Hive on Tez sets mapreduce.framework.name to yarn-tez
Gunther Hagleitner created HIVE-8344: Summary: Hive on Tez sets mapreduce.framework.name to yarn-tez Key: HIVE-8344 URL: https://issues.apache.org/jira/browse/HIVE-8344 Project: Hive Issue Type: Bug Reporter: Gunther Hagleitner Assignee: Gunther Hagleitner This was done to run MR jobs when in Tez mode (emulate MR on Tez). However, we don't switch back when the user specifies MR as exec engine. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
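The fix implied by the description is to derive `mapreduce.framework.name` from the currently selected execution engine rather than setting it once. A hypothetical sketch of that mapping, using a plain `Properties` stand-in for the Hadoop `Configuration` object:

```java
import java.util.Properties;

public class FrameworkName {
    // Hypothetical mapping: "yarn-tez" lets MR jobs run on Tez; plain
    // "yarn" must be restored when the user switches back to MR.
    static String frameworkFor(String engine) {
        return "tez".equals(engine) ? "yarn-tez" : "yarn";
    }

    public static void main(String[] args) {
        Properties conf = new Properties();
        // User switches the engine back to MR...
        conf.setProperty("hive.execution.engine", "mr");
        // ...so the framework name must follow, instead of staying "yarn-tez".
        conf.setProperty("mapreduce.framework.name",
                frameworkFor(conf.getProperty("hive.execution.engine")));
        System.out.println(conf.getProperty("mapreduce.framework.name")); // yarn
    }
}
```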
[jira] [Updated] (HIVE-8344) Hive on Tez sets mapreduce.framework.name to yarn-tez
[ https://issues.apache.org/jira/browse/HIVE-8344?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gunther Hagleitner updated HIVE-8344: - Status: Patch Available (was: Open) Hive on Tez sets mapreduce.framework.name to yarn-tez - Key: HIVE-8344 URL: https://issues.apache.org/jira/browse/HIVE-8344 Project: Hive Issue Type: Bug Reporter: Gunther Hagleitner Assignee: Gunther Hagleitner Attachments: HIVE-8344.1.patch This was done to run MR jobs when in Tez mode (emulate MR on Tez). However, we don't switch back when the user specifies MR as exec engine. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-8344) Hive on Tez sets mapreduce.framework.name to yarn-tez
[ https://issues.apache.org/jira/browse/HIVE-8344?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gunther Hagleitner updated HIVE-8344: - Attachment: HIVE-8344.1.patch Hive on Tez sets mapreduce.framework.name to yarn-tez - Key: HIVE-8344 URL: https://issues.apache.org/jira/browse/HIVE-8344 Project: Hive Issue Type: Bug Reporter: Gunther Hagleitner Assignee: Gunther Hagleitner Attachments: HIVE-8344.1.patch This was done to run MR jobs when in Tez mode (emulate MR on Tez). However, we don't switch back when the user specifies MR as exec engine. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (HIVE-8345) q-test for Avro date support
Mohit Sabharwal created HIVE-8345: - Summary: q-test for Avro date support Key: HIVE-8345 URL: https://issues.apache.org/jira/browse/HIVE-8345 Project: Hive Issue Type: Bug Components: Serializers/Deserializers Reporter: Mohit Sabharwal Assignee: Mohit Sabharwal HIVE-8130 commit missed q-test related files. Adding those in this patch. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-6669) sourcing txn-script from schema script results in failure for mysql & oracle
[ https://issues.apache.org/jira/browse/HIVE-6669?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alan Gates updated HIVE-6669: - Attachment: HIVE-6669.patch This patch adds hive-txn-schema-0.14 scripts, which are all identical to hive-txn-schema-0.13 scripts, but they are added for completeness. The transaction tables are also added to the hive-schema-0.14 scripts. [~damien.carol], please review this patch as I changed the transaction tables for postgres to lower case so that they would work without requiring quotes in TxnHandler/CompactionTxnHandler. I can't reproduce your errors (my version of postgres doesn't seem to care about upper/lower case), so I wanted to have you check it before I commit this. sourcing txn-script from schema script results in failure for mysql & oracle Key: HIVE-6669 URL: https://issues.apache.org/jira/browse/HIVE-6669 Project: Hive Issue Type: Bug Components: Metastore Affects Versions: 0.14.0 Reporter: Prasad Mujumdar Assignee: Alan Gates Priority: Blocker Attachments: HIVE-6669.patch This issue is addressed in 0.13 by in-lining the transaction schema statements in the schema initialization script (HIVE-6559). The 0.14 schema initialization is not fixed. This is the follow-up ticket to address the problem in 0.14. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-8340) HiveServer2 service doesn't stop backend jvm process, which prevents follow-up service start.
[ https://issues.apache.org/jira/browse/HIVE-8340?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaobing Zhou updated HIVE-8340: Attachment: HIVE-8340.2.patch Posted 2nd patch to be reviewed. Thanks! HiveServer2 service doesn't stop backend jvm process, which prevents follow-up service start. - Key: HIVE-8340 URL: https://issues.apache.org/jira/browse/HIVE-8340 Project: Hive Issue Type: Bug Components: HiveServer2 Affects Versions: 0.14.0 Environment: Windows Reporter: Xiaobing Zhou Assignee: Xiaobing Zhou Priority: Critical Fix For: 0.14.0 Attachments: HIVE-8340.1.patch, HIVE-8340.2.patch On stopping the HS2 service from the services tab, it only kills the root process and does not kill the child java process. As a result resources are not freed and this throws an error on restarting from command line. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-8345) q-test for Avro date support
[ https://issues.apache.org/jira/browse/HIVE-8345?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mohit Sabharwal updated HIVE-8345: -- Attachment: HIVE-8345.patch q-test for Avro date support Key: HIVE-8345 URL: https://issues.apache.org/jira/browse/HIVE-8345 Project: Hive Issue Type: Bug Components: Serializers/Deserializers Reporter: Mohit Sabharwal Assignee: Mohit Sabharwal Attachments: HIVE-8345.patch HIVE-8130 commit missed q-test related files. Adding those in this patch. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-8345) q-test for Avro date support
[ https://issues.apache.org/jira/browse/HIVE-8345?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mohit Sabharwal updated HIVE-8345: -- Status: Patch Available (was: Open) q-test for Avro date support Key: HIVE-8345 URL: https://issues.apache.org/jira/browse/HIVE-8345 Project: Hive Issue Type: Bug Components: Serializers/Deserializers Reporter: Mohit Sabharwal Assignee: Mohit Sabharwal Attachments: HIVE-8345.patch HIVE-8130 commit missed q-test related files. Adding those in this patch. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-8345) q-test for Avro date support
[ https://issues.apache.org/jira/browse/HIVE-8345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14158662#comment-14158662 ] Xuefu Zhang commented on HIVE-8345: --- +1 q-test for Avro date support Key: HIVE-8345 URL: https://issues.apache.org/jira/browse/HIVE-8345 Project: Hive Issue Type: Bug Components: Serializers/Deserializers Reporter: Mohit Sabharwal Assignee: Mohit Sabharwal Attachments: HIVE-8345.patch HIVE-8130 commit missed q-test related files. Adding those in this patch. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-8169) Windows: alter table ..set location from hcatalog failed with NullPointerException
[ https://issues.apache.org/jira/browse/HIVE-8169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alan Gates updated HIVE-8169: - Resolution: Fixed Status: Resolved (was: Patch Available) Patch committed to 0.14 branch and trunk. Thanks, Xiaobing, for the patch. Windows: alter table ..set location from hcatalog failed with NullPointerException -- Key: HIVE-8169 URL: https://issues.apache.org/jira/browse/HIVE-8169 Project: Hive Issue Type: Bug Components: HCatalog Affects Versions: 0.14.0 Environment: Windows Server 2008 R2 Reporter: Xiaobing Zhou Assignee: Xiaobing Zhou Priority: Critical Fix For: 0.14.0 Attachments: HIVE-8169.1.patch, HIVE-8169.2.patch -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (HIVE-8197) Tez and Vectorization Insert into ORC Table with timestamp column erroneously repeats the last row's column value
[ https://issues.apache.org/jira/browse/HIVE-8197?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Matt McCline resolved HIVE-8197. Resolution: Cannot Reproduce Tez and Vectorization Insert into ORC Table with timestamp column erroneously repeats the last row's column value - Key: HIVE-8197 URL: https://issues.apache.org/jira/browse/HIVE-8197 Project: Hive Issue Type: Bug Environment: Tez and Vectorization. Reporter: Matt McCline Assignee: Matt McCline Priority: Critical In diagnosing why only(?) a Tez and Vectorized query with min and max aggregates was always returning the last row read's column value, discovered the problem was in creating the test table {code} CREATE TABLE alltypesorc_string STORED AS ORC AS SELECT ctinyint as ctinyint, to_utc_timestamp(ctimestamp1, 'America/Los_Angeles') as ctimestamp1, CAST(to_utc_timestamp(ctimestamp1, 'America/Los_Angeles') AS STRING) as stimestamp1 FROM alltypesorc WHERE ctinyint > 0 LIMIT 40; {code} I think it is related to what Prasanth mentioned as a possibility: Saving a Timestamp as a Writable object that gets overwritten. One suspect is the Writable[] records array in VectorFileSinkOperator in the ProcessOp method. Or, perhaps it is in VectorReduceSinkOperator. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (HIVE-8197) Tez and Vectorization Insert into ORC Table with timestamp column erroneously repeats the last row's column value
[ https://issues.apache.org/jira/browse/HIVE-8197?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14158665#comment-14158665 ] Matt McCline commented on HIVE-8197: No longer repros. Fixed with earlier change that simplified VectorFileSinkOperator to just forward rows rather than buffer them in VectorOrcSerde. Tez and Vectorization Insert into ORC Table with timestamp column erroneously repeats the last row's column value - Key: HIVE-8197 URL: https://issues.apache.org/jira/browse/HIVE-8197 Project: Hive Issue Type: Bug Environment: Tez and Vectorization. Reporter: Matt McCline Assignee: Matt McCline Priority: Critical In diagnosing why only(?) a Tez and Vectorized query with min and max aggregates was always returning the last row read's column value, discovered the problem was in creating the test table {code} CREATE TABLE alltypesorc_string STORED AS ORC AS SELECT ctinyint as ctinyint, to_utc_timestamp(ctimestamp1, 'America/Los_Angeles') as ctimestamp1, CAST(to_utc_timestamp(ctimestamp1, 'America/Los_Angeles') AS STRING) as stimestamp1 FROM alltypesorc WHERE ctinyint > 0 LIMIT 40; {code} I think it is related to what Prasanth mentioned as a possibility: Saving a Timestamp as a Writable object that gets overwritten. One suspect is the Writable[] records array in VectorFileSinkOperator in the ProcessOp method. Or, perhaps it is in VectorReduceSinkOperator. -- This message was sent by Atlassian JIRA (v6.3.4#6332)