[jira] [Updated] (HIVE-8439) query processor fails to handle multiple insert clauses for the same table

2014-10-28 Thread Gordon Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8439?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gordon Wang updated HIVE-8439:
--
Summary: query processor fails to handle multiple insert clauses for the 
same table  (was: multiple insert into the same table)

 query processor fails to handle multiple insert clauses for the same table
 --

 Key: HIVE-8439
 URL: https://issues.apache.org/jira/browse/HIVE-8439
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
Affects Versions: 0.12.0, 0.13.0
Reporter: Gordon Wang

 When putting multiple insert clauses for the same table in one SQL statement, 
 the Hive query plan analyzer fails to synthesize the right plan.
 Here are the steps to reproduce.
 {noformat}
 create table T1(i int, j int);
 create table T2(m int) partitioned by (n int);
 explain from T1
 insert into table T2 partition (n = 1)
   select T1.i where T1.j = 1
 insert overwrite table T2 partition (n = 2)
   select T1.i where T1.j = 2
   ;
 {noformat}
 When there is an INSERT INTO clause among the multiple insert clauses, the 
 INSERT OVERWRITE is treated as an INSERT INTO.
 I dug into the source code; it looks like Hive does not support mixing INSERT 
 INTO and INSERT OVERWRITE for the same table in multiple insert clauses.
 Here are my findings.
 1. In the semantic analyzer, when processing TOK_INSERT_INTO, the analyzer 
 puts the table name into a set that contains all the INSERT INTO table names.
 2. When generating the file sink plan, the analyzer checks whether the table 
 name is in the set; if it is, the replace flag is set to false. Here is the 
 code snippet.
 {noformat}
   // Create the work for moving the table
   // NOTE: specify Dynamic partitions in dest_tab for WriteEntity
   if (!isNonNativeTable) {
     ltd = new LoadTableDesc(queryTmpdir,
         ctx.getExternalTmpFileURI(dest_path.toUri()),
         table_desc, dpCtx);
     ltd.setReplace(!qb.getParseInfo().isInsertIntoTable(dest_tab.getDbName(),
         dest_tab.getTableName()));
     ltd.setLbCtx(lbCtx);
     if (holdDDLTime) {
       LOG.info("this query will not update transient_lastDdlTime!");
       ltd.setHoldDDLTime(true);
     }
     loadTableWork.add(ltd);
   }
 {noformat}
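 The two findings above can be reduced to a small sketch. The class and method 
 names below are hypothetical, not Hive's actual code: the point is only that 
 when the insert-into set is keyed by database and table name alone, a later 
 INSERT OVERWRITE into a different partition of the same table also resolves to 
 replace=false.

```java
import java.util.HashSet;
import java.util.Set;

// Illustrative reduction of the behaviour described above (not Hive's code):
// INSERT INTO destinations are remembered in a set keyed by "db.table" only.
public class InsertIntoSetSketch {
  private final Set<String> insertIntoTables = new HashSet<>();

  // Step 1: when a TOK_INSERT_INTO clause is seen, record the table name.
  public void recordInsertInto(String db, String table) {
    insertIntoTables.add((db + "." + table).toLowerCase());
  }

  // Step 2: while generating the file sink plan, replace=true means overwrite.
  // The partition spec is not part of the key, so any clause targeting the
  // same table -- including an INSERT OVERWRITE of another partition -- gets
  // replace=false.
  public boolean computeReplaceFlag(String db, String table) {
    return !insertIntoTables.contains((db + "." + table).toLowerCase());
  }

  public static void main(String[] args) {
    InsertIntoSetSketch qb = new InsertIntoSetSketch();
    // Clause 1: INSERT INTO T2 PARTITION (n = 1)
    qb.recordInsertInto("default", "T2");
    // Clause 2: INSERT OVERWRITE T2 PARTITION (n = 2) still sees false.
    System.out.println(qb.computeReplaceFlag("default", "T2")); // prints false
  }
}
```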



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HIVE-8439) multiple insert into the same table

2014-10-27 Thread Gordon Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HIVE-8439?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gordon Wang updated HIVE-8439:
--
Description: (same as the issue description quoted above, with minor wording fixes)

[jira] [Commented] (HIVE-8439) multiple insert into the same table

2014-10-27 Thread Gordon Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8439?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14184938#comment-14184938
 ] 

Gordon Wang commented on HIVE-8439:
---

Currently, the Hive semantic analyzer cannot handle multiple insert clauses 
correctly. When mixing INSERT INTO and INSERT OVERWRITE on the same table, the 
semantic analyzer cannot tell which clause is the OVERWRITE.

Some more information about the overwrite clause should be recorded in the 
QueryBlock.
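One possible direction, sketched with hypothetical names (this is not a 
committed design, just an illustration of "recording more information"): key 
the recorded overwrite information by destination table plus partition spec, 
so the file sink plan can recover the per-clause semantics.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: record per-destination overwrite information keyed by
// table + partition spec, instead of a table-name-only set, so each clause
// keeps its own INSERT INTO vs INSERT OVERWRITE semantics.
public class QueryBlockSketch {
  // true = overwrite (replace), false = insert into (append)
  private final Map<String, Boolean> overwriteByDest = new HashMap<>();

  public void recordClause(String table, String partitionSpec, boolean overwrite) {
    overwriteByDest.put(table + "/" + partitionSpec, overwrite);
  }

  public boolean computeReplaceFlag(String table, String partitionSpec) {
    // Default to overwrite for destinations never marked as INSERT INTO.
    return overwriteByDest.getOrDefault(table + "/" + partitionSpec, true);
  }

  public static void main(String[] args) {
    QueryBlockSketch qb = new QueryBlockSketch();
    qb.recordClause("T2", "n=1", false); // INSERT INTO ... PARTITION (n = 1)
    qb.recordClause("T2", "n=2", true);  // INSERT OVERWRITE ... PARTITION (n = 2)
    System.out.println(qb.computeReplaceFlag("T2", "n=1")); // prints false
    System.out.println(qb.computeReplaceFlag("T2", "n=2")); // prints true
  }
}
```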



[jira] [Commented] (HIVE-8532) return code of source xxx clause is missing

2014-10-23 Thread Gordon Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8532?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14182375#comment-14182375
 ] 

Gordon Wang commented on HIVE-8532:
---

Looks like the UT failure is not caused by this patch. The failing UT is not in 
the changed code path.

 return code of source xxx clause is missing
 -

 Key: HIVE-8532
 URL: https://issues.apache.org/jira/browse/HIVE-8532
 Project: Hive
  Issue Type: Bug
  Components: Clients
Affects Versions: 0.12.0, 0.13.1
Reporter: Gordon Wang
 Attachments: HIVE-8532.patch


 When executing a "source hql-file" clause, the Hive client driver does not 
 catch the return code of the command.
 This behaviour causes an issue when running a Hive query in an Oozie workflow: 
 when the source clause is put into an Oozie workflow, Oozie cannot get the 
 return code of the command and thus considers the source clause successful 
 all the time.
 So when the source clause fails, the Hive query does not abort, and the Oozie 
 workflow does not abort either.
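 A minimal model of the described behaviour (hypothetical method names; Hive's 
 real client driver is more involved): the bug is a driver that runs the 
 sourced file but drops its return code, so callers such as Oozie always see 
 success.

```java
import java.util.Arrays;
import java.util.List;

// Toy model of the bug, not Hive's actual CliDriver: each "statement" in the
// sourced file either succeeds or fails, and the driver must propagate the
// file's return code instead of discarding it.
public class SourceReturnCodeSketch {
  // Pretend each entry is a statement from the sourced file; "bad" fails.
  static int processFile(List<String> statements) {
    for (String s : statements) {
      if (s.equals("bad")) return 1; // non-zero return code on failure
    }
    return 0;
  }

  // Buggy driver: runs the sourced file but ignores its return code.
  static int processSourceBuggy(List<String> sourced) {
    processFile(sourced); // return code dropped here
    return 0;             // caller (e.g. Oozie) always sees success
  }

  // Fixed driver: propagates the return code of the source command.
  static int processSourceFixed(List<String> sourced) {
    return processFile(sourced);
  }

  public static void main(String[] args) {
    List<String> failing = Arrays.asList("ok", "bad");
    System.out.println(processSourceBuggy(failing)); // prints 0 despite failure
    System.out.println(processSourceFixed(failing)); // prints 1
  }
}
```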





[jira] [Created] (HIVE-8532) return code of source xxx clause is missing

2014-10-20 Thread Gordon Wang (JIRA)
Gordon Wang created HIVE-8532:
-

 Summary: return code of source xxx clause is missing
 Key: HIVE-8532
 URL: https://issues.apache.org/jira/browse/HIVE-8532
 Project: Hive
  Issue Type: Bug
  Components: Clients
Affects Versions: 0.13.1, 0.12.0
Reporter: Gordon Wang


(Description as quoted above.)


[jira] [Commented] (HIVE-8532) return code of source xxx clause is missing

2014-10-20 Thread Gordon Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-8532?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14177832#comment-14177832
 ] 

Gordon Wang commented on HIVE-8532:
---

The fix is easy; I think a patch will come soon.



[jira] [Created] (HIVE-8439) multiple insert into the same table

2014-10-12 Thread Gordon Wang (JIRA)
Gordon Wang created HIVE-8439:
-

 Summary: multiple insert into the same table
 Key: HIVE-8439
 URL: https://issues.apache.org/jira/browse/HIVE-8439
 Project: Hive
  Issue Type: Bug
  Components: Query Processor
Affects Versions: 0.13.0, 0.12.0
Reporter: Gordon Wang


(Description as quoted above.)


[jira] [Commented] (HIVE-4629) HS2 should support an API to retrieve query logs

2014-03-09 Thread Gordon Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-4629?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13925394#comment-13925394
 ] 

Gordon Wang commented on HIVE-4629:
---

What is the status of this jira?
Has anyone tried to rebase it to the latest trunk?
I think it is a useful feature, especially when doing some testing with HQL.

 HS2 should support an API to retrieve query logs
 

 Key: HIVE-4629
 URL: https://issues.apache.org/jira/browse/HIVE-4629
 Project: Hive
  Issue Type: Sub-task
  Components: HiveServer2
Reporter: Shreepadma Venugopalan
Assignee: Shreepadma Venugopalan
 Attachments: HIVE-4629-no_thrift.1.patch, HIVE-4629.1.patch, 
 HIVE-4629.2.patch


 HiveServer2 should support an API to retrieve query logs. This is 
 particularly relevant because HiveServer2 supports async execution but 
 doesn't provide a way to report progress. Providing an API to retrieve query 
 logs will help report progress to the client.





[jira] [Created] (HIVE-6244) hive UT fails on top of Hadoop 2.2.0

2014-01-20 Thread Gordon Wang (JIRA)
Gordon Wang created HIVE-6244:
-

 Summary: hive UT fails on top of Hadoop 2.2.0
 Key: HIVE-6244
 URL: https://issues.apache.org/jira/browse/HIVE-6244
 Project: Hive
  Issue Type: Bug
  Components: Shims
Affects Versions: 0.12.0
Reporter: Gordon Wang


When building Hive 0.12.0 on top of Hadoop 2.2.0, many UTs fail. The error 
messages are like this.
{code}
Job Submission failed with exception 'java.lang.IllegalArgumentException(Wrong 
FS: 
pfile:/home/pivotal/jenkins/workspace/Hive0.12UT_withJDK7/build/ql/test/data/warehouse/src,
 expected: file:///)'
junit.framework.AssertionFailedError: Client Execution failed with error code = 
1
See build/ql/tmp/hive.log, or try ant test ... -Dtest.silent=false to get 
more logs.
at junit.framework.Assert.fail(Assert.java:50)
at 
org.apache.hadoop.hive.cli.TestCliDriver.runTest(TestCliDriver.java:6697)
at 
org.apache.hadoop.hive.cli.TestCliDriver.testCliDriver_join_empty(TestCliDriver.java:3807)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at junit.framework.TestCase.runTest(TestCase.java:168)
at junit.framework.TestCase.runBare(TestCase.java:134)
at junit.framework.TestResult$1.protect(TestResult.java:110)
at junit.framework.TestResult.runProtected(TestResult.java:128)
at junit.framework.TestResult.run(TestResult.java:113)
at junit.framework.TestCase.run(TestCase.java:124)
at junit.framework.TestSuite.runTest(TestSuite.java:243)
at junit.framework.TestSuite.run(TestSuite.java:238)
at 
org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.run(JUnitTestRunner.java:520)
at 
org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.launch(JUnitTestRunner.java:1060)
at 
org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.main(JUnitTestRunner.java:911)
{code}

listLocatedStatus is not implemented in Hive shims. I think this is the root 
cause.
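Assuming the root cause above, the failure mode can be modelled with toy 
classes (these are not Hadoop's actual FileSystem APIs): a proxy file system 
rewrites a custom scheme such as pfile:// only in the methods it overrides, so 
a newer method like listLocatedStatus, inherited un-overridden from the base 
class, passes the raw path through and triggers "Wrong FS".

```java
// Toy model of a missing shim override, not Hadoop code.
class BaseFs {
  // Stands in for the underlying FileSystem, which only accepts file:// paths.
  String listLocatedStatus(String path) {
    if (!path.startsWith("file://")) {
      throw new IllegalArgumentException("Wrong FS: " + path + ", expected: file:///");
    }
    return "ok:" + path;
  }
}

class ProxyFsSketch extends BaseFs {
  // Rewrite the test scheme to the real one before delegating.
  private String swizzle(String path) {
    return path.replaceFirst("^pfile://", "file://");
  }

  // The kind of fix implied above: the proxy must also override the newer
  // method and rewrite the scheme, instead of inheriting the base behaviour.
  @Override
  String listLocatedStatus(String path) {
    return super.listLocatedStatus(swizzle(path));
  }
}

public class ShimSketch {
  public static void main(String[] args) {
    BaseFs broken = new BaseFs();        // no override: throws Wrong FS
    BaseFs fixed = new ProxyFsSketch();  // override rewrites the scheme
    try {
      broken.listLocatedStatus("pfile:///tmp/warehouse/src");
    } catch (IllegalArgumentException e) {
      System.out.println(e.getMessage());
    }
    System.out.println(fixed.listLocatedStatus("pfile:///tmp/warehouse/src"));
  }
}
```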





[jira] [Created] (HIVE-6219) TestProtocolBuffersObjectInspectors fails when upgrading protobuf to 2.5.0

2014-01-16 Thread Gordon Wang (JIRA)
Gordon Wang created HIVE-6219:
-

 Summary: TestProtocolBuffersObjectInspectors fails when upgrading 
protobuf to 2.5.0
 Key: HIVE-6219
 URL: https://issues.apache.org/jira/browse/HIVE-6219
 Project: Hive
  Issue Type: Bug
  Components: Serializers/Deserializers
Affects Versions: 0.13.0
Reporter: Gordon Wang


Now that Hadoop 2.2.0 is GA, the protobuf version in Hadoop 2.2.0 is 2.5.0.
I noticed that there is already a jira, HIVE-5112, about upgrading protobuf, 
but in that jira {{serde/if/test/complexpb.proto}} is not regenerated with 
protobuf 2.5.0.
If we generate it with protobuf 2.5.0, then TestProtocolBuffersObjectInspectors 
fails.

The error message is like this
{code}
java.lang.RuntimeException: Internal error: Cannot find ObjectInspector  for 
UNKNOWN
at 
org.apache.hadoop.hive.serde2.objectinspector.primitive.PrimitiveObjectInspectorFactory.getPrimitiveJavaObjectInspector(PrimitiveObjectInspectorFactory.java:332)
at 
org.apache.hadoop.hive.serde2.objectinspector.ObjectInspectorFactory.getReflectionObjectInspectorNoCache(ObjectInspectorFactory.java:146)
at 
org.apache.hadoop.hive.serde2.objectinspector.ObjectInspectorFactory.getReflectionObjectInspector(ObjectInspectorFactory.java:69)
at 
org.apache.hadoop.hive.serde2.objectinspector.ObjectInspectorFactory.getReflectionObjectInspectorNoCache(ObjectInspectorFactory.java:192)
at 
org.apache.hadoop.hive.serde2.objectinspector.ObjectInspectorFactory.getReflectionObjectInspector(ObjectInspectorFactory.java:69)
at 
org.apache.hadoop.hive.serde2.objectinspector.TestProtocolBuffersObjectInspectors.testProtocolBuffersObjectInspectors(TestProtocolBuffersObjectInspectors.java:40)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at junit.framework.TestCase.runTest(TestCase.java:168)
at junit.framework.TestCase.runBare(TestCase.java:134)
at junit.framework.TestResult$1.protect(TestResult.java:110)
at junit.framework.TestResult.runProtected(TestResult.java:128)
at junit.framework.TestResult.run(TestResult.java:113)
at junit.framework.TestCase.run(TestCase.java:124)
at junit.framework.TestSuite.runTest(TestSuite.java:243)
at junit.framework.TestSuite.run(TestSuite.java:238)
at 
org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.run(JUnitTestRunner.java:520)
at 
org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.launch(JUnitTestRunner.java:1060)
at 
org.apache.tools.ant.taskdefs.optional.junit.JUnitTestRunner.main(JUnitTestRunner.java:911)
{code}





[jira] [Commented] (HIVE-6219) TestProtocolBuffersObjectInspectors fails when upgrading protobuf to 2.5.0

2014-01-16 Thread Gordon Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-6219?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13874389#comment-13874389
 ] 

Gordon Wang commented on HIVE-6219:
---

I dug into it a little bit further, and I think it may not be an issue in the 
UT only. This bug may affect users who use protobuf 2.5.0 generated code in 
Hive.

The root cause of this bug is that the auto-generated Java code in protobuf 
2.5.0 is quite different from 2.4.x.
If a struct field is of string type, in 2.4.x the generated code is like
{code}
private java.lang.String aString_ = "";
{code}
But in 2.5.0, the code is like
{code}
private java.lang.Object aString_;
{code}
And then the inspector for class type Object is UNKNOWN, so the exception is 
thrown.
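The effect of that declared-type change can be shown with a small reflection 
sketch (illustrative only; Hive's ObjectInspectorFactory is far more complete): 
an inspector chosen from a field's declared type works for java.lang.String 
but finds no match for java.lang.Object.

```java
import java.lang.reflect.Field;

// Illustrative sketch of a reflection-based inspector lookup, not Hive's code:
// the inspector is chosen from the field's *declared* type, so a field that
// protobuf 2.5.0 declares as java.lang.Object maps to nothing usable.
public class InspectorSketch {
  static class Pb24Message { private String aString_ = ""; } // 2.4.x style
  static class Pb25Message { private Object aString_; }      // 2.5.0 style

  static String inspectorFor(Class<?> clazz, String fieldName) {
    try {
      Field f = clazz.getDeclaredField(fieldName);
      Class<?> declared = f.getType();
      if (declared == String.class) return "StringObjectInspector";
      if (declared == int.class) return "IntObjectInspector";
      return "UNKNOWN"; // java.lang.Object gives no primitive match
    } catch (NoSuchFieldException e) {
      return "UNKNOWN";
    }
  }

  public static void main(String[] args) {
    System.out.println(inspectorFor(Pb24Message.class, "aString_")); // prints StringObjectInspector
    System.out.println(inspectorFor(Pb25Message.class, "aString_")); // prints UNKNOWN
  }
}
```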



[jira] [Commented] (HIVE-5112) Upgrade protobuf to 2.5 from 2.4

2014-01-16 Thread Gordon Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HIVE-5112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13874390#comment-13874390
 ] 

Gordon Wang commented on HIVE-5112:
---

Hi folks,
I find that if I generate Complex.proto with protobuf 2.5.0, 
TestProtocolBuffersObjectInspectors fails. I think this jira did not upgrade 
protobuf completely.
So I filed HIVE-6219 to track this. If this is really a bug, shall we reopen 
this jira?

 Upgrade protobuf to 2.5 from 2.4
 

 Key: HIVE-5112
 URL: https://issues.apache.org/jira/browse/HIVE-5112
 Project: Hive
  Issue Type: Improvement
Reporter: Brock Noland
Assignee: Owen O'Malley
 Fix For: 0.13.0

 Attachments: HIVE-5112.2.patch, HIVE-5112.D12429.1.patch


 Hadoop and Hbase have both upgraded protobuf. We should as well.


