Hey Thomas -

Thanks! That's exactly the issue, and that pointer was super helpful in
figuring this out. I upgraded the jars you identified as needing updates:

    datanucleus-core-2.2.5.jar
    datanucleus-rdbms-2.2.4.jar

Interestingly, I had to make one more change than you did, due to a query
error (full stack trace below, since it's huge). I traced the error back to:

https://github.com/apache/hive/blob/trunk/metastore/src/model/package.jdo#L49

Notice that the column mapping is "FCOMMENT" - with an F prefix. In our
database the column is named "COMMENT" - without the F. After changing that
line to COMMENT so it matches our table, I can now run range queries!
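For anyone following along, the edit amounts to changing the column name in
that package.jdo mapping. A sketch of the before/after (the surrounding field
name and attributes here are illustrative and may differ slightly in your Hive
checkout; the part that matters is the column name):

```xml
<!-- Before: DataNucleus expects a column named FCOMMENT in COLUMNS_V2 -->
<field name="comment">
  <column name="FCOMMENT" length="256" jdbc-type="VARCHAR"/>
</field>

<!-- After: matches the COMMENT column our MySQL metastore actually has -->
<field name="comment">
  <column name="COMMENT" length="256" jdbc-type="VARCHAR"/>
</field>
```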

I don't understand what's going on here yet, though, since the 2.0.3
datanucleus jars do not have this problem.


Stack trace:

12/03/14 01:00:17 ERROR server.TThreadPoolServer: Error occurred during processing of message.

Iteration request failed : SELECT `A0`.`FCOMMENT`,`A0`.`COLUMN_NAME`,`A0`.`TYPE_NAME`,`A0`.`INTEGER_IDX` AS NUCORDER0 FROM `COLUMNS_V2` `A0` WHERE `A0`.`CD_ID` = ? AND `A0`.`INTEGER_IDX` >= 0 ORDER BY NUCORDER0

org.datanucleus.exceptions.NucleusDataStoreException: Iteration request failed : SELECT `A0`.`FCOMMENT`,`A0`.`COLUMN_NAME`,`A0`.`TYPE_NAME`,`A0`.`INTEGER_IDX` AS NUCORDER0 FROM `COLUMNS_V2` `A0` WHERE `A0`.`CD_ID` = ? AND `A0`.`INTEGER_IDX` >= 0 ORDER BY NUCORDER0
 at org.datanucleus.store.rdbms.scostore.RDBMSJoinListStore.listIterator(RDBMSJoinListStore.java:189)
 at org.datanucleus.store.mapped.scostore.AbstractListStore.listIterator(AbstractListStore.java:84)
 at org.datanucleus.store.mapped.scostore.AbstractListStore.iterator(AbstractListStore.java:74)
 at org.datanucleus.store.types.sco.backed.List.loadFromStore(List.java:240)
 at org.datanucleus.store.types.sco.backed.List.iterator(List.java:493)
 at org.apache.hadoop.hive.metastore.ObjectStore.convertToFieldSchemas(ObjectStore.java:926)
 at org.apache.hadoop.hive.metastore.ObjectStore.convertToStorageDescriptor(ObjectStore.java:1019)
 at org.apache.hadoop.hive.metastore.ObjectStore.convertToStorageDescriptor(ObjectStore.java:1029)
 at org.apache.hadoop.hive.metastore.ObjectStore.convertToTable(ObjectStore.java:864)
 at org.apache.hadoop.hive.metastore.ObjectStore.getTable(ObjectStore.java:735)
 at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler$17.run(HiveMetaStore.java:1241)
 at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler$17.run(HiveMetaStore.java:1238)
 at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.executeWithRetry(HiveMetaStore.java:379)
 at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_table(HiveMetaStore.java:1238)
 at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$get_table.getResult(ThriftHiveMetastore.java:5017)
 at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$get_table.getResult(ThriftHiveMetastore.java:5005)
 at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:32)
 at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:34)
 at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:176)
 at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
 at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
 at java.lang.Thread.run(Thread.java:662)
Caused by: com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: Unknown column 'A0.FCOMMENT' in 'field list'
 at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
 at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
 at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
 at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
 at com.mysql.jdbc.Util.handleNewInstance(Util.java:411)
 at com.mysql.jdbc.Util.getInstance(Util.java:386)
 at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:1052)
 at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3609)
 at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3541)
 at com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:2002)
 at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:2163)
 at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2624)
 at com.mysql.jdbc.PreparedStatement.executeInternal(PreparedStatement.java:2127)
 at com.mysql.jdbc.PreparedStatement.executeQuery(PreparedStatement.java:2293)
 at org.apache.commons.dbcp.DelegatingPreparedStatement.executeQuery(DelegatingPreparedStatement.java:96)
 at org.apache.commons.dbcp.DelegatingPreparedStatement.executeQuery(DelegatingPreparedStatement.java:96)
 at org.datanucleus.store.rdbms.SQLController.executeStatementQuery(SQLController.java:463)
 at org.datanucleus.store.rdbms.scostore.RDBMSJoinListStore.listIterator(RDBMSJoinListStore.java:152)
 ... 21 more

Nested Throwables StackTrace:

com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: Unknown column 'A0.FCOMMENT' in 'field list'
 at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
 at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
 at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
 at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
 at com.mysql.jdbc.Util.handleNewInstance(Util.java:411)
 at com.mysql.jdbc.Util.getInstance(Util.java:386)
 at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:1052)
 at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3609)
 at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3541)
 at com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:2002)
 at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:2163)
 at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2624)
 at com.mysql.jdbc.PreparedStatement.executeInternal(PreparedStatement.java:2127)
 at com.mysql.jdbc.PreparedStatement.executeQuery(PreparedStatement.java:2293)
 at org.apache.commons.dbcp.DelegatingPreparedStatement.executeQuery(DelegatingPreparedStatement.java:96)
 at org.apache.commons.dbcp.DelegatingPreparedStatement.executeQuery(DelegatingPreparedStatement.java:96)
 at org.datanucleus.store.rdbms.SQLController.executeStatementQuery(SQLController.java:463)
 at org.datanucleus.store.rdbms.scostore.RDBMSJoinListStore.listIterator(RDBMSJoinListStore.java:152)
 at org.datanucleus.store.mapped.scostore.AbstractListStore.listIterator(AbstractListStore.java:84)
 at org.datanucleus.store.mapped.scostore.AbstractListStore.iterator(AbstractListStore.java:74)
 at org.datanucleus.store.types.sco.backed.List.loadFromStore(List.java:240)
 at org.datanucleus.store.types.sco.backed.List.iterator(List.java:493)
 at org.apache.hadoop.hive.metastore.ObjectStore.convertToFieldSchemas(ObjectStore.java:926)
 at org.apache.hadoop.hive.metastore.ObjectStore.convertToStorageDescriptor(ObjectStore.java:1019)
 at org.apache.hadoop.hive.metastore.ObjectStore.convertToStorageDescriptor(ObjectStore.java:1029)
 at org.apache.hadoop.hive.metastore.ObjectStore.convertToTable(ObjectStore.java:864)
 at org.apache.hadoop.hive.metastore.ObjectStore.getTable(ObjectStore.java:735)
 at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler$17.run(HiveMetaStore.java:1241)
 at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler$17.run(HiveMetaStore.java:1238)
 at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.executeWithRetry(HiveMetaStore.java:379)
 at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_table(HiveMetaStore.java:1238)
 at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$get_table.getResult(ThriftHiveMetastore.java:5017)
 at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$get_table.getResult(ThriftHiveMetastore.java:5005)
 at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:32)
 at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:34)
 at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:176)
 at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
 at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
 at java.lang.Thread.run(Thread.java:662)
--travis


On Tue, Mar 13, 2012 at 1:38 PM, Thomas Weise <[email protected]> wrote:

> Isn't this the JDO issue with mysql?
>
> https://issues.apache.org/jira/browse/HCATALOG-209
>
>
> On 3/13/12 12:24 PM, "Travis Crawford" <[email protected]> wrote:
>
> > Here's the stack trace (below). What's interesting is I just verified a job
> > with this filter statement works:
> >
> > f = FILTER data BY part_dt == '20120313T000000Z';
> >
> > Then the stack trace below happens when changing to:
> >
> > f = FILTER data BY part_dt >= '20120313T000000Z';
> >
> > This suggests it's not a thrift version issue. I've double-checked and
> > everything appears to be thrift 0.7.0 (I looked inside the jars for potential
> > packaging-related conflicts too).
> >
> > Additionally, here's a similar show partitions statement in the Hive
> > shell:
> >
> > hive> show partitions foo partition(part_dt='20120313T110000Z');
> > OK
> > part_dt=20120313T110000Z
> > Time taken: 0.257 seconds
> > hive> show partitions foo partition(part_dt>='20120313T110000Z');
> > FAILED: Parse Error: line 1:39 mismatched input '>=' expecting ) near 'part_dt' in show statement
> >
> > hive> show partitions foo partition(part_dt>'20120313T110000Z');
> > FAILED: Parse Error: line 1:39 mismatched input '>' expecting ) near 'part_dt' in show statement
> >
> > Would you expect this sort of range query to work?
> >
> >
> > Pig Stack Trace
> > ---------------
> > ERROR 2017: Internal error creating job configuration.
> >
> > org.apache.pig.impl.logicalLayer.FrontendException: ERROR 1066: Unable to open iterator for alias l
> >         at org.apache.pig.PigServer.openIterator(PigServer.java:857)
> >         at org.apache.pig.tools.grunt.GruntParser.processDump(GruntParser.java:655)
> >         at org.apache.pig.tools.pigscript.parser.PigScriptParser.parse(PigScriptParser.java:303)
> >         at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:188)
> >         at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:164)
> >         at org.apache.pig.tools.grunt.Grunt.exec(Grunt.java:84)
> >         at org.apache.pig.Main.run(Main.java:561)
> >         at org.apache.pig.Main.main(Main.java:111)
> >         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> >         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> >         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> >         at java.lang.reflect.Method.invoke(Method.java:597)
> >         at org.apache.hadoop.util.RunJar.main(RunJar.java:186)
> > Caused by: org.apache.pig.PigException: ERROR 1002: Unable to store alias l
> >         at org.apache.pig.PigServer.storeEx(PigServer.java:956)
> >         at org.apache.pig.PigServer.store(PigServer.java:919)
> >         at org.apache.pig.PigServer.openIterator(PigServer.java:832)
> >         ... 12 more
> > Caused by: org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobCreationException: ERROR 2017: Internal error creating job configuration.
> >         at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler.getJob(JobControlCompiler.java:736)
> >         at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler.compile(JobControlCompiler.java:261)
> >         at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.MapReduceLauncher.launchPig(MapReduceLauncher.java:151)
> >         at org.apache.pig.PigServer.launchPlan(PigServer.java:1270)
> >         at org.apache.pig.PigServer.executeCompiledLogicalPlan(PigServer.java:1255)
> >         at org.apache.pig.PigServer.storeEx(PigServer.java:952)
> >         ... 14 more
> > Caused by: java.io.IOException: org.apache.thrift.transport.TTransportException
> >         at org.apache.hcatalog.mapreduce.HCatInputFormat.setInput(HCatInputFormat.java:42)
> >         at org.apache.hcatalog.pig.HCatLoader.setLocation(HCatLoader.java:90)
> >         at org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.JobControlCompiler.getJob(JobControlCompiler.java:385)
> >         ... 19 more
> > Caused by: org.apache.thrift.transport.TTransportException
> >         at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:132)
> >         at org.apache.thrift.transport.TTransport.readAll(TTransport.java:84)
> >         at org.apache.thrift.transport.TFramedTransport.readFrame(TFramedTransport.java:129)
> >         at org.apache.thrift.transport.TFramedTransport.read(TFramedTransport.java:101)
> >         at org.apache.thrift.transport.TTransport.readAll(TTransport.java:84)
> >         at org.apache.thrift.protocol.TBinaryProtocol.readAll(TBinaryProtocol.java:378)
> >         at org.apache.thrift.protocol.TBinaryProtocol.readI32(TBinaryProtocol.java:297)
> >         at org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:204)
> >         at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:69)
> >         at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.recv_get_partitions_by_filter(ThriftHiveMetastore.java:1399)
> >         at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.get_partitions_by_filter(ThriftHiveMetastore.java:1383)
> >         at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.listPartitionsByFilter(HiveMetaStoreClient.java:680)
> >         at org.apache.hcatalog.mapreduce.InitializeInput.getSerializedHcatKeyJobInfo(InitializeInput.java:100)
> >         at org.apache.hcatalog.mapreduce.InitializeInput.setInput(InitializeInput.java:76)
> >         at org.apache.hcatalog.mapreduce.HCatInputFormat.setInput(HCatInputFormat.java:40)
> >         ... 21 more
> >
> > ================================================================================
> >
> >
> > --Travis
> >
> >
> >
> > On Tue, Mar 13, 2012 at 11:28 AM, Ashutosh Chauhan
> > <[email protected]> wrote:
> >
> >> Hey Travis,
> >>
> >> You should never get an exception. Difference in behavior will be
> whether
> >> filter got pushed from Pig into HCat or not, but that should not affect
> >> correctness.
> >> Can you paste the stack trace that you are getting?
> >>
> >> Ashutosh
> >>
> >> On Tue, Mar 13, 2012 at 11:14, Travis Crawford <[email protected]> wrote:
> >>
> >>> Hey HCat gurus -
> >>>
> >>> I'm having an issue getting range filters working and am curious if
> what
> >>> I'm trying to do makes sense. When filtering for explicit partitions
> >> (even
> >>> multiple explicit partitions) things work as expected. However, when
> >>> requesting a range it fails with a TTransportException.
> >>>
> >>> data = LOAD 'db.table' USING org.apache.hcatalog.pig.HCatLoader();
> >>>
> >>> -- filter statements that work
> >>> f = FILTER data BY part_dt == '20120313T000000Z';
> >>> f = FILTER data BY part_dt == '20120313T000000Z'
> >>>                OR part_dt == '20120313T010000Z';
> >>>
> >>> -- filter statements that do not work
> >>> f = FILTER data BY part_dt >= '20120313T000000Z';
> >>> f = FILTER data BY part_dt >= '20120313T000000Z'
> >>>               AND part_dt < '20120313T010000Z';
> >>>
> >>> When things are working correctly would you expect all four of these
> >> filter
> >>> statements to be valid?
> >>>
> >>> Thanks!
> >>> Travis
> >>>
> >>
>
>
