Hi all,

I'm trying to integrate our Hadoop infrastructure with another team's
Informix database...

I have the JDBC drivers in place and test queries against most tables work
fine, but we're hitting a couple of stumbling blocks...

Some of the tables have grown in size and the primary key has been changed
from serial to serial8 to cope with this.
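
For context, the imports are launched as plain sqoop import jobs via the
Informix JDBC driver, roughly along these lines (the host, port, database,
server, user and table names below are placeholders rather than our real ones):

    # connection details and names below are placeholders
    sqoop import \
        --driver com.informix.jdbc.IfxDriver \
        --connect "jdbc:informix-sqli://dbhost:9088/ourdb:INFORMIXSERVER=ol_server" \
        --username sqoop_user -P \
        --table big_table \
        --target-dir /data/informix/big_table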

When a query is run against such a table, a Java exception is raised with
the following stack trace:

java.io.IOException: SQLException in nextKeyValue
        at org.apache.sqoop.mapreduce.db.DBRecordReader.nextKeyValue(DBRecordReader.java:265)
        at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:458)
        at org.apache.hadoop.mapreduce.task.MapContextImpl.nextKeyValue(MapContextImpl.java:76)
        at org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.nextKeyValue(WrappedMapper.java:85)
        at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:139)
        at org.apache.sqoop.mapreduce.AutoProgressMapper.run(AutoProgressMapper.java:182)
        at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:645)
        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:325)
        at org.apache.hadoop.mapred.Child$4.run(Child.java:268)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:396)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1332)
        at org.apache.hadoop.mapred.Child.main(Child.java:262)


Has anyone encountered this, or does anyone have ideas on how to work around it?


Kind regards,


James
