Import fails with Unknown SQL datatype exception
------------------------------------------------

                 Key: SQOOP-359
                 URL: https://issues.apache.org/jira/browse/SQOOP-359
             Project: Sqoop
          Issue Type: Bug
          Components: connectors/generic
    Affects Versions: 1.3.0
            Reporter: Arvind Prabhakar


To reproduce this, run an import using a query, with the number of mappers set to 1 
and no boundary query specified. For example:

{code}
$ sqoop import --connect jdbc:mysql://localhost/testdb --username test --password **** \
    --query 'SELECT TDX.A, TDX.B FROM TDX WHERE $CONDITIONS' \
    --target-dir /user/arvind/MYSQL/TDX1 -m 1
{code}

This import will fail as follows:

{code}
11/10/06 15:37:59 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-arvind/compile/190f858175a9f99756e503727c931450/QueryResult.jar
11/10/06 15:37:59 INFO mapreduce.ImportJobBase: Beginning query import.
11/10/06 15:38:00 INFO db.DataDrivenDBInputFormat: BoundingValsQuery: SELECT MIN(null), MAX(null) FROM (SELECT TDX.A, TDX.B FROM TDX WHERE  (1 = 1) ) AS t1
11/10/06 15:38:00 INFO mapred.JobClient: Cleaning up the staging area hdfs://localhost/opt/site/cdh3u1/hadoop/data/tmp/mapred/staging/arvind/.staging/job_201110061528_0004
11/10/06 15:38:00 ERROR tool.ImportTool: Encountered IOException running import job: java.io.IOException: Unknown SQL data type: -3
        at com.cloudera.sqoop.mapreduce.db.DataDrivenDBInputFormat.getSplits(DataDrivenDBInputFormat.java:211)
        at org.apache.hadoop.mapred.JobClient.writeNewSplits(JobClient.java:944)
        at org.apache.hadoop.mapred.JobClient.writeSplits(JobClient.java:961)
        at org.apache.hadoop.mapred.JobClient.access$500(JobClient.java:170)
        at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:880)
        at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:833)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:396)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1127)
        at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:833)
        at org.apache.hadoop.mapreduce.Job.submit(Job.java:476)
        at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:506)
        at com.cloudera.sqoop.mapreduce.ImportJobBase.runJob(ImportJobBase.java:123)
        at com.cloudera.sqoop.mapreduce.ImportJobBase.runImport(ImportJobBase.java:183)
        at com.cloudera.sqoop.manager.SqlManager.importQuery(SqlManager.java:450)
        at com.cloudera.sqoop.tool.ImportTool.importTable(ImportTool.java:384)
        at com.cloudera.sqoop.tool.ImportTool.run(ImportTool.java:455)
        at com.cloudera.sqoop.Sqoop.run(Sqoop.java:146)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
        at com.cloudera.sqoop.Sqoop.runSqoop(Sqoop.java:182)
        at com.cloudera.sqoop.Sqoop.runTool(Sqoop.java:221)
        at com.cloudera.sqoop.Sqoop.runTool(Sqoop.java:230)
        at com.cloudera.sqoop.Sqoop.main(Sqoop.java:239)

{code}
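
For context, the exception comes from the type dispatch in {{DataDrivenDBInputFormat.getSplits()}} (line 211 in the trace above). Below is a simplified, hypothetical sketch, not the actual Sqoop/Hadoop source, of how an unhandled {{java.sql.Types}} code ends up as this error; note that {{Types.VARBINARY}} equals -3, the value reported in the log:

{code}
// Hypothetical sketch only: illustrates how a java.sql.Types code with no
// matching case falls through to an "Unknown SQL data type" IOException.
import java.io.IOException;
import java.sql.Types;

class SplitTypeDispatchSketch {
  static String splitterFor(int sqlDataType) throws IOException {
    switch (sqlDataType) {
      case Types.INTEGER:
      case Types.BIGINT:
        return "integer splitter";
      case Types.VARCHAR:
      case Types.CHAR:
        return "text splitter";
      // ... other handled types elided ...
      default:
        // The MIN(null)/MAX(null) columns yield a type with no handler here.
        throw new IOException("Unknown SQL data type: " + sqlDataType);
    }
  }

  public static void main(String[] args) {
    try {
      splitterFor(Types.VARBINARY); // Types.VARBINARY == -3, as in the log
    } catch (IOException e) {
      System.out.println(e.getMessage()); // prints: Unknown SQL data type: -3
    }
  }
}
{code}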

The problem seems to be that the bounding values query uses {{null}} as the 
column name when figuring out the data type of the split column.
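
One way to confirm what the MySQL driver reports for the {{MIN(null)}}/{{MAX(null)}} columns is to run the bounding query directly and inspect its result metadata. This is a diagnostic sketch only, assuming the MySQL JDBC driver is on the classpath; the URL, credentials, and table are placeholders taken from the reproduction command above:

{code}
// Diagnostic sketch (not part of Sqoop): prints the JDBC type code the driver
// reports for the first column of the bounding values query.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.ResultSetMetaData;
import java.sql.Statement;
import java.sql.Types;

public class BoundingQueryTypeCheck {
  public static void main(String[] args) throws Exception {
    try (Connection conn = DriverManager.getConnection(
             "jdbc:mysql://localhost/testdb", "test", "****");
         Statement stmt = conn.createStatement();
         ResultSet rs = stmt.executeQuery(
             "SELECT MIN(null), MAX(null) FROM "
             + "(SELECT TDX.A, TDX.B FROM TDX WHERE (1 = 1)) AS t1")) {
      ResultSetMetaData meta = rs.getMetaData();
      // The stack trace above suggests the driver reports -3 here, which is
      // java.sql.Types.VARBINARY.
      System.out.println("Reported column type: " + meta.getColumnType(1)
          + " (Types.VARBINARY = " + Types.VARBINARY + ")");
    }
  }
}
{code}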

