[ https://issues.apache.org/jira/browse/SQOOP-862?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

David Robson updated SQOOP-862:
-------------------------------

    Attachment: SQOOP-862.patch
    
> Hbase import fails if there is a row where all columns are null
> ---------------------------------------------------------------
>
>                 Key: SQOOP-862
>                 URL: https://issues.apache.org/jira/browse/SQOOP-862
>             Project: Sqoop
>          Issue Type: Bug
>            Reporter: David Robson
>            Assignee: David Robson
>         Attachments: SQOOP-862.patch
>
>
> If you try to import a table in which any row contains only null values
> (apart from the primary key), the import fails. For example, create the
> following table in Oracle:
> CREATE TABLE employee(id number primary key, test_number number);
> INSERT INTO employee values(1, 123);
> INSERT INTO employee values(2, null);
> COMMIT;
> Then run an import:
> sqoop import --connect jdbc:oracle:thin:@//HOSTNAME/SERVICE --username USERNAME --table EMPLOYEE --password PASSWORD --hbase-table EMPLOYEE --column-family tst --hbase-create-table
> The import fails with:
>  java.lang.IllegalArgumentException: No columns to insert
>       at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:400)
> Caused by: java.lang.IllegalArgumentException: No columns to insert
>       at org.apache.hadoop.hbase.client.HTable.validatePut(HTable.java:950)
>       at org.apache.hadoop.hbase.client.HTable.doPut(HTable.java:766)
>       at org.apache.hadoop.hbase.client.HTable.put(HTable.java:752)
>       at org.apache.sqoop.hbase.HBasePutProcessor.accept(HBasePutProcessor.java:127)
>       at org.apache.sqoop.mapreduce.DelegatingOutputFormat$DelegatingRecordWriter.write(DelegatingOutputFormat.java:128)
>       at org.apache.sqoop.mapreduce.DelegatingOutputFormat$DelegatingRecordWriter.write(DelegatingOutputFormat.java:1)
>       at org.apache.hadoop.mapred.MapTask$NewDirectOutputCollector.write(MapTask.java:598)
>       at org.apache.hadoop.mapreduce.task.TaskInputOutputContextImpl.write(TaskInputOutputContextImpl.java:89)
>       at org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.write(WrappedMapper.java:112)
>       at org.apache.sqoop.mapreduce.HBaseImportMapper.map(HBaseImportMapper.java:38)
>       at org.apache.sqoop.mapreduce.HBaseImportMapper.map(HBaseImportMapper.java:1)
>       at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:144)
>       at org.apache.sqoop.mapreduce.AutoProgressMapper.run(AutoProgressMapper.java:64)
>       at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:725)
>       at org.apache.hadoop.mapred.MapTask.run(MapTask.java:332)
>       at org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:232)
>       at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
>       at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
>       at java.util.concurrent.FutureTask.run(FutureTask.java:138)
>       at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>       at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>       at java.lang.Thread.run(Thread.java:662)
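
Note on the failure mechanism: HTable.validatePut() throws "No columns to insert" whenever a Put carries no cells, and a row whose non-key columns are all NULL ends up as exactly such an empty Put. The sketch below is an illustration of one way a put processor could guard against that, not the actual contents of SQOOP-862.patch; the class and method names (NullSafePutExample, putRow) are hypothetical, while HTable, Put, and Bytes and the calls on them come from the HBase client API.

// Illustration only -- a hedged sketch, not the contents of SQOOP-862.patch.
import java.io.IOException;
import java.util.Map;

import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class NullSafePutExample {

  /** Writes one row, skipping NULL columns and dropping Puts with no cells. */
  public static void putRow(HTable table, byte[] family, byte[] rowKey,
      Map<String, Object> fields) throws IOException {
    Put put = new Put(rowKey);
    for (Map.Entry<String, Object> e : fields.entrySet()) {
      Object val = e.getValue();
      if (val != null) {  // a NULL database column produces no HBase cell
        put.add(family, Bytes.toBytes(e.getKey()), Bytes.toBytes(val.toString()));
      }
    }
    // Without this check, a row whose non-key columns are all NULL reaches
    // HTable.validatePut() with no cells and triggers "No columns to insert".
    if (!put.isEmpty()) {
      table.put(put);
    }
  }
}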

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see: http://www.atlassian.com/software/jira
