[
https://issues.apache.org/jira/browse/SQOOP-869?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13575269#comment-13575269
]
Jarek Jarcec Cecho commented on SQOOP-869:
------------------------------------------
I believe that this is not a bug, but a known limitation. Hive import is not
compatible with the {{--as-avrodatafile}} and {{--as-sequencefile}} arguments, as
described in our [user
guide|http://sqoop.apache.org/docs/1.4.2/SqoopUserGuide.html#_importing_data_into_hive].
However, it would be great if Sqoop threw an exception up front about the
incompatible arguments rather than failing with unrelated exceptions later.
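A fail-fast check along these lines could live in the option validation that runs
before any job is launched. The sketch below is only an illustration of the idea;
the class, method, and parameter names are made up and do not reflect Sqoop's
actual internals:
{code:java}
// Hypothetical up-front validation; names are illustrative, not Sqoop's API.
public final class HiveImportOptionCheck {

  /** Signals that mutually exclusive command-line arguments were combined. */
  public static class InvalidOptionsException extends Exception {
    public InvalidOptionsException(String msg) {
      super(msg);
    }
  }

  /**
   * Rejects unsupported combinations before any MapReduce job or Hive
   * script is started, instead of failing deep inside the Hive load step.
   */
  public static void validateHiveOptions(boolean hiveImport,
                                         boolean asSequenceFile,
                                         boolean asAvroDataFile)
      throws InvalidOptionsException {
    if (hiveImport && asSequenceFile) {
      throw new InvalidOptionsException(
          "--hive-import is not compatible with --as-sequencefile");
    }
    if (hiveImport && asAvroDataFile) {
      throw new InvalidOptionsException(
          "--hive-import is not compatible with --as-avrodatafile");
    }
  }

  private HiveImportOptionCheck() {
  }
}
{code}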
> Sqoop cannot append data to Hive in SequenceFile format
> --------------------------------------------------------
>
> Key: SQOOP-869
> URL: https://issues.apache.org/jira/browse/SQOOP-869
> Project: Sqoop
> Issue Type: Bug
> Components: hive-integration
> Affects Versions: 1.4.2
> Reporter: zhangguancheng
>
> To reproduce it, do the following:
> 1) sqoop import --hive-import --connect
> jdbc:oracle:thin:@###.###.###.###:###:orcl --username ### --password ###
> --target-dir /###/### --hive-home /###/### --hive-table ### --as-sequencefile
> --query "select # from "###"."###" where \$CONDITIONS" --create-hive-table
> --class-name ### --outdir /###/### --bindir /###/### --map-column-hive
> ###=STRING,###=BIGINT,###=BIGINT
> 2) sqoop import --hive-import --connect
> jdbc:oracle:thin:@###.###.###.###:###:orcl --username ### --password ###
> --target-dir /###/### --hive-home /###/### --hive-table ### --as-sequencefile
> --query "select # from "TEST1"."BAI" where \$CONDITIONS" -append
> --class-name ### --outdir /###/### --bindir /###/### --map-column-hive
> ###=STRING,###=BIGINT,###=BIGINT
> And the output of step 2) will be something like:
> {noformat}
> 12/05/04 23:47:07 INFO hive.HiveImport: OK
> 12/05/04 23:47:07 INFO hive.HiveImport: Time taken: 3.996 seconds
> 12/05/04 23:47:08 INFO hive.HiveImport: Loading data to table default.###
> 12/05/04 23:47:09 INFO hive.HiveImport: Failed with exception
> java.lang.RuntimeException: java.io.IOException: WritableName can't load
> class: ***
> 12/05/04 23:47:09 INFO hive.HiveImport: FAILED: Execution Error, return code
> 1 from org.apache.hadoop.hive.ql.exec.MoveTask
> 12/05/04 23:47:09 ERROR tool.ImportTool: Encountered IOException running
> import job: java.io.IOException: Hive exited with status 9
> at
> org.apache.sqoop.hive.HiveImport.executeExternalHiveScript(HiveImport.java:375)
> at org.apache.sqoop.hive.HiveImport.executeScript(HiveImport.java:315)
> at org.apache.sqoop.hive.HiveImport.importTable(HiveImport.java:227)
> at org.apache.sqoop.tool.ImportTool.importTable(ImportTool.java:415)
> at org.apache.sqoop.tool.ImportTool.run(ImportTool.java:476)
> at org.apache.sqoop.Sqoop.run(Sqoop.java:145)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
> at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:181)
> at org.apache.sqoop.Sqoop.runTool(Sqoop.java:220)
> at org.apache.sqoop.Sqoop.runTool(Sqoop.java:229)
> at org.apache.sqoop.Sqoop.main(Sqoop.java:238)
> at com.cloudera.sqoop.Sqoop.main(Sqoop.java:57)
> {noformat}
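For context on the error above: a Sqoop-written SequenceFile records the fully
qualified name of the generated record class (the {{--class-name}} value) in its
file header, and any reader, including Hive's load/move step, must be able to
resolve that class through Hadoop's {{WritableName}}. The snippet below is only an
illustration of that failure mode (it assumes you point it at a Sqoop-written
part file; it is not Sqoop or Hive code):
{code:java}
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.SequenceFile;

public class InspectSequenceFileHeader {
  public static void main(String[] args) throws IOException {
    Configuration conf = new Configuration();
    Path part = new Path(args[0]); // e.g. a part-m-00000 file under --target-dir
    FileSystem fs = part.getFileSystem(conf);

    // Opening the reader resolves the key/value class names recorded in the
    // file header via WritableName. If the Sqoop-generated class is not on the
    // classpath, this constructor fails with the same
    // "WritableName can't load class" RuntimeException seen in the log above.
    SequenceFile.Reader reader = new SequenceFile.Reader(fs, part, conf);
    try {
      System.out.println("key class:   " + reader.getKeyClassName());
      System.out.println("value class: " + reader.getValueClassName());
    } finally {
      reader.close();
    }
  }
}
{code}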