Jurgen Van Gael created SQOOP-891:
-------------------------------------
Summary: Sqoop export from S3 to MySQL fails when S3 is not the
default filesystem.
Key: SQOOP-891
URL: https://issues.apache.org/jira/browse/SQOOP-891
Project: Sqoop
Issue Type: Bug
Components: tools
Affects Versions: 1.4.1-incubating
Environment: CDH4.1.3 on Amazon EC2
Reporter: Jurgen Van Gael
I recently tried to use Sqoop to export a Hive table that lives on S3 into my
MySQL server:
sqoop export --options-file config.txt --table _universe --export-dir s3n://key:secret@mybucket/universe --input-fields-terminated-by '\0001' -m 1 --input-null-string '\\N' --input-null-non-string '\\N'
My Sqoop runs on a CDH4 cluster on EC2. I was getting errors such as the following:
13/02/11 17:37:15 ERROR security.UserGroupInformation: PriviledgedActionException as:XXX (auth:SIMPLE) cause:java.io.FileNotFoundException: File does not exist: /universe/000000_0.snappy
13/02/11 17:37:15 ERROR tool.ExportTool: Encountered IOException running export job: java.io.FileNotFoundException: File does not exist: /universe/000000_0.snappy
Since the files do exist on S3, I was reminded of getting the same errors when
running Hive queries against this table. Hive was failing back then because of
a bug in CombineFileInputFormat when it is used against a non-default file
system. These issues have since been fixed in Hadoop:
https://issues.apache.org/jira/browse/MAPREDUCE-1806
https://issues.apache.org/jira/browse/MAPREDUCE-2704
I believe Sqoop uses its own version of CombineFileInputFormat but, as far as I
can tell from the latest sources in Git, it hasn't incorporated the above fixes.
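To make the failure mode concrete, here is a minimal sketch (not Sqoop's actual code; the bucket and file names are hypothetical, and S3 credentials are assumed to be configured) of the pattern the MAPREDUCE issues above address: rebuilding an input path from its URI path component and then resolving it against the default FileSystem instead of the FileSystem the path belongs to.
{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class NonDefaultFsSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Hypothetical S3 input file, analogous to the export dir above.
        // Assumes fs.s3n.awsAccessKeyId / fs.s3n.awsSecretAccessKey are set.
        Path s3Path = new Path("s3n://mybucket/universe/000000_0.snappy");

        // Buggy pattern: rebuilding the path from its URI path component
        // drops the s3n:// scheme and authority...
        Path stripped = new Path(s3Path.toUri().getPath());
        FileSystem defaultFs = FileSystem.get(conf);
        try {
            // ...so the default FileSystem (HDFS on this cluster) is asked
            // for /universe/000000_0.snappy, which does not exist there.
            defaultFs.getFileStatus(stripped);
        } catch (java.io.FileNotFoundException e) {
            // Essentially the same message as the Sqoop export above:
            // File does not exist: /universe/000000_0.snappy
            System.out.println(e.getMessage());
        }

        // Fixed pattern: resolve the path against its own FileSystem,
        // which is the direction of the MAPREDUCE-1806/2704 changes.
        FileSystem s3Fs = s3Path.getFileSystem(conf);
        System.out.println(s3Fs.getFileStatus(s3Path).getPath());
    }
}
{code}
Assuming HDFS is the default filesystem on the cluster, the first getFileStatus call should fail with the same "File does not exist" message seen in the export job, while the second resolves the file on S3.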