[ https://issues.apache.org/jira/browse/SQOOP-3136?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15876757#comment-15876757 ]
Jarek Jarcec Cecho commented on SQOOP-3136:
-------------------------------------------

I would recommend sending an email to {{dev@sqoop.apache.org}} asking one of the active developers to take a look at the patch, [~yalovyyi].

> Sqoop should work well with not default file systems
> ----------------------------------------------------
>
>                 Key: SQOOP-3136
>                 URL: https://issues.apache.org/jira/browse/SQOOP-3136
>             Project: Sqoop
>          Issue Type: Improvement
>          Components: connectors/hdfs
>    Affects Versions: 1.4.5
>            Reporter: Illya Yalovyy
>            Assignee: Illya Yalovyy
>         Attachments: SQOOP-3136.patch
>
>
> Currently Sqoop assumes the default file system when it comes to IO operations, which makes it hard to use other FileSystem implementations as source or destination. Here is an example:
> {code}
> sqoop import --connect <JDBC CONNECTION> --table table1 --driver <JDBC DRIVER> --username root --password **** --delete-target-dir --target-dir s3a://some-bucket/tmp/sqoop
> ...
> 17/02/15 19:16:59 ERROR tool.ImportTool: Imported Failed: Wrong FS: s3a://some-bucket/tmp/sqoop, expected: hdfs://<DNS>:8020
> {code}

--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
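For context on why the "Wrong FS" error above occurs: in Hadoop, a {{FileSystem}} instance obtained from the default configuration (e.g. via {{FileSystem.get(conf)}}) is bound to the default URI, typically {{hdfs://...}}, and rejects any path whose scheme or authority differs. The usual remedy is to resolve the file system from the target path itself (e.g. {{path.getFileSystem(conf)}}). The following is a minimal, self-contained sketch of that scheme/authority check; the class and method names are illustrative and are not Sqoop's or Hadoop's actual code:

```java
import java.net.URI;

// Illustrative sketch (not Sqoop's actual code): why "Wrong FS" is raised.
// A file system bound to the default URI rejects paths whose scheme or
// authority does not match its own.
public class WrongFsCheck {
    // Returns true when the given path can be served by the file system
    // identified by fsUri.
    static boolean belongsTo(URI fsUri, URI path) {
        // A path with no scheme is taken to be relative to the bound file system.
        if (path.getScheme() == null) {
            return true;
        }
        return fsUri.getScheme().equals(path.getScheme())
            && (fsUri.getAuthority() == null
                || fsUri.getAuthority().equals(path.getAuthority()));
    }

    public static void main(String[] args) {
        URI defaultFs = URI.create("hdfs://namenode:8020");
        URI target = URI.create("s3a://some-bucket/tmp/sqoop");
        if (!belongsTo(defaultFs, target)) {
            // Mirrors the shape of the error in the issue description.
            System.out.println("Wrong FS: " + target + ", expected: " + defaultFs);
        }
    }
}
```

Resolving the {{FileSystem}} from each source/target path rather than from the default configuration is what allows a single job to mix {{hdfs://}} and {{s3a://}} URIs.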