[
https://issues.apache.org/jira/browse/HADOOP-14142?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Steve Loughran resolved HADOOP-14142.
-------------------------------------
Resolution: Invalid
And closing as invalid. That prefix is your home directory, which is the default
path unless you put a trailing "/" on the path.
As stated before, if you are trying to show your S3 implementation works with
Hadoop, run the hadoop-aws test suite first. For Spark, [try these
tests|https://github.com/steveloughran/spark-cloud-examples]
Either way, immediately panicking with critical bug reports isn't going to fix
the problem. You've got partway there with the logging, but take advantage of
having the code: step through it to see what's happening. It's what we end up doing.
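To illustrate the point about the trailing "/": when the URI carries no path, S3A resolves it against the working directory, which defaults to the user's home directory (/user/&lt;username&gt;), and that is what shows up as the LIST prefix. A minimal sketch of that resolution rule — the object name {{PathQualify}} and the simplified logic are hypothetical, not Hadoop's actual {{Path.makeQualified}} code:

```scala
// Hypothetical sketch: how a bare s3a:// bucket URI resolves to the
// user's home directory, producing the prefix seen in the debug log.
object PathQualify {
  def qualify(uri: String, user: String): String = {
    val afterScheme = uri.stripPrefix("s3a://")
    val slash = afterScheme.indexOf('/')
    if (slash < 0) s"user/$user/"          // no path given: default to home dir
    else afterScheme.substring(slash + 1)  // explicit path: used as-is
  }
}

// qualify("s3a://myBkt98", "vardhan") yields "user/vardhan/",
// matching the prefix=user%2Fvardhan%2F in the logged GET request;
// qualify("s3a://myBkt98/", "vardhan") yields "" (bucket root).
```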
> S3A - Adding unexpected prefix
> ------------------------------
>
> Key: HADOOP-14142
> URL: https://issues.apache.org/jira/browse/HADOOP-14142
> Project: Hadoop Common
> Issue Type: Bug
> Components: fs/s3
> Affects Versions: 2.7.3
> Reporter: Vishnu Vardhan
> Priority: Minor
>
> Hi:
> S3A seems to add an unexpected prefix to my s3 path.
> Specifically, in the debug log below, the following line is unexpected:
> > GET /myBkt8/?max-keys=1&prefix=user%2Fvardhan%2F&delimiter=%2F HTTP/1.1
> It is not clear where the "prefix" is coming from and why.
> I executed the following commands
> sc.setLogLevel("DEBUG")
> sc.hadoopConfiguration.set("fs.s3a.impl","org.apache.hadoop.fs.s3a.S3AFileSystem")
> sc.hadoopConfiguration.set("fs.s3a.endpoint","webscaledemo.netapp.com:8082")
> sc.hadoopConfiguration.set("fs.s3a.access.key","")
> sc.hadoopConfiguration.set("fs.s3a.secret.key","")
> sc.hadoopConfiguration.set("fs.s3a.path.style.access","false")
> val s3Rdd = sc.textFile("s3a://myBkt98")
> s3Rdd.count()
> ----
--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]