Subject: Re: Webhdfs and S3
You can start two HttpFS servers (or even more), and let one set fs.defaultFS to
s3a:// and the other to hdfs://.
Will that work for you? Or is this not what you need?
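For example, a rough sketch of the S3-facing instance's config (untested; the
bucket name and credentials are placeholders): give each HttpFS instance its own
HADOOP_CONF_DIR, and in the S3-facing core-site.xml set

    <configuration>
      <property>
        <name>fs.defaultFS</name>
        <value>s3a://your-bucket</value>  <!-- placeholder bucket -->
      </property>
      <!-- S3A credentials, unless they come from the AWS provider chain -->
      <property>
        <name>fs.s3a.access.key</name>
        <value>...</value>
      </property>
      <property>
        <name>fs.s3a.secret.key</name>
        <value>...</value>
      </property>
    </configuration>

while the HDFS-facing instance keeps fs.defaultFS pointed at the namenode (e.g.
hdfs://namenode:8020). The two servers just need distinct HTTP ports
(httpfs.http.port in httpfs-site.xml; 14000 is the default).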
On Wed, May 22, 2019 at 3:40 PM Joseph Henry <joseph.he...@sas.com> wrote:
> We need to be able to use native hdfs as well as S3 in the same cluster. If we
> change fs.defaultFS then I would not be able to access the HDFS storage.
>
> *From:* Wei-Chiu Chuang
> *Sent:* Wednesday, May 22, 2019 9:36 AM
> *To:* Joseph Henry
> *Cc:* user@hadoop.apache.org
> *Subject:* Re: Webhdfs and S3
>
> I've never tried, but it seems possible to start an HttpFS server with
> fs.defaultFS = s3a://your-bucket. The HttpFS server speaks the WebHDFS
> protocol, so your webhdfs client can keep using webhdfs. And then for each
> webhdfs request, the HttpFS server translates it into the corresponding
> call against S3.
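As a concrete illustration of that translation (an untested sketch; host, port,
user name, and path are placeholders), a plain WebHDFS call against such an
HttpFS instance would then be answered out of the bucket:

    # List the bucket root through the WebHDFS REST API (14000 is the
    # HttpFS default port)
    curl "http://httpfs-host:14000/webhdfs/v1/?op=LISTSTATUS&user.name=hdfs"

The response is the usual WebHDFS JSON FileStatuses listing, except the entries
come from s3a://your-bucket rather than from HDFS.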
> I have a requirement to support accessing S3 buckets from hdfs. We
> can do this with the Java API using the s3a:// scheme, but also need a way
> to access the same files in S3 via the HDFS REST API.
>
> Is there a way to access the data stored in S3 via WEBHDFS?
>
> Thanks,
>
> Joseph Henry.
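For reference, the Java API access mentioned above looks roughly like this (a
minimal sketch; the bucket name is a placeholder, and it assumes hadoop-aws and
its AWS SDK dependency are on the classpath, with credentials supplied via the
fs.s3a.* keys or the AWS provider chain):

    import java.net.URI;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class S3AListing {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Resolve the S3A filesystem by explicit URI, leaving fs.defaultFS
            // untouched so native HDFS access keeps working alongside it.
            FileSystem s3 = FileSystem.get(URI.create("s3a://your-bucket/"), conf);
            for (FileStatus status : s3.listStatus(new Path("/"))) {
                System.out.println(status.getPath());
            }
        }
    }

Because the filesystem is resolved by URI rather than through fs.defaultFS,
hdfs:// and s3a:// paths can be used side by side, which is the constraint
raised above.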