Just keep in mind that every execution of such a command starts a JVM and is
generally heavyweight. Use WebHDFS if you can.

On Fri, Feb 26, 2016 at 9:13 AM -0800, "Shuai Lin" <[email protected]> wrote:

If you don't want to configure Hadoop on your Mesos slaves, the only workaround
I see is to write a "hadoop" script and put it on your PATH. It needs to support
the following usage patterns:
- hadoop version
- hadoop fs -copyToLocal s3n://path /target/directory/
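One possible sketch of that wrapper, assuming the AWS CLI is installed and
credentialed on every agent (environment variables or an instance profile); the
version string and error handling here are illustrative, not something the
fetcher strictly requires:

```shell
#!/bin/sh
# Stand-in "hadoop" command covering the two invocations the fetcher
# makes. Assumes the AWS CLI is present and can reach the bucket.
hadoop() {
  case "$1" in
    version)
      # Only needs to succeed; the version string is arbitrary.
      echo "Hadoop 2.7.0 (aws-cli shim)"
      ;;
    fs)
      if [ "$2" = "-copyToLocal" ]; then
        # The AWS CLI speaks s3://, not s3n://.
        src=$(printf '%s' "$3" | sed 's|^s3n://|s3://|')
        aws s3 cp "$src" "$4"
      else
        echo "unsupported fs invocation: $*" >&2
        return 1
      fi
      ;;
    *)
      echo "unsupported command: $*" >&2
      return 1
      ;;
  esac
}

# The probe the fetcher issues before any copy:
hadoop version
```

In practice you would save the function body as an executable file named
"hadoop" on each agent's PATH; the function wrapper above just makes the sketch
easy to exercise.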
On Sat, Feb 27, 2016 at 12:31 AM, Aaron Carey <[email protected]> wrote:

I was trying to avoid generating URLs for everything, as this would complicate
things a lot.

Is there a straightforward way to get the fetcher to do it directly?

From: haosdent [[email protected]]
Sent: 26 February 2016 16:27
To: user
Subject: Re: Downloading s3 uris

I think you could still pass an AWSAccessKeyId (a pre-signed URL) even if the object is private?
http://www.bucketexplorer.com/documentation/amazon-s3--how-to-generate-url-for-amazon-s3-files.html
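A minimal sketch of the V2 query-string signing the linked page describes,
producing a URL with the AWSAccessKeyId parameter. The bucket, key, and
credentials are placeholders (the keys are AWS's documented examples); note
that newer AWS regions only accept the V4 scheme, for which `aws s3 presign`
is the easier route:

```shell
#!/bin/sh
# Build a V2 pre-signed S3 URL of the form
#   ...?AWSAccessKeyId=...&Expires=...&Signature=...
# BUCKET, KEY, and both credentials are placeholders.
BUCKET=my-bucket
KEY=path/to/artifact.tar.gz
AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY

EXPIRES=$(( $(date +%s) + 3600 ))   # URL valid for one hour

# String-to-sign for V2 query-string auth: verb, (empty) Content-MD5
# and Content-Type, expiry, canonical resource.
STRING_TO_SIGN=$(printf 'GET\n\n\n%s\n/%s/%s' "$EXPIRES" "$BUCKET" "$KEY")

# Signature = URL-encoded base64(HMAC-SHA1(secret, string-to-sign)).
SIGNATURE=$(printf '%s' "$STRING_TO_SIGN" \
  | openssl dgst -sha1 -hmac "$AWS_SECRET_ACCESS_KEY" -binary \
  | openssl base64)
SIGNATURE=$(printf '%s' "$SIGNATURE" | sed 's/+/%2B/g; s|/|%2F|g; s/=/%3D/g')

echo "https://${BUCKET}.s3.amazonaws.com/${KEY}?AWSAccessKeyId=${AWS_ACCESS_KEY_ID}&Expires=${EXPIRES}&Signature=${SIGNATURE}"
```

The resulting https URL can then be handed to the fetcher like any other
http(s) URI, with no hadoop client on the agents.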


On Sat, Feb 27, 2016 at 12:25 AM, Abhishek Amralkar <[email protected]> wrote:



In that case, do we need to keep the bucket/files public?



-Abhishek





From: Zhitao Li <[email protected]>
Reply-To: "[email protected]" <[email protected]>
Date: Friday, 26 February 2016 at 8:23 AM
To: "[email protected]" <[email protected]>
Subject: Re: Downloading s3 uris

I haven't directly used the s3 download, but I think a workaround (if you don't
care about the ACLs on the files) is to use an http URL instead.



On Feb 26, 2016, at 8:17 AM, Aaron Carey <[email protected]> wrote:




I'm attempting to fetch files from s3 URIs in Mesos, but we're not using HDFS
in our cluster... however, I believe I need the client installed.



Is it possible to just have the client running without a full HDFS setup?



I haven't been able to find much information in the docs, could someone point 
me in the right direction?



Thanks!



Aaron

-- 
Best Regards,
Haosdent Huang