Jamie,

The general philosophy is that services should depend very little on the base 
image (some would say not at all). There has been an HDFS client on the base 
image which we have leveraged while we worked on higher priorities, but it was 
always our intent to remove it. Another example is the Java JRE on the base 
image: it would be a bad idea to get addicted to it :)

That said, it has always been our intention to support different protocols, 
such as retrieving artifacts from HDFS, which other services (such as Chronos) 
could leverage. It makes sense that we support S3 retrieval as well. It does 
mean that we need a pluggable way to hook in solutions for protocols other 
than HTTP. We have had some discussion around it and have a design idea in 
place. At this point it is an issue of priority and timing.
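
In the meantime, since 1.7 still ships the hadoop tools jars, something along 
these lines may work as an unsupported stopgap (HADOOP_HOME and the bucket 
name are placeholders, not a configuration we ship or test):

```shell
# Unsupported stopgap for a DC/OS 1.7 agent. HADOOP_HOME is a placeholder;
# point it at wherever the hadoop distribution actually lives on the agent.
HADOOP_HOME=/opt/hadoop

# Put the bundled hadoop-aws and aws-java-sdk jars on the hadoop classpath.
export HADOOP_CLASSPATH="$HADOOP_HOME/share/hadoop/tools/lib/*"

# With a read-only IAM instance profile on the box, s3a picks up credentials
# from instance metadata, so no keys need to be configured here.
hadoop fs -copyToLocal s3a://my-bucket/artifact.tar.gz /tmp/
```

Again, anything built this way will break when the hadoop binary goes away in 
1.8, so treat it as a bridge until the pluggable fetcher work lands.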

ken
> On May 10, 2016, at 1:21 PM, Briant, James <[email protected]> 
> wrote:
> 
> I’m happy to have a default IAM role on the box that can read-only fetch from 
> my s3 bucket. s3a gets the credentials from AWS instance metadata. It works.
> 
> If hadoop is gone, does that mean that hdfs: URIs don’t work either?
> 
> Are you saying dcos and mesos are diverging? Mesos explicitly supports hdfs 
> and s3.
> 
> In the absence of S3, how do you propose I make large binaries available to 
> my cluster, and only to my cluster, on AWS?
> 
> Jamie
> 
> From: Cody Maloney <[email protected]>
> Reply-To: "[email protected]" <[email protected]>
> Date: Tuesday, May 10, 2016 at 10:58 AM
> To: "[email protected]" <[email protected]>
> Subject: Re: Enable s3a for fetcher
> 
> The s3 fetcher stuff inside of DC/OS is not supported. The `hadoop` binary 
> has been entirely removed from DC/OS 1.8 already. There have been various 
> proposals to make the mesos fetcher much more pluggable / extensible 
> (https://issues.apache.org/jira/browse/MESOS-2731, for instance). 
> 
> Generally speaking, people want a lot of different sorts of fetching, and 
> there are all sorts of questions about how to properly get auth to the 
> various chunks (if you're using s3a:// you presumably need to get credentials 
> there somehow; otherwise you could just use http://). That needs to be 
> designed and built into Mesos and DC/OS before this stuff can be used.
> 
> Cody
> 
> On Tue, May 10, 2016 at 9:55 AM Briant, James <[email protected]> wrote:
>> I want to use s3a: URLs in the fetcher. I’m using DC/OS 1.7, which has 
>> hadoop 2.5 on its agents. This version has the necessary hadoop-aws and 
>> aws-sdk jars:
>> 
>> hadoop--afadb46fe64d0ee7ce23dbe769e44bfb0767a8b9]$ ls 
>> usr/share/hadoop/tools/lib/ | grep aws
>> aws-java-sdk-1.7.4.jar
>> hadoop-aws-2.5.0-cdh5.3.3.jar
>> 
>> What config/scripts do I need to hack to get these guys on the classpath so 
>> that "hadoop fs -copyToLocal" works?
>> 
>> Thanks,
>> Jamie
