The sandbox directory structure is a bit deep...  See the "Where is the
sandbox?" section here:
http://mesos.apache.org/documentation/latest/sandbox/
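For concreteness, the layout described there can be assembled like this (all IDs below are hypothetical placeholders; the real ones show up in the Mesos UI or the agent's /state endpoint):

```shell
# Hypothetical IDs for illustration only; real ones come from the Mesos UI or
# the agent's /state endpoint. work_dir is whatever --work_dir the agent uses.
work_dir=/var/lib/mesos
agent_id=S0
framework_id=F0
executor_id=my-task
container_id=run-1
sandbox="$work_dir/slaves/$agent_id/frameworks/$framework_id/executors/$executor_id/runs/$container_id"
echo "$sandbox"
# Mesos also keeps a convenience symlink: .../executors/$executor_id/runs/latest
```

With the docker containerizer, that host directory is what gets mounted into the container at /mnt/mesos/sandbox.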


On Fri, Feb 26, 2016 at 10:15 AM, Aaron Carey <[email protected]> wrote:

> A second question for you all...
>
> I'm testing http uri downloads, and all the logs say that the file has
> downloaded (it even shows up in the mesos UI in the sandbox) but I can't
> find the file on disk anywhere. It doesn't appear in the docker container
> I'm running either (shouldn't it be in /mnt/mesos/sandbox?)
>
> Am I missing something here?
>
> Thanks for your help,
>
> Aaron
>
>
> ------------------------------
> *From:* Radoslaw Gruchalski [[email protected]]
> *Sent:* 26 February 2016 17:41
>
> *To:* [email protected]; [email protected]
> *Subject:* Re: Downloading s3 uris
>
> Just keep in mind that every execution of such a command starts a jvm and
> is, generally, heavyweight. Use WebHDFS if you can.
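To illustrate the WebHDFS alternative: a read there is a single HTTP request, with no JVM involved. A sketch (hostname, port, and path below are made up; 50070 is the stock Hadoop 2.x NameNode HTTP port):

```shell
# WebHDFS is a plain REST API, so fetching a file is one HTTP round trip
# instead of a JVM launch. Hostname, port, and path are hypothetical.
namenode=namenode.example.com:50070
hdfs_path=/data/myfile.tar.gz
url="http://$namenode/webhdfs/v1$hdfs_path?op=OPEN"
echo "$url"
# curl -L "$url" -o myfile.tar.gz   # -L follows the redirect to a DataNode
```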
>
>
> On Fri, Feb 26, 2016 at 9:13 AM -0800, "Shuai Lin" <[email protected]
> > wrote:
>
> If you don't want to configure hadoop on your mesos slaves, the only
>> workaround I see is to write a "hadoop" script and put it in your PATH. It
>> needs to support the following usage patterns:
>>
>> - hadoop version
>> - hadoop fs -copyToLocal s3n://path /target/directory/
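A minimal sketch of such a wrapper, delegating the actual copy to the AWS CLI (assumed to be installed and credentialed on every agent). It's written as a shell function here for testability; in practice it would be an executable file named "hadoop" on the PATH:

```shell
# Sketch only: assumes the aws CLI is installed and configured on each agent.
# In practice, save the body of this function as an executable file named
# "hadoop" somewhere on the agents' PATH.
hadoop() {
  case "$1" in
    version)
      # The fetcher only checks that this exits 0; the text is arbitrary.
      echo "Hadoop 2.7.1 (s3 wrapper stub)"
      ;;
    fs)
      # Expected form: hadoop fs -copyToLocal s3n://bucket/key /target/dir/
      if [ "$2" = "-copyToLocal" ]; then
        # The aws CLI speaks s3://, not s3n://, so rewrite the scheme.
        src=$(printf '%s' "$3" | sed 's|^s3n://|s3://|')
        aws s3 cp "$src" "$4"
      else
        echo "unsupported fs invocation: $*" >&2
        return 1
      fi
      ;;
    *)
      echo "unsupported command: $*" >&2
      return 1
      ;;
  esac
}
```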
>>
>> On Sat, Feb 27, 2016 at 12:31 AM, Aaron Carey <[email protected]> wrote:
>>
>>> I was trying to avoid generating urls for everything as this will
>>> complicate things a lot.
>>>
>>> Is there a straightforward way to get the fetcher to do it directly?
>>>
>>> ------------------------------
>>> *From:* haosdent [[email protected]]
>>> *Sent:* 26 February 2016 16:27
>>> *To:* user
>>> *Subject:* Re: Downloading s3 uris
>>>
>>> I think you could still pass an AWSAccessKeyId in the url if the file is
>>> private?
>>> http://www.bucketexplorer.com/documentation/amazon-s3--how-to-generate-url-for-amazon-s3-files.html
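What the linked page describes is the legacy (signature v2) query-string auth: the url carries AWSAccessKeyId, Expires, and a Signature, so the object can stay private. A hand-rolled sketch with fake credentials and bucket (requires openssl; note that newer regions require signature v4, where `aws s3 presign` is the easier route):

```shell
# Fake credentials and bucket, for illustration only. Requires openssl.
access_key=AKIAFAKEFAKEFAKEFAKE
secret_key=fakeSecretKeyForIllustrationOnly
bucket=my-bucket
key=path/to/artifact.tar.gz
expires=$(( $(date +%s) + 3600 ))            # url valid for one hour
# SigV2 string-to-sign: verb, content-md5, content-type, expires, resource
string_to_sign=$(printf 'GET\n\n\n%s\n/%s/%s' "$expires" "$bucket" "$key")
signature=$(printf '%s' "$string_to_sign" \
  | openssl dgst -sha1 -hmac "$secret_key" -binary | openssl base64)
# Percent-encode the characters base64 can emit (+, /, =)
signature=$(printf '%s' "$signature" | sed 's/+/%2B/g; s|/|%2F|g; s/=/%3D/g')
url="https://$bucket.s3.amazonaws.com/$key?AWSAccessKeyId=$access_key&Expires=$expires&Signature=$signature"
echo "$url"
```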
>>>
>>> On Sat, Feb 27, 2016 at 12:25 AM, Abhishek Amralkar <
>>> [email protected]> wrote:
>>>
>>>> In that case do we need to keep bucket/files public?
>>>>
>>>> -Abhishek
>>>>
>>>> From: Zhitao Li <[email protected]>
>>>> Reply-To: "[email protected]" <[email protected]>
>>>> Date: Friday, 26 February 2016 at 8:23 AM
>>>> To: "[email protected]" <[email protected]>
>>>> Subject: Re: Downloading s3 uris
>>>>
>>>> Haven't directly used s3 download, but I think a workaround (if you
>>>> don't care about ACLs on the files) is to use an http url
>>>> <http://stackoverflow.com/questions/18239567/how-can-i-download-a-file-from-an-s3-bucket-with-wget>
>>>> instead.
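For completeness, the plain https form for a public object looks like this (bucket and key are made up; this relies on the object's ACL allowing anonymous reads):

```shell
# Virtual-hosted-style url for a public S3 object (bucket/key are made up).
# Only works if the object's ACL permits anonymous GETs.
bucket=my-bucket
key=path/to/artifact.tar.gz
url="https://$bucket.s3.amazonaws.com/$key"
echo "$url"
# The Mesos fetcher (or wget/curl) can then download it like any http uri.
```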
>>>>
>>>> On Feb 26, 2016, at 8:17 AM, Aaron Carey <[email protected]> wrote:
>>>>
>>>> I'm attempting to fetch files from s3 uris in mesos, but we're not
>>>> using hdfs in our cluster... however I believe I need the client installed.
>>>>
>>>> Is it possible to just have the client running without a full hdfs
>>>> setup?
>>>>
>>>> I haven't been able to find much information in the docs, could someone
>>>> point me in the right direction?
>>>>
>>>> Thanks!
>>>>
>>>> Aaron
>>>>
>>>>
>>>>
>>>
>>>
>>> --
>>> Best Regards,
>>> Haosdent Huang
>>>
>>
>>
