Here is my app def:

https://gist.github.com/skinney6/a63ff7f0f8311faaabaf0399702a403f



________________________________
Scott Kinney | DevOps
stem <http://www.stem.com/>   |   m  510.282.1299
100 Rollins Road, Millbrae, California 94030

This e-mail and/or any attachments contain Stem, Inc. confidential and 
proprietary information and material for the sole use of the intended 
recipient(s). Any review, use or distribution that has not been expressly 
authorized by Stem, Inc. is strictly prohibited. If you are not the intended 
recipient, please contact the sender and delete all copies. Thank you.
________________________________
From: haosdent <[email protected]>
Sent: Wednesday, May 25, 2016 8:42 PM
To: user
Subject: Re: Hadoop install location to use s3 uri

It looks like it could not read HADOOP_HOME correctly. Otherwise the error
message would be "/path/to/unpacked/hadoop/bin/hadoop version 2>&1". Could you
show your Marathon application definition?
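The exit-status clue above can be checked directly on the agent host. A minimal
sketch (the missing command name is illustrative): a POSIX shell returns 127
specifically when a command cannot be found at all, which is why the bare
"hadoop version" in the error means hadoop was not on the PATH of the process
running the fetcher.

```shell
# Exit status 127 means "command not found" in a POSIX shell:
sh -c 'no-such-command version 2>&1'
echo "exit status: $?"    # prints: exit status: 127

# If HADOOP_HOME were being picked up, the fetcher would run the full
# path instead, e.g.:
#   sh -c '$HADOOP_HOME/bin/hadoop version 2>&1'
# and a failure would name that path in the error message.
```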

On Thu, May 26, 2016 at 11:31 AM, Scott Kinney
<[email protected]> wrote:

I want to use the s3 URI, but I guess I need Hadoop on the slave. I've unpacked
the Hadoop tarball and added 'HADOOP_HOME=/path/to/unpacked/hadoop' to the
Marathon app definition's environment, but Mesos still says it can't find
hadoop.
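For context, a minimal sketch of such an app definition with the environment
variable set (the id, cmd, and resource values are placeholders, not taken
from my actual definition):

```json
{
  "id": "/my-app",
  "cmd": "./run.sh",
  "cpus": 0.5,
  "mem": 256,
  "env": {
    "HADOOP_HOME": "/path/to/unpacked/hadoop"
  },
  "uris": ["s3n://bucket/docker.tar.gz"]
}
```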


Failed to fetch 's3n://bucket/docker.tar.gz': Failed to create HDFS client: 
Failed to execute 'hadoop version 2>&1'; the command was either not found or 
exited with a non-zero exit status: 127


Also, is the S3 URI format correct: s3n://bucketname/keyname ?


Thanks!




--
Best Regards,
Haosdent Huang
