Re: Fwd: Unable to run spark examples on mesos 1.0

2016-08-05 Thread max square
Thanks Stephen, that did the job for me. After adding JAVA_HOME to
hadoop-layout.sh, I was able to run the Spark job successfully.
@mgummelt - I did not set executor_environment_variables; however, I can
now see JAVA_HOME when I print out `env` in the driver.
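For reference, `executor_environment_variables` is an agent-level Mesos flag
that takes a JSON object (it is covered by the Mesos configuration page linked
in the reply below). A sketch of what setting it would look like; the master
address, work dir, and JDK path are placeholders, not values from this thread:

```shell
# Hypothetical agent invocation -- <zk-host> and the JDK path are placeholders.
# --executor_environment_variables passes the given variables to every executor
# the agent launches. Note the fetcher itself runs with the agent's own
# environment, which is likely why the hadoop-layout.sh fix (rather than this
# flag) is what resolved the fetch failure here.
mesos-agent --master=zk://<zk-host>:2181/mesos \
  --work_dir=/var/lib/mesos \
  --executor_environment_variables='{"JAVA_HOME":"/usr/lib/jvm/java-8-openjdk-amd64"}'
```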


On Fri, Aug 5, 2016 at 1:53 PM, mgumm...@mesosphere.io <
mgumm...@mesosphere.io> wrote:

> What is your --executor_environment_variables set to?
> http://mesos.apache.org/documentation/latest/configuration/
>
> Can you print out your `env` in the driver to verify it has the expected
> JAVA_HOME
>
> On 2016-08-04 12:28 (-0700), max square wrote:
> > Hey guys,
> > I was trying out the Spark 2.0 examples on a Mesos + Hadoop cluster, but
> > they keep failing with the following error message:
> >
> > I0803 19:46:53.848696 12494 fetcher.cpp:498] Fetcher Info:
> > > {"cache_directory":"\/tmp\/mesos\/fetch\/slaves\/587226cc-bece-422a-bb93-e3ef49075642-S1\/root","items":[{"action":"BYPASS_CACHE","uri":{"extract":true,"value":"hdfs:\/\/testcluster\/spark-examples_2.11-2.0.0.jar"}},{"action":"BYPASS_CACHE","uri":{"extract":true,"value":"hdfs:\/\/testcluster\/spark-2.0.0-bin-hdfs-2.6.0-cdh5.7.1.tgz"}}],"sandbox_directory":"\/vol\/mesos\/data\/slaves\/587226cc-bece-422a-bb93-e3ef49075642-S1\/frameworks\/587226cc-bece-422a-bb93-e3ef49075642-0017\/executors\/driver-20160803194649-0001\/runs\/b1e9a92e-f004-4cdc-b936-52b32593d39f","user":"root"}
> >
> > I0803 19:46:53.850719 12494 fetcher.cpp:409] Fetching URI
> > > 'hdfs://testcluster/spark-examples_2.11-2.0.0.jar'
> >
> > I0803 19:46:53.850731 12494 fetcher.cpp:250] Fetching directly into the
> > > sandbox directory
> >
> > I0803 19:46:53.850746 12494 fetcher.cpp:187] Fetching URI
> > > 'hdfs://testcluster/spark-examples_2.11-2.0.0.jar'
> > > E0803 19:46:53.860776 12494 shell.hpp:106] Command
> > > '/usr/lib/hadoop/bin/hadoop version 2>&1' failed; this is the output:
> > > Error: JAVA_HOME is not set and could not be found.
> > > Failed to fetch 'hdfs://testcluster/spark-examples_2.11-2.0.0.jar': Failed
> > > to create HDFS client: Failed to execute '/usr/lib/hadoop/bin/hadoop
> > > version 2>&1'; the command was either not found or exited with a non-zero
> > > exit status: 1
> > > Failed to synchronize with agent (it's probably exited)
> >
> >
> > To start out, I tried the hadoop command that was giving the error on the
> > agents and was able to replicate it: running "sudo -u root
> > /usr/lib/hadoop/bin/hadoop version 2>&1" gave me the same JAVA_HOME-not-set
> > error. After I fixed that and restarted the agents, running the Spark
> > example still gave me the same error.
> >
> > I ran the same examples on mesos 0.28.2, and it ran fine.
> >
> > Any help regarding this would be appreciated.
> >
> > *Additional Info:*
> > mesos version - 1.0.0
> > hadoop version - 2.6.0-cdh5.7.2
> > spark version - 2.0.0
> >
> > Command used to run spark example - ./bin/spark-submit --class
> > org.apache.spark.examples.SparkPi --master mesos://:7077
> > --deploy-mode cluster --executor-memory 2G --total-executor-cores 4
> > hdfs://testcluster/spark-examples_2.11-2.0.0.jar 100
> >
>
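As an alternative to agent-side configuration, Spark can forward environment
variables to Mesos executors via its `spark.executorEnv.*` settings. A hedged
sketch of the quoted submission with JAVA_HOME set that way; `<master-host>`
and the JDK path are placeholders, and in cluster mode the fetcher on the
agents still needs JAVA_HOME available (e.g. via the hadoop-layout.sh fix):

```shell
# Sketch only -- <master-host> and the JDK path are placeholders.
# spark.executorEnv.NAME sets NAME in each executor's environment.
./bin/spark-submit \
  --class org.apache.spark.examples.SparkPi \
  --master mesos://<master-host>:7077 \
  --deploy-mode cluster \
  --executor-memory 2G --total-executor-cores 4 \
  --conf spark.executorEnv.JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64 \
  hdfs://testcluster/spark-examples_2.11-2.0.0.jar 100
```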



Re: Fwd: Unable to run spark examples on mesos 1.0

2016-08-05 Thread Stephen Gran
Hi,

You'll need a working hadoop install before that will work.  Try
adding JAVA_HOME and so forth to hadoop/libexec/hadoop-layout.sh.
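A minimal sketch of that fix, assuming a CDH-style layout under
/usr/lib/hadoop and an OpenJDK path; both are assumptions, so adjust to your
agents:

```shell
# hadoop-layout.sh is sourced by bin/hadoop, so exporting JAVA_HOME there
# makes the command work even when the caller's environment is empty --
# which is the situation the Mesos fetcher is in.
echo 'export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64' \
  >> /usr/lib/hadoop/libexec/hadoop-layout.sh

# Verify as the task user, then restart the agents:
sudo -u root /usr/lib/hadoop/bin/hadoop version
```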

Cheers,


-- 
Stephen Gran
Senior Technical Architect

picture the possibilities | piksel.com