Hi Ajatix.
Yes, HADOOP_HOME is set on the nodes and I did reload my bash configuration.
As I said, adding MESOS_HADOOP_HOME did not work.
But what is causing the original error: "java.lang.Error:
java.io.IOException: failure to login"?
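(For reference, I verified the variable with something like the following on
each node; the hostname is a placeholder:

    # source .bashrc explicitly, since a bare ssh command may not read it
    ssh <node> 'source ~/.bashrc; echo $HADOOP_HOME'

and it prints the correct path everywhere.)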
--
Thanks
--
I do assume that you've added HADOOP_HOME to your environment variables.
Otherwise, you could fill in the actual path of Hadoop on your cluster. Also,
did you reload your bash configuration?
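Something along these lines; the path here is only an example, so use the
actual Hadoop location on your nodes:

    # in ~/.bashrc on every node
    export HADOOP_HOME=/usr/lib/hadoop

    # reload it in the current shell and verify
    source ~/.bashrc
    echo $HADOOP_HOME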
--
Thanks for the reply, Ajatix.
Adding MESOS_HADOOP_HOME to my .bashrc gives an error while trying to start
mesos-master:

    Failed to load unknown flag 'hadoop_home'
    Usage: lt-mesos-master [...]

Presumably mesos-master reads the MESOS_-prefixed variable as a
--hadoop_home flag, which it doesn't recognize. Couldn't get any help on
this from Google. Any suggestions?
--
Thanks.
--
Since $HADOOP_HOME is deprecated, try adding it to the Mesos configuration
file.
Add `export MESOS_HADOOP_HOME=$HADOOP_HOME` to ~/.bashrc and that should
solve your error.
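That is, roughly:

    # the single quotes keep $HADOOP_HOME unexpanded in the file
    echo 'export MESOS_HADOOP_HOME=$HADOOP_HOME' >> ~/.bashrc
    source ~/.bashrc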
--
Thanks for the reply, Akhil.
I saw the logs in /tmp/mesos and found that my tar.gz was not properly
created. I corrected that, but now I get another error which I can't find an
answer for on Google.
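(In case it helps anyone else, I sanity-checked the archive with something
like the following, where the filename is just mine:

    # list the contents without extracting; this fails loudly on a corrupt file
    tar -tzf spark-0.9.1.tar.gz > /dev/null

A broken archive errors out here.)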
The error is pretty much the same:

    org.apache.spark.SparkException: Job aborted: Task 0.0:6 failed 4 times ...

--
http://spark.apache.org/docs/latest/running-on-mesos.html#troubleshooting-and-debugging
If you are not able to find the logs in /var/log/mesos,
check /tmp/mesos/ instead; you can see your application ids there, just like
in the $SPARK_HOME/work directory.
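For example (the exact ids in the paths will differ on your cluster, and the
layout can vary a bit across Mesos versions):

    # each executor gets a sandbox under the slave work directory
    ls /tmp/mesos/slaves/*/frameworks/*/executors/*/runs/latest/

    # the executor's stderr usually holds the real error
    cat /tmp/mesos/slaves/*/frameworks/*/executors/*/runs/latest/stderr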
Thanks
Best Regards
--
Thanks for the reply, Akhil.
I created a tar.gz using make-distribution.sh which is accessible from all
the slaves (I checked it using hadoop fs -ls /path/). Also, there are no
worker logs printed in the $SPARK_HOME/work/ directory on the workers (which
are otherwise printed if I run without using Mesos).
--
1. Make sure the spark-*.tgz that you created with make-distribution.sh is
accessible by all the slave nodes (one way is sketched below).
2. Check the worker node logs.
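For (1), one common way is to put the tgz on HDFS and point the executors at
it via spark-env.sh; roughly like this, with the filename and namenode
address as placeholders:

    # upload the distribution somewhere every slave can fetch it
    hadoop fs -put spark-0.9.1.tar.gz /tmp/spark-0.9.1.tar.gz

    # in conf/spark-env.sh
    export SPARK_EXECUTOR_URI=hdfs://<namenode>:9000/tmp/spark-0.9.1.tar.gz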
Thanks
Best Regards
On Tue, Jun 3, 2014 at 8:13 PM, praveshjain1991 wrote:

> I set up Spark-0.9.1 to run on mesos-0.13.0 using the steps mentioned here ...