When you say the driver is running on Mesos, can you explain how you are doing that?
> On Mar 10, 2016, at 4:44 PM, Eran Chinthaka Withana
> wrote:
>
Yanling, I'm already running the driver on Mesos (through Docker). FYI, I'm
running this in cluster mode with MesosClusterDispatcher.
Mac (client) > MesosClusterDispatcher > Driver running on Mesos -->
Workers running on Mesos
My next step is to run MesosClusterDispatcher in Mesos through Docker.
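The cluster-mode flow above can be sketched as two shell steps: start the dispatcher, then submit against it. This is a sketch, not a command from the thread; the host names, ports, class name, and jar URL are assumptions for illustration.

```shell
# 1) Start the MesosClusterDispatcher (ships with Spark under sbin/),
#    pointing it at the Mesos master. Host and port are hypothetical.
$SPARK_HOME/sbin/start-mesos-dispatcher.sh --master mesos://mesos-master:5050

# 2) Submit in cluster mode against the dispatcher; the driver is then
#    launched on a Mesos slave rather than on the client machine.
$SPARK_HOME/bin/spark-submit \
  --deploy-mode cluster \
  --master mesos://mesos-dispatcher:7077 \
  --class com.example.MyJob \
  http://repo.example.com/jars/my-job.jar
```

In cluster mode the application jar must be reachable from the slaves, which is why a downloadable URL is shown rather than a local file path.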
Hi everyone, a quick question in this context: what is the underlying
persistent storage that you are using in this containerized environment?
Thanks
On Thursday, March 10, 2016, yanlin wang wrote:
How do you make the driver running inside a Docker container reachable from the
Spark workers?
Would you share your driver Dockerfile? I am trying to put only the driver in
Docker, with Spark running on YARN outside the container, and I don't want to
use --net=host.
Thx
Yanlin
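One common alternative to --net=host, sketched below, is to pin the driver's ports via Spark configuration and publish them explicitly. This is a hedged sketch, not from the thread: the IP address, port numbers, image name, class, and jar are all assumptions; spark.driver.port and spark.blockManager.port are the standard settings for fixing those ports.

```shell
# Hypothetical sketch: run only the driver in Docker without --net=host by
# pinning Spark's driver-side ports and publishing them on the host.
# The IP, ports, image name, class, and jar are assumptions for illustration.
docker run -d \
  -p 47001:47001 \
  -p 47002:47002 \
  -e SPARK_LOCAL_IP=192.168.1.10 \
  my-driver-image \
  bin/spark-submit \
    --conf spark.driver.port=47001 \
    --conf spark.blockManager.port=47002 \
    --class com.example.MyJob my-job.jar
```

The workers then connect back to the published host ports rather than to the container's private network address.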
> On Mar 10, 2016, at 11:06 AM, Gui
Glad to hear it. Thanks all for sharing your solutions.
On Thu, Mar 10, 2016 at 19:19, Eran Chinthaka Withana wrote:
Phew, it worked. All I had to do was add export
SPARK_JAVA_OPTS="-Dspark.mesos.executor.docker.image=echinthaka/mesos-spark:0.23.1-1.6.0-2.6"
before calling spark-submit. Guillaume, thanks for the pointer.
Timothy, thanks for looking into this. Looking forward to seeing a fix soon.
Thanks,
Eran
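The workaround above amounts to exporting the executor image through SPARK_JAVA_OPTS in the shell before invoking spark-submit, so the setting is inherited by the launcher; a minimal sketch (the spark-submit invocation itself is elided, and the image name is the one reported in the thread):

```shell
# Export the Mesos executor Docker image through SPARK_JAVA_OPTS, since the
# --conf route was reportedly not picked up; image name is from the thread.
export SPARK_JAVA_OPTS="-Dspark.mesos.executor.docker.image=echinthaka/mesos-spark:0.23.1-1.6.0-2.6"
# spark-submit would be invoked here and inherit the variable; print it to verify.
echo "$SPARK_JAVA_OPTS"
```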
Hi Timothy,
> What version of spark are you guys running?
I'm using Spark 1.6.0. You can see the Dockerfile I used here:
https://github.com/echinthaka/spark-mesos-docker/blob/master/docker/mesos-spark/Dockerfile
> And also did you set the working dir in your image to be spark home?
Yes, I did.
Hi Eran,
I need to investigate, but perhaps that's true; we're using SPARK_JAVA_OPTS
to pass all the options, not --conf.
I'll take a look at the bug, but in the meantime please try the workaround and
see if that fixes your problem.
Tim
On Thu, Mar 10, 2016 at 10:08 AM, Eran Chinthaka Withana <
eran.chin
Here is an example Dockerfile; although it's a bit dated now, if you build
it today it should still work:
https://github.com/tnachen/spark/tree/dockerfile/mesos_docker
Tim
On Thu, Mar 10, 2016 at 8:06 AM, Ashish Soni wrote:
Hi Tim,
Can you please share your Dockerfiles and configuration, as it will help a
lot. I am planning to publish a blog post on the same.
Ashish
On Thu, Mar 10, 2016 at 10:34 AM, Timothy Chen wrote:
No, you don't need to install Spark on each slave; we have been running this
setup in Mesosphere without any problem so far. Most likely it's a
configuration problem, though there's a chance something is missing in the
code to handle some cases.
What version of spark are you guys running? And also did you set the working
dir in your image to be spark home?
You need to install Spark on each Mesos slave and then, while starting the
container, set the workdir to your Spark home so that it can find the Spark
classes.
Ashish
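Setting the container's working directory to the Spark home can be done either with a WORKDIR instruction in the image or with docker's -w flag at launch time. A hedged sketch, with the path and image name assumed (not from the thread):

```shell
# Hypothetical: launch the container with its working directory set to the
# Spark home (/opt/spark assumed) so relative paths like bin/spark-submit
# and bin/spark-class resolve without absolute paths.
docker run -w /opt/spark my-spark-image bin/spark-submit --version
```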
> On Mar 10, 2016, at 5:22 AM, Guillaume Eynard Bontemps
> wrote:
>
For an answer to my question see this:
http://stackoverflow.com/a/35660466?noredirect=1.
But for your problem, did you define the spark.mesos.executor.home property,
or something like that?
On Thu, Mar 10, 2016 at 04:26, Eran Chinthaka Withana wrote:
Hi
I'm also having this issue and cannot get the tasks to work inside Mesos.
In my case, the spark-submit command is the following.
$SPARK_HOME/bin/spark-submit \
  --class com.mycompany.SparkStarter \
  --master mesos://mesos-dispatcher:7077 \
  --name SparkStarterJob \
  --driver-memory 1G \
  --exe