Hi Eran,
I need to investigate but perhaps that's true, we're using SPARK_JAVA_OPTS
to pass all the options and not --conf.
I'll take a look at the bug, but in the meantime please try the workaround
and see if that fixes your problem.
Tim
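As a rough sketch of that workaround (the master address, URI, and jar path
below are placeholders, not your actual setup), passing the options through
--conf instead of SPARK_JAVA_OPTS would look something like:

  ./bin/spark-submit \
    --master mesos://10.0.2.15:5050 \
    --conf spark.executor.uri=hdfs://namenode/dist/spark-1.6.0-bin-hadoop2.6.tgz \
    --conf spark.mesos.executor.home=/opt/spark \
    --class org.apache.spark.examples.SparkPi \
    $MESOS_SANDBOX/spark-examples.jar

Each --conf key=value pair ends up in the driver's SparkConf, which avoids
relying on SPARK_JAVA_OPTS entirely.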
On Thu, Mar 10, 2016 at 10:08 AM, Eran Chinthaka Withana <
eran.chin
Here is an example dockerfile; although it's a bit dated now, if you build
it today it should still work:
https://github.com/tnachen/spark/tree/dockerfile/mesos_docker
Tim
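For anyone who just wants the general shape of it, a minimal Dockerfile for a
Spark-on-Mesos image looks roughly like the sketch below (base image, versions,
and paths are illustrative, not the actual contents of that repo):

  FROM ubuntu:14.04
  # Java runtime needed by the Spark executor
  RUN apt-get update && apt-get install -y openjdk-7-jre-headless curl
  # A Spark distribution built with make-distribution.sh --tgz
  ADD spark-1.6.0-bin-hadoop2.6.tgz /opt/
  ENV SPARK_HOME /opt/spark-1.6.0-bin-hadoop2.6
  WORKDIR /opt/spark-1.6.0-bin-hadoop2.6

The resulting image is what you point spark.mesos.executor.docker.image at so
Mesos can launch executors inside it.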
On Thu, Mar 10, 2016 at 8:06 AM, Ashish Soni wrote:
> Hi Tim ,
>
> Can you please share your dockerfiles and configuration
below command gets issued
>
> "Cmd": [
> "-c",
> "./bin/spark-submit --name org.apache.spark.examples.SparkPi
> --master mesos://10.0.2.15:5050 --driver-cores 1.0 --driver-memory 1024M
> --class org.apache.spark.examples.SparkPi
> "Cmd": [
> "-c",
> "./bin/spark-submit --name PI Example --master
> mesos://10.0.2.15:5050 --driver-cores 1.0
> --driver-memory 1024M --class org.apache.spark.examples.SparkPi
> $MESOS_SANDBOX/spark-examples-1.
>>>>>>> container in single host ( mesos and marathon also as docker container )
>>>>>>> and everything comes up fine but when i try to launch the spark shell i
>>>>>>> get below error
>>>>>>>
ocker containers for you.
Tim
On Mon, Feb 29, 2016 at 7:36 AM, Ashish Soni wrote:
> Yes I read that, and there aren't many details here.
>
> Is it true that we need to have spark installed on each mesos docker
> container ( master and slave ) ...
>
> Ashish
>
> On Fri, Feb 26, 201
https://spark.apache.org/docs/latest/running-on-mesos.html should be the
best source, what problems were you running into?
Tim
On Fri, Feb 26, 2016 at 11:06 AM, Yin Yang wrote:
> Have you read this ?
> https://spark.apache.org/docs/latest/running-on-mesos.html
>
> On Fri, Feb 26, 2016 at 11:03
Mesos does provide some benefits and features, such as the ability to
launch all the Spark pieces in Docker, Mesos resource scheduling
features (weights, roles), and, if you plan to also use HDFS/Cassandra,
existing frameworks that are actively maintained by us.
That said when ther
Hi Duc,
Are you running Spark on Mesos with cluster mode? And what's your cluster
mode submission command, and which version of Spark are you running?
Tim
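For reference, a cluster-mode submission with supervise goes through the
dispatcher; it would look roughly like this (the dispatcher address, class,
and jar path are placeholders):

  ./bin/spark-submit \
    --deploy-mode cluster \
    --supervise \
    --master mesos://dispatcher-host:7077 \
    --class com.example.MyJob \
    hdfs://namenode:8020/jars/my-job.jar

With --supervise, the dispatcher is the piece that relaunches the driver if it
exits abnormally, which is why the exact submission matters here.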
On Sat, Jan 30, 2016 at 8:19 AM, PhuDuc Nguyen
wrote:
> I have a spark job running on Mesos in multi-master and supervise mode. If
> I kill it, it is res
is possible, but no details provided.. Please help
>
>
> Thanks
>
> Sathish
>
>
>
>
> On Mon, Sep 21, 2015 at 11:54 AM Tim Chen wrote:
>
>> Hi John,
>>
>> There is no other blog post yet, I'm thinking of doing a series of posts but
>> so far
>> dispatcher UI, you should see the port in the dispatcher logs itself.
>
>
> Yes, this job is not listed under that UI. Hence my confusion.
>
> Thanks,
> - Alan
>
> On Fri, Oct 2, 2015 at 11:49 AM, Tim Chen wrote:
>
>> So if there is no jobs to run the d
x00' looking for beginning of value"
>
> (that's coming from the spark-dispatcher docker).
>
> Thanks!
> - Alan
>
> On Fri, Oct 2, 2015 at 11:36 AM, Tim Chen wrote:
>
>> Do you have jobs enqueued? And if none of the jobs matches any offer it
Do you have jobs enqueued? And if none of the jobs matches any offer it
will just decline it.
What are your job resource specifications?
Tim
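For context, the resource specification is what the dispatcher tries to match
against incoming offers, i.e. the driver/executor sizing on the submission,
roughly along these lines (all values here are made-up examples):

  ./bin/spark-submit \
    --deploy-mode cluster \
    --master mesos://dispatcher-host:7077 \
    --driver-memory 1g \
    --driver-cores 1 \
    --conf spark.executor.memory=2g \
    --conf spark.cores.max=4 \
    --class com.example.MyJob \
    hdfs://namenode:8020/jars/my-job.jar

If no offer has at least the driver's memory and cores, the job just sits in
the queue and the offers get declined.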
On Fri, Oct 2, 2015 at 11:34 AM, Alan Braithwaite
wrote:
> Hey All,
>
> Using spark with mesos and docker.
>
> I'm wondering if anybody's seen the behavior
s coarse
>> mode other than allocation of multiple mesos tasks vs 1 mesos task. Clearly
>> spark is not managing memory in the same way.
>>
>> Thanks,
>> -Utkarsh
>>
>>
>> On Fri, Sep 25, 2015 at 9:17 AM, Tim Chen wrote:
>>
>>> Hi Utkar
63980-1-mesos_slave1_qa_uswest2.qasql.opentable.com-us_west_2a/tail/stderr#615779>15/09/22
>> 20:18:17 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint:
>> OutputCommitCoordinator stopped!
>>
>> <http://singularity-qa-uswest2.otenv.com/task/ds-tetris-simspark-usengar.201
Hi Utkarsh,
Just to be sure, did you originally set coarse to false but then to true? Or is
it the other way around?
Also what's the exception/stack trace when the driver crashed?
Coarse grain mode pre-starts all the Spark executor backends, so it has the
least overhead compared to fine grain. There is
What configuration have you used, and what is the slaves' configuration?
Possibly all the other nodes either don't have enough resources, or are using
another role that's preventing the executor from being launched.
Tim
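One quick way to check is to compare what each slave advertises with what the
job asks for, and whether the framework's role matches; for example (the flags
are standard mesos-slave and Spark options, the values below are made up):

  # On a slave: advertise resources, optionally under a role
  mesos-slave --master=zk://zk1:2181/mesos \
    --resources="cpus:8;mem:16384"

  # On the Spark side: register the framework under the matching role
  ./bin/spark-submit \
    --master mesos://zk://zk1:2181/mesos \
    --conf spark.mesos.role=spark \
    --class com.example.MyJob /path/to/my-job.jar

If the slaves' resources are reserved for a different role, the Spark framework
never sees offers big enough to launch an executor there.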
On Mon, Sep 21, 2015 at 1:58 PM, John Omernik wrote:
> I have a happy heal
>> The assumption that the executor has no default properties set in its
>> environment through the docker container. Correct me if I'm wrong, but any
>> properties which are unset in the SparkContext will come from the
>> environment of the executor will it n
Hi John,
There is no other blog post yet, I'm thinking of doing a series of posts but
so far haven't had time to do that yet.
Running Spark in Docker containers makes distributing Spark versions easy;
it's simple to upgrade, and the image automatically caches on the slaves so the
same image just runs right awa
Hi John,
Sorry I haven't had time to respond to your questions over the weekend.
If you're running client mode, to use the Docker/Mesos integration
minimally you just need to set the image configuration
'spark.mesos.executor.docker.image' as stated in the documentation; Spark
will use this im
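A minimal client-mode invocation with that one setting would look roughly like
this (the image name, master address, and jar path are placeholders):

  ./bin/spark-submit \
    --master mesos://10.0.2.15:5050 \
    --conf spark.mesos.executor.docker.image=yourrepo/spark:1.5.0 \
    --class org.apache.spark.examples.SparkPi \
    /path/to/spark-examples.jar 100

The image just needs a Spark distribution inside it at the expected location
(or spark.mesos.executor.home pointing at it).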
re, or
>> at least allow the user to inform the dispatcher through spark-submit that
>> those properties will be available once the job starts.
>>
>> Finally, I don't think the dispatcher should crash in this event. It
>> seems not exceptional that a job is misc
Hi Philip,
I've included documentation in the Spark/Mesos doc (
http://spark.apache.org/docs/latest/running-on-mesos.html), where you can
start the MesosShuffleService with the sbin/start-mesos-shuffle-service.sh
script.
The shuffle service needs to be started manually for Mesos on each slave
(one wa
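Concretely, the setup is something like the following on every slave, plus the
corresponding application settings (paths assume a standard Spark distribution
layout; the class and jar are placeholders):

  # On each Mesos slave, start the external shuffle service:
  $SPARK_HOME/sbin/start-mesos-shuffle-service.sh

  # In the application, point Spark at the shuffle service:
  ./bin/spark-submit \
    --master mesos://zk://zk1:2181/mesos \
    --conf spark.shuffle.service.enabled=true \
    --conf spark.dynamicAllocation.enabled=true \
    --class com.example.MyJob /path/to/my-job.jar

The Mesos-specific part is only that the service has to be started by hand (or
by your init system) on each slave rather than by the cluster manager.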
issue regarding improvement of the docs? For those of us who are
> gaining the experience, having such a pointer is very helpful.
>
> Tom
>
> From: Tim Chen
> Date: Thursday, September 10, 2015 at 10:25 AM
> To: Tom Waterhouse
> Cc: "user@spark.apache.org"
Hi Tom,
Sorry the documentation isn't really rich, since it's probably assuming
users understand how Mesos and frameworks work.
First I need to explain the rationale for creating the dispatcher. If you're
not familiar with Mesos yet, each node in your datacenter has a Mesos
slave installed, where it
Hi Adrian,
Spark is expecting a specific naming of the tgz and also of the folder name
inside, as these are generated by running make-distribution.sh --tgz in the
Spark source folder.
If you use a Spark 1.4 tgz generated with that script with the same name,
upload it to HDFS again, and fix the URI, then it
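For reference, the flow described above is roughly (the exact tgz name depends
on your build, and the HDFS paths are placeholders):

  # In the Spark source folder, build a proper distribution tarball:
  ./make-distribution.sh --tgz

  # Upload it to HDFS without renaming it:
  hdfs dfs -put spark-1.4.0-bin-2.4.0.tgz hdfs://namenode:8020/dist/

  # Point the executors at it:
  --conf spark.executor.uri=hdfs://namenode:8020/dist/spark-1.4.0-bin-2.4.0.tgz

The important part is that the tarball keeps the name and top-level folder that
make-distribution.sh produced, since that's what Spark expects when it unpacks
it on the slaves.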
I'm not sure what you're looking for, since you can't really compare
Standalone with YARN or Mesos: Standalone assumes the Spark
workers/master own the cluster, while YARN/Mesos try to share the
cluster among different applications/frameworks.
And when you refer to resource utilization
e even/much
> better control.
>
> Thanks,
> Ajay
>
> On Wed, Aug 12, 2015 at 4:18 AM, Tim Chen wrote:
>
>> Yes the options are not that configurable yet but I think it's not hard
>> to change it.
>>
>> I have a patch out actually specifically able t
. I just got a chance to look at a
>> similar spark user list mail, but no answer yet. So does mesos allow
>> setting the number of executors and cores? Is there a default number it
>> assumes?
>>
>> On Mon, Jan 5, 2015 at 5:07 PM, Tim Chen wrote:
>>
>>> Forg
Hi Anton,
For client mode we haven't populated the webui link; we only did so for cluster
mode.
If you like you can open a JIRA; it should be an easy ticket for anyone
to work on.
Tim
On Wed, Jul 29, 2015 at 4:27 AM, Anton Kirillov
wrote:
> Hi everyone,
>
> I’m trying to get access to Spark we
Hi Haripriya,
Your master has registered its public IP as 127.0.0.1:5050, which won't
be reachable by the slave node.
If Mesos didn't pick up the right IP you can specify one yourself via the
--ip flag.
Tim
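For example, starting the master with an address the slaves can actually reach
(the IP and work dir below are placeholders):

  mesos-master --ip=192.168.1.10 --port=5050 --work_dir=/var/lib/mesos

The slaves would then register against mesos://192.168.1.10:5050 instead of the
loopback address.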
On Mon, Jul 27, 2015 at 8:32 PM, Haripriya Ayyalasomayajula <
aharipriy...@
It depends on how you run the 1.3/1.4 versions of Spark; if you're giving it
different Docker images / tarballs of Spark, technically it should work,
since it's just launching a driver for you at the end of the day.
However, I haven't really tried it so let me know if you run into problems
with it.
Tim
p
> sources for details. Not sure if there's another way.
>
>
>
>>
>> *From: *Marcelo Vanzin
>> *Sent: *Friday, June 26, 2015 6:20 PM
>> *To: *Dave Ariens
>> *Cc: *Tim Chen; Olivier Girardot; user@spark.apache.org
>> *Subject: *Re: Accessing
Mesos does support running containers as specific users passed to it.
Thanks for chiming in, what else does YARN do with Kerberos besides keytab
file and user?
Tim
On Fri, Jun 26, 2015 at 1:20 PM, Marcelo Vanzin wrote:
> On Fri, Jun 26, 2015 at 1:13 PM, Tim Chen wrote:
>
>> So c
So correct me if I'm wrong, it sounds like all you need is a principal user
name and also a keytab file downloaded, right?
I'm adding support in the Spark framework to download additional files along
side your executor and driver, and one workaround is to specify a user
principal and keytab file that ca
It seems like there is another thread going on:
http://answers.mapr.com/questions/163353/spark-from-apache-downloads-site-for-mapr.html
I'm not particularly sure why; it seems like the problem is that getting the
current context class loader returns null in this instance.
Do you have some repr
; (I didn't try that yet)
>>
>> Should we open a JIRA for this functionality?
>>
>> -kr, Gerard.
>>
>>
>>
>>
>> [1]
>> https://github.com/apache/spark/blob/master/core/src/main/scala/org/apache/spark/Logging.scala#L128
>> [2]
>> h
-- Forwarded message --
From: Tim Chen
Date: Thu, May 28, 2015 at 10:49 AM
Subject: Re: [Streaming] Configure executor logging on Mesos
To: Gerard Maas
Hi Gerard,
The log line you referred to is not Spark logging but Mesos' own logging,
which uses glog.
Our own executor
Can you share your exact spark-submit command line?
Also, cluster mode is not released yet (it's coming in 1.4) and doesn't support
spark-shell, so I think you're just using client mode unless you're using the
latest master.
Tim
On Tue, May 19, 2015 at 8:57 AM, Panagiotis Garefalakis
wrote:
> Hello all,
>
Hi Ankur,
This is a great question as I've heard similar concerns about Spark on
Mesos.
At the time when I started to contribute to Spark on Mesos, approximately half
a year ago, the Mesos scheduler and related code hadn't really gotten much
attention from anyone and it was pretty much in maintenance mode.
A
2.4.tgz to hdfs, SPARK_EXECUTOR_URI is set
> in spark-env.sh, and in the Environment section of the web UI I see this
> picked up in the spark.executor.uri parameter. I checked and the URI is
> reachable by the slaves: an `hdfs dfs -stat $SPARK_EXECUTOR_URI` is
> successful.
> >
>
Hi Stephen,
It looks like the Mesos slave was most likely not able to launch some Mesos
helper processes (the fetcher, probably?).
How did you install Mesos? Did you build from source yourself?
Please install Mesos through a package, or if you're building from source, run
make install and run from the installed bina
Hi Stephen,
Sometimes it's just missing something simple, like a user name
problem or a file dependency, etc.
Can you share what's in the stdout/stderr in your task sandbox directory
(available via the Mesos UI, by clicking on the task and then the sandbox)?
It would also be super helpful if you can find in the sl
Linux OOM throws SIGTERM, but if I remember correctly JVM handles heap
memory limits differently and throws OutOfMemoryError and eventually sends
SIGINT.
Not sure what happened but the worker simply received a SIGTERM signal, so
perhaps the daemon was terminated by someone or a parent process. Jus
(Adding spark user list)
Hi Tom,
If I understand correctly you're saying that you're running into memory
problems because the scheduler is allocating too many CPUs and not enough
memory to accommodate them, right?
In the case of fine grain mode I don't think that's a problem since we have
a fixed
Hi Ankur,
There isn't a way to do that yet, but it's simple to add.
Can you create a JIRA in Spark for this?
Thanks!
Tim
On Fri, Apr 3, 2015 at 1:08 PM, Ankur Chauhan
wrote:
> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA1
>
> Hi,
>
> I am trying to figure out if there is a way to tell the
Hi there,
It looks like launching the executor (or one of the processes, like the
fetcher that fetches the URIs) was failing because of the dependency
problem you see. Your mesos-slave shouldn't be able to run, though; were you
running a 0.20.0 slave and upgraded to 0.21.0? We introduced the
Hi Gerard,
As others have mentioned, I believe you're hitting MESOS-1688; can you
upgrade to the latest Mesos release (0.21.1) and let us know if it resolves
your problem?
Thanks,
Tim
On Tue, Jan 27, 2015 at 10:39 AM, Sam Bessalah
wrote:
> Hi Gerard,
> isn't this the same issue as this?
> https
Just throwing this out here: there is an existing PR to add Docker support to
the Spark framework to launch executors with a Docker image.
https://github.com/apache/spark/pull/3074
Hopefully this will be merged sometime.
Tim
On Thu, Jan 15, 2015 at 9:18 AM, Nicholas Chammas <
nicholas.cham...@gmail.com
Hi Ethan,
How are you specifying the master to spark?
Being able to recover from master failover is already handled by the underlying
Mesos scheduler, but you have to use ZooKeeper instead of directly passing
in the master URIs.
Tim
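For example, instead of pointing at a single master address, the master URL
would reference the ZooKeeper ensemble, something like this (hosts and znode
are placeholders):

  ./bin/spark-submit \
    --master mesos://zk://zk1:2181,zk2:2181,zk3:2181/mesos \
    --class com.example.MyJob /path/to/my-job.jar

With the zk:// form, the Mesos scheduler discovers the current leading master
from ZooKeeper and follows it across failovers.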
On Mon, Jan 12, 2015 at 12:44 PM, Ethan Wolf
wrote:
> We are runni
How did you run this benchmark, and is there an open version I can try it
with?
And what are your configurations, like spark.locality.wait, etc.?
Tim
On Thu, Jan 8, 2015 at 11:44 AM, mvle wrote:
> Hi,
>
> I've noticed running Spark apps on Mesos is significantly slower compared
> to
> stand-alone
n the Job level? Aka, the resource life time equals the job life
> time? Or on the stage level?
>
> One more question for the Mesos fine-grain mode. How is the overhead
> of resource allocation and release? In MapReduce, a noticeable time is
> spend on waiting the resource allocat
Hi Xuelin,
I can only speak about Mesos mode. There are two modes of management in
Spark's Mesos scheduler, which are fine-grain mode and coarse-grain mode.
In fine grain mode, each Spark task launches one or more Spark executors
that only live through the lifetime of the task. So it's comparabl
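For reference, switching between the two modes is just a configuration flag on
the application (the rest of the flags here are placeholders):

  # Fine-grain mode: resources are acquired and released per task
  ./bin/spark-submit --master mesos://zk://zk1:2181/mesos \
    --conf spark.mesos.coarse=false \
    --class com.example.MyJob /path/to/my-job.jar

  # Coarse-grain mode: executor backends are pre-started and hold their
  # resources for the lifetime of the application
  ./bin/spark-submit --master mesos://zk://zk1:2181/mesos \
    --conf spark.mesos.coarse=true \
    --conf spark.cores.max=8 \
    --class com.example.MyJob /path/to/my-job.jar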
Forgot to hit reply-all.
-- Forwarded message --
From: Tim Chen
Date: Sun, Jan 4, 2015 at 10:46 PM
Subject: Re: Controlling number of executors on Mesos vs YARN
To: mvle
Hi Mike,
You're correct there is no such setting for Mesos coarse grain mode,
since the assumpti
when we have more
> multi-tenancy in the cluster, but for now, this is not the case.
>
> Thanks,
>
> Josh
>
>
> On 24 December 2014 at 06:22, Tim Chen wrote:
> >
> > Hi Josh,
> >
> > If you want to cap the amount of memory per executor in Coarse grain
> m
Hi Josh,
If you want to cap the amount of memory per executor in Coarse grain mode,
then yes you only get 240GB of memory as you mentioned. What's the reason
you don't want to raise the capacity of memory you use per executor?
In coarse grain mode the Spark executor is long-lived and it internal
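For context, the per-executor memory cap discussed above comes down to the
usual sizing properties; a sketch (the numbers are only illustrative, e.g. 30
slaves with an 8 GB executor each would give a 240 GB total like the one
mentioned above):

  ./bin/spark-submit \
    --master mesos://zk://zk1:2181/mesos \
    --conf spark.mesos.coarse=true \
    --conf spark.executor.memory=8g \
    --class com.example.MyJob /path/to/my-job.jar

Raising spark.executor.memory is the straightforward way to use more of each
slave's memory, at the cost of larger, longer-lived executors.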