Hi John,

With Spark on Mesos, each client (each spark-submit) starts its own
SparkContext, which brings up its own SparkUI and registers its own
framework with Mesos. The Spark UI defaults to port 4040, but if that port
is taken Spark automatically tries the next ports in sequence, so a second
driver on the same host will usually end up on 4041, and so on.
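
If you want a predictable port per kernel instead of relying on that
fallback, you can pin spark.ui.port when you build the context yourself. A
minimal sketch (the app name and port below are arbitrary placeholders; the
master URL comes from your submit line):

    from pyspark import SparkConf, SparkContext

    conf = (SparkConf()
            .setMaster("mesos://hadoopmapr3:5050")  # master from your submit line
            .setAppName("jupyter-kernel-1")         # hypothetical per-kernel name
            .set("spark.ui.port", "4050"))          # any free port you choose
    sc = SparkContext(conf=conf)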

The driver is not shared between users; each user (in fact, each
spark-submit, and so each notebook kernel) creates its own driver.
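
You can see this from inside a kernel: since each kernel owns its own
SparkContext, anything you read off sc is specific to that kernel's driver.
Assuming sc is the context pyspark-shell created for you:

    print(sc.appName)  # this kernel's application name
    # The configured UI port; Spark may have bound a higher one
    # if this port was already taken when the driver started.
    print(sc.getConf().get("spark.ui.port", "4040"))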

As for the slowness, it's hard to say without more information: we'd need
to know your cluster setup, which mode you're running Mesos in
(fine-grained or coarse-grained), whether anything else is running on the
cluster, what the job looks like, etc.
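
One quick way to collect some of that is to dump the relevant settings from
each kernel. A rough sketch (the key list is just a suggestion, and sc is
as above):

    for key in ("spark.master", "spark.mesos.coarse",
                "spark.driver.memory", "spark.executor.memory"):
        print(key, "=", sc.getConf().get(key, "<default>"))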

Tim

On Sat, Feb 14, 2015 at 5:06 PM, John Omernik <[email protected]> wrote:

> Hello all, I am running Spark on Mesos and I think I am in love, but I
> have some questions. I am running the Python shell via iPython
> Notebooks (Jupyter) and it works great, but I am trying to figure out
> how things are actually submitted. For example, when I submit the
> Spark app from the iPython notebook server, I am opening a new kernel
> and I see a new spark-submit (similar to the below) for each kernel.
> But how is that actually working on the cluster? I can connect to the
> Spark server UI on 4040, but shouldn't there be a different one for
> each driver? Is that causing conflicts? After a while things seem to
> run slow; is this due to some weird conflicts? Should I be specifying
> unique ports for each server? Is the driver shared between users? What
> about between kernels for the same user? Curious if anyone has any
> insight.
>
> Thanks!
>
>
> java org.apache.spark.deploy.SparkSubmitDriverBootstrapper --master
> mesos://hadoopmapr3:5050 --driver-memory 1G --executor-memory 4096M
> pyspark-shell
>
