Hello all, I'm running Spark on Mesos and I think I'm in love, but I have some questions. I'm running the Python shell via IPython Notebook (Jupyter) and it works great, but I'm trying to figure out how things are actually submitted. When I start a Spark app from the notebook server, each new kernel spawns its own spark-submit process (similar to the one below). But how does that actually work on the cluster?

- I can connect to the Spark UI on port 4040, but shouldn't there be a different UI for each driver? Is that causing conflicts?
- After a while things seem to run slowly; is that due to some weird conflict? Should I be specifying a unique port for each UI?
- Is the driver shared between users? What about between kernels for the same user?

Curious if anyone has any insight.
Thanks!

java org.apache.spark.deploy.SparkSubmitDriverBootstrapper --master mesos://hadoopmapr3:5050 --driver-memory 1G --executor-memory 4096M pyspark-shell
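
For context, here's roughly what each kernel ends up doing on the driver side, as a minimal sketch using the standard PySpark API (values taken from my spark-submit line above; the spark.ui.port setting and the port value 4041 are just my guess at what I might need, which is part of my question):

from pyspark import SparkConf, SparkContext

# Roughly the per-kernel driver setup. Each kernel would get its own
# SparkContext, hence its own driver and (I assume) its own UI.
conf = (SparkConf()
        .setAppName("pyspark-shell")
        .setMaster("mesos://hadoopmapr3:5050")
        .set("spark.executor.memory", "4096m")
        # Hypothetical: should each kernel pin a unique UI port like this,
        # or does Spark handle collisions on 4040 by itself?
        .set("spark.ui.port", "4041"))

sc = SparkContext(conf=conf)

If Spark already resolves port collisions automatically when 4040 is taken, then pinning ports per kernel may be unnecessary, which is really what I'm asking.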

