Github user tnachen commented on the pull request:
https://github.com/apache/spark/pull/9608#issuecomment-168780854
I think it's also worth documenting how this can possibly work with
bridge mode. Also I think it's worth noting that the user must explicitly map the
ports beforehand
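For what it's worth, a sketch of what that explicit mapping could look like (property names are from Spark's Mesos docs; the image name and ports are illustrative, not from this thread):

```
# spark-defaults.conf (illustrative values)
spark.mesos.executor.docker.image     my-org/spark-executor:latest
# In bridge mode, host_port:container_port[:proto] must be mapped up front
spark.mesos.executor.docker.portmaps  8080:8080:tcp,7077:7077:tcp
```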
Github user tnachen commented on a diff in the pull request:
https://github.com/apache/spark/pull/10057#discussion_r48172947
--- Diff:
core/src/main/scala/org/apache/spark/deploy/mesos/MesosClusterDispatcher.scala
---
@@ -50,7 +50,11 @@ private[mesos] class MesosClusterDispatcher
Github user tnachen commented on a diff in the pull request:
https://github.com/apache/spark/pull/10057#discussion_r48210724
--- Diff:
core/src/main/scala/org/apache/spark/deploy/SparkCuratorUtil.scala ---
@@ -35,8 +35,11 @@ private[spark] object SparkCuratorUtil extends Logging
Github user tnachen commented on a diff in the pull request:
https://github.com/apache/spark/pull/10057#discussion_r48210912
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosClusterPersistenceEngine.scala
---
@@ -53,9 +53,12 @@ private[spark] trait
Github user tnachen commented on a diff in the pull request:
https://github.com/apache/spark/pull/10057#discussion_r48210760
--- Diff:
core/src/main/scala/org/apache/spark/deploy/mesos/MesosClusterDispatcher.scala
---
@@ -50,7 +50,10 @@ private[mesos] class MesosClusterDispatcher
Github user tnachen commented on the pull request:
https://github.com/apache/spark/pull/10057#issuecomment-166377013
@andrewor14 the cluster mode issue is fixed now, @dragos @mgummelt we can
run this patch through our tests
---
If your project is set up for it, you can reply
Github user tnachen commented on the pull request:
https://github.com/apache/spark/pull/10057#issuecomment-166534398
Jenkins, retest this please
Github user tnachen commented on the pull request:
https://github.com/apache/spark/pull/10366#issuecomment-165742581
LGTM as well.
Github user tnachen commented on the pull request:
https://github.com/apache/spark/pull/10329#issuecomment-165929749
@andrewor14 thanks a lot for the patience on this, this validates that we
really need to invest in automated testing for a lot of these things, and
hopefully won't repeat this again
Github user tnachen commented on the pull request:
https://github.com/apache/spark/pull/10370#issuecomment-165916177
@jayv Looks like you need to fix the line length > 100 style rule.
Github user tnachen commented on a diff in the pull request:
https://github.com/apache/spark/pull/10332#discussion_r47939790
--- Diff:
core/src/main/scala/org/apache/spark/deploy/rest/mesos/MesosRestServer.scala ---
@@ -94,7 +94,12 @@ private[mesos] class MesosSubmitRequestServlet
Github user tnachen commented on the pull request:
https://github.com/apache/spark/pull/10370#issuecomment-165704664
ok to test
Github user tnachen commented on the pull request:
https://github.com/apache/spark/pull/10370#issuecomment-165704814
At first I was wondering, if we just copy all scheduler properties, will they
clash with the ones we used above (i.e. --total-executor-cores), but it looks like
the command
Github user tnachen commented on the pull request:
https://github.com/apache/spark/pull/10326#issuecomment-165229364
Don't have anything else to add besides what @dragos said, but it seems like
it takes a while to get this updated. I vote for trying to merge this first as
this adds more
Github user tnachen commented on the pull request:
https://github.com/apache/spark/pull/10332#issuecomment-165190411
@dragos PTAL
GitHub user tnachen opened a pull request:
https://github.com/apache/spark/pull/10332
[SPARK-12345][MESOS] Filter SPARK_HOME when submitting Spark jobs with
Mesos cluster mode.
SPARK_HOME is now causing problems with Mesos cluster mode since the
spark-submit script has been changed
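A minimal sketch of the idea (in Python for illustration; the name is mine, not the actual MesosRestServer code): drop SPARK_HOME from the environment forwarded with the submission, since the dispatcher's local path rarely matches the path on the Mesos agents.

```python
# Illustrative sketch only: filter SPARK_HOME out of the environment
# forwarded with a cluster-mode submission, so the dispatcher's local
# install path does not leak into the driver's environment.
def without_spark_home(env):
    """Return a copy of `env` with the SPARK_HOME entry removed."""
    return {k: v for k, v in env.items() if k != "SPARK_HOME"}
```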
Github user tnachen commented on the pull request:
https://github.com/apache/spark/pull/10329#issuecomment-165206929
I have a fix that only affects Mesos cluster mode
https://github.com/apache/spark/pull/10332
If standalone never had a problem then I suggest we don't affect
Github user tnachen commented on the pull request:
https://github.com/apache/spark/pull/10329#issuecomment-165212087
SGTM, I don't think we ever should either.
Github user tnachen commented on a diff in the pull request:
https://github.com/apache/spark/pull/10319#discussion_r47812730
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/CoarseMesosSchedulerBackend.scala
---
@@ -60,6 +63,11 @@ private[spark] class
Github user tnachen commented on the pull request:
https://github.com/apache/spark/pull/10332#issuecomment-165206313
Yes I would also want to just make changes on the Mesos side and not cause
any possible regression on standalone.
Github user tnachen commented on the pull request:
https://github.com/apache/spark/pull/10319#issuecomment-165203618
So IIUC stop is only invoked when an exception occurred or the shutdown hook
is invoked; in both cases it's a task that didn't really finish and the
user/system want
Github user tnachen commented on the pull request:
https://github.com/apache/spark/pull/9027#issuecomment-165336426
@dragos just left some comments on this PR, sorry I wasn't able to take a
deep look earlier. I think there are some open issues that need to be
addressed first. Take
Github user tnachen commented on a diff in the pull request:
https://github.com/apache/spark/pull/9027#discussion_r47868921
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerBackend.scala
---
@@ -239,44 +250,69 @@ private[spark] class
Github user tnachen commented on a diff in the pull request:
https://github.com/apache/spark/pull/9027#discussion_r47868971
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerBackend.scala
---
@@ -239,44 +250,69 @@ private[spark] class
Github user tnachen commented on a diff in the pull request:
https://github.com/apache/spark/pull/9027#discussion_r47869013
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerBackend.scala
---
@@ -304,20 +340,25 @@ private[spark] class
Github user tnachen commented on a diff in the pull request:
https://github.com/apache/spark/pull/9027#discussion_r47865499
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerBackend.scala
---
@@ -210,12 +216,18 @@ private[spark] class
Github user tnachen commented on the pull request:
https://github.com/apache/spark/pull/10086#issuecomment-164772992
@srowen @andrewor14 sorry for the delay, it's updated now.
Github user tnachen commented on a diff in the pull request:
https://github.com/apache/spark/pull/10319#discussion_r47725842
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/CoarseMesosSchedulerBackend.scala
---
@@ -364,7 +379,22 @@ private[spark] class
Github user tnachen commented on the pull request:
https://github.com/apache/spark/pull/10086#issuecomment-164963792
Created jira and updated title @srowen
Github user tnachen commented on a diff in the pull request:
https://github.com/apache/spark/pull/10319#discussion_r47725778
--- Diff:
core/src/main/scala/org/apache/spark/executor/CoarseGrainedExecutorBackend.scala
---
@@ -45,6 +46,7 @@ private[spark] class
Github user tnachen commented on a diff in the pull request:
https://github.com/apache/spark/pull/9608#discussion_r47593516
--- Diff: core/src/main/scala/org/apache/spark/HttpServer.scala ---
@@ -152,6 +153,17 @@ private[spark] class HttpServer
Github user tnachen closed the pull request at:
https://github.com/apache/spark/pull/4027
Github user tnachen commented on a diff in the pull request:
https://github.com/apache/spark/pull/9608#discussion_r47605974
--- Diff: core/src/main/scala/org/apache/spark/rpc/netty/NettyRpcEnv.scala
---
@@ -122,7 +122,8 @@ private[netty] class NettyRpcEnv(
@Nullable
Github user tnachen commented on the pull request:
https://github.com/apache/spark/pull/4027#issuecomment-164125291
Yes at this moment I'll be closing this PR and moving forward with another
proposal, which is to use spark.executor.cores and have a new coarse-grained
scheduler
Github user tnachen commented on the pull request:
https://github.com/apache/spark/pull/10086#issuecomment-162799550
@dragos just updated the docs
Github user tnachen commented on a diff in the pull request:
https://github.com/apache/spark/pull/10057#discussion_r46922802
--- Diff:
core/src/main/scala/org/apache/spark/deploy/mesos/MesosClusterDispatcher.scala
---
@@ -50,7 +50,7 @@ private[mesos] class MesosClusterDispatcher
Github user tnachen commented on the pull request:
https://github.com/apache/spark/pull/10086#issuecomment-161880528
@andrewor14 pushed an update today, PTAL
Github user tnachen commented on the pull request:
https://github.com/apache/spark/pull/10086#issuecomment-161754931
@andrewor14 I don't think this doc helps his problem, he's running into
issues where the Spark configuration is not being passed down. I'm pushing a new
update here
Github user tnachen commented on the pull request:
https://github.com/apache/spark/pull/10057#issuecomment-161112433
I think actually just using spark.deploy.* seems like a better choice, as
in any case we don't really expect users to have different zookeepers deployed
and cluster
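A sketch of what the spark.deploy.* approach could look like in the dispatcher's spark-defaults.conf (ZooKeeper hosts and paths are illustrative):

```
spark.deploy.recoveryMode        ZOOKEEPER
spark.deploy.zookeeper.url       zk1:2181,zk2:2181,zk3:2181
spark.deploy.zookeeper.dir       /spark_mesos_dispatcher
```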
GitHub user tnachen opened a pull request:
https://github.com/apache/spark/pull/10086
Add documentation about submitting Spark with Mesos cluster mode.
Adding more documentation about submitting jobs with Mesos cluster mode.
You can merge this pull request into a Git repository
Github user tnachen commented on the pull request:
https://github.com/apache/spark/pull/10086#issuecomment-161152431
@andrewor14
Github user tnachen commented on the pull request:
https://github.com/apache/spark/pull/10057#issuecomment-160852111
@andrewor14 @dragos PTAL
GitHub user tnachen opened a pull request:
https://github.com/apache/spark/pull/10057
[SPARK-10647][MESOS] Fix zookeeper dir with mesos conf and add docs.
Fix zookeeper dir configuration used in cluster mode, and also add
documentation around these settings.
You can merge
Github user tnachen commented on the pull request:
https://github.com/apache/spark/pull/8872#issuecomment-160075491
@andrewor14 PTAL, it should be ready.
Github user tnachen commented on the pull request:
https://github.com/apache/spark/pull/9886#issuecomment-159051811
+1 on having a fall back with a warning message as well.
Github user tnachen commented on a diff in the pull request:
https://github.com/apache/spark/pull/9608#discussion_r45654941
--- Diff: core/src/main/scala/org/apache/spark/HttpFileServer.scala ---
@@ -42,10 +42,11 @@ private[spark] class HttpFileServer(
fileDir.mkdir
Github user tnachen commented on a diff in the pull request:
https://github.com/apache/spark/pull/8610#discussion_r45656480
--- Diff: core/src/main/scala/org/apache/spark/deploy/master/Master.scala
---
@@ -639,10 +640,11 @@ private[deploy] class Master(
// in the queue
Github user tnachen commented on the pull request:
https://github.com/apache/spark/pull/4027#issuecomment-158668110
@andrewor14 I've updated the patch now. Originally you suggested I look
at deploy/master.scala to try to use the same configurations, like
spark.executor.cores
Github user tnachen commented on the pull request:
https://github.com/apache/spark/pull/9795#issuecomment-157807917
LGTM as well!
Github user tnachen commented on the pull request:
https://github.com/apache/spark/pull/9752#issuecomment-157547860
Please add [MESOS] and the jira ticket [SPARK-11327] to the title so it
gets picked up by the Spark PR tool.
Github user tnachen commented on a diff in the pull request:
https://github.com/apache/spark/pull/9752#discussion_r45141605
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosClusterScheduler.scala
---
@@ -422,6 +422,37 @@ private[spark] class
Github user tnachen commented on the pull request:
https://github.com/apache/spark/pull/9301#issuecomment-157558208
We started adding ubuntu and versions since it wasn't explicit which OS
image it was built upon.
@andrewor14 what are these docker images used for? I'm trying
Github user tnachen commented on a diff in the pull request:
https://github.com/apache/spark/pull/9752#discussion_r45144395
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosClusterScheduler.scala
---
@@ -422,6 +422,37 @@ private[spark] class
Github user tnachen commented on the pull request:
https://github.com/apache/spark/pull/9637#issuecomment-156244571
LGTM
Github user tnachen commented on the pull request:
https://github.com/apache/spark/pull/9637#issuecomment-156244521
I think this is fine for now, I was thinking of leaving some comments about
how it should be launched with Marathon, but I think later we can add an example
json
Github user tnachen commented on the pull request:
https://github.com/apache/spark/pull/9301#issuecomment-156033306
I don't think there is a good way to keep them in sync; I think part of
upgrading the Mesos version will be updating the docker image here as well.
Github user tnachen commented on a diff in the pull request:
https://github.com/apache/spark/pull/8771#discussion_r44691089
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/CoarseMesosSchedulerBackend.scala
---
@@ -244,13 +244,7 @@ private[spark] class
Github user tnachen commented on a diff in the pull request:
https://github.com/apache/spark/pull/8771#discussion_r44691147
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/CoarseMesosSchedulerBackend.scala
---
@@ -292,6 +286,18 @@ private[spark] class
Github user tnachen commented on a diff in the pull request:
https://github.com/apache/spark/pull/8771#discussion_r44691350
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerBackend.scala
---
@@ -301,6 +298,15 @@ private[spark] class
Github user tnachen commented on a diff in the pull request:
https://github.com/apache/spark/pull/8771#discussion_r44691281
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerBackend.scala
---
@@ -219,12 +219,9 @@ private[spark] class
Github user tnachen commented on a diff in the pull request:
https://github.com/apache/spark/pull/8771#discussion_r44691399
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerBackend.scala
---
@@ -301,6 +298,15 @@ private[spark] class
Github user tnachen commented on the pull request:
https://github.com/apache/spark/pull/8771#issuecomment-156186192
Just took another pass at this review; besides the naming suggestions and
style fixes, overall it LGTM.
Also please update the title of this PR since it's updating
Github user tnachen commented on the pull request:
https://github.com/apache/spark/pull/9637#issuecomment-155923948
Only have one comment, otherwise everything else LGTM
Github user tnachen commented on a diff in the pull request:
https://github.com/apache/spark/pull/9637#discussion_r44593380
--- Diff: docs/job-scheduling.md ---
@@ -56,36 +56,31 @@ provide another approach to share RDDs.
## Dynamic Resource Allocation
-Spark
Github user tnachen commented on a diff in the pull request:
https://github.com/apache/spark/pull/8872#discussion_r43587599
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosClusterScheduler.scala
---
@@ -358,9 +358,10 @@ private[spark] class
Github user tnachen commented on the pull request:
https://github.com/apache/spark/pull/8872#issuecomment-152918251
@andrewor14 @dragos all comments should be addressed now
Github user tnachen commented on the pull request:
https://github.com/apache/spark/pull/9282#issuecomment-152581043
+1
Github user tnachen commented on the pull request:
https://github.com/apache/spark/pull/9282#issuecomment-151636017
Just curious why this suddenly became a problem, do you have any idea what
caused this?
Github user tnachen commented on the pull request:
https://github.com/apache/spark/pull/9282#issuecomment-151636255
And also +1 to merge this to fix users' problems as well
Github user tnachen commented on the pull request:
https://github.com/apache/spark/pull/9135#issuecomment-148730888
LGTM
Github user tnachen commented on a diff in the pull request:
https://github.com/apache/spark/pull/8358#discussion_r41940139
--- Diff: core/src/main/scala/org/apache/spark/util/Utils.scala ---
@@ -655,15 +655,19 @@ private[spark] object Utils extends Logging {
// created
Github user tnachen commented on the pull request:
https://github.com/apache/spark/pull/8358#issuecomment-147883511
@Zariel I think we need to document this logic somewhere in the docs, but
otherwise I think we should merge this and make changes afterwards if needed.
And we'll need
Github user tnachen commented on a diff in the pull request:
https://github.com/apache/spark/pull/8358#discussion_r41939777
--- Diff: core/src/main/scala/org/apache/spark/util/Utils.scala ---
@@ -655,15 +655,19 @@ private[spark] object Utils extends Logging {
// created
Github user tnachen commented on the pull request:
https://github.com/apache/spark/pull/8872#issuecomment-147899923
@dragos added unit test and fixed desc and comments now.
Github user tnachen commented on the pull request:
https://github.com/apache/spark/pull/4960#issuecomment-145625978
Hi @AndriiOmelianenko, I have a PR out to fix that here
https://github.com/apache/spark/pull/8872
Github user tnachen commented on the pull request:
https://github.com/apache/spark/pull/8358#issuecomment-144107346
retest this please
Github user tnachen commented on the pull request:
https://github.com/apache/spark/pull/8358#issuecomment-144107047
retest please
Github user tnachen commented on the pull request:
https://github.com/apache/spark/pull/8771#issuecomment-142994645
Yes please log debug :) But at least the information is available.
Github user tnachen commented on the pull request:
https://github.com/apache/spark/pull/8771#issuecomment-142655752
Actually I'm thinking it could be better if we log the exact condition
that didn't pass, which made us skip the offer. It becomes a bit
hard
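The suggestion above could be sketched like this (a hypothetical helper in Python for illustration, not the actual CoarseMesosSchedulerBackend code; names and messages are made up): instead of a boolean offer check, return the first condition that failed so it can be logged.

```python
# Hypothetical sketch: report why an offer was skipped instead of
# silently declining it. Resource names and messages are illustrative.
def decline_reason(offered_cpus, offered_mem_mb, needed_cpus, needed_mem_mb):
    """Return a human-readable reason to decline the offer, or None if OK."""
    if offered_cpus < needed_cpus:
        return f"insufficient cpus: offered {offered_cpus}, need {needed_cpus}"
    if offered_mem_mb < needed_mem_mb:
        return f"insufficient mem: offered {offered_mem_mb} MB, need {needed_mem_mb} MB"
    return None  # offer is acceptable
```

The scheduler can then log the returned string at debug level before declining, rather than just skipping the offer.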
Github user tnachen commented on a diff in the pull request:
https://github.com/apache/spark/pull/8639#discussion_r40182209
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/CoarseMesosSchedulerBackend.scala
---
@@ -244,48 +248,56 @@ private[spark] class
Github user tnachen commented on the pull request:
https://github.com/apache/spark/pull/8639#issuecomment-142533822
Thanks, overall this LGTM. I know constraints are not yet supported for the
cluster scheduler, so ideally when we add that we should also apply this too.
Github user tnachen commented on the pull request:
https://github.com/apache/spark/pull/8872#issuecomment-142428757
@andrewor14 PTAL
GitHub user tnachen opened a pull request:
https://github.com/apache/spark/pull/8872
[SPARK-10749][MESOS] Support multiple roles with mesos cluster mode.
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/tnachen/spark
Github user tnachen commented on the pull request:
https://github.com/apache/spark/pull/4960#issuecomment-142130464
Ah this is indeed a bug, we need to port the multiple roles logic that's in
the coarse- and fine-grained schedulers to the cluster scheduler. Will fix this asap
Github user tnachen commented on the pull request:
https://github.com/apache/spark/pull/4960#issuecomment-142057737
Hi @ohal, you'll need to set spark.mesos.role when you launch the
dispatcher, which you can do by setting it in spark-defaults.conf in the
conf dir
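For example (the role name is illustrative), in conf/spark-defaults.conf on the host that launches the dispatcher:

```
spark.mesos.role   spark
```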
Github user tnachen commented on the pull request:
https://github.com/apache/spark/pull/4960#issuecomment-142066891
That's the Spark properties for the job, but maybe not for the dispatcher.
The easiest way to check is to go to the Mesos UI and look at the
Dispatcher
Github user tnachen commented on the pull request:
https://github.com/apache/spark/pull/4027#issuecomment-141617352
I see, I can see having a minimum CPU per executor being useful; that's an easy
change in this patch. Other than that I think this patch can achieve what you
want
Github user tnachen commented on the pull request:
https://github.com/apache/spark/pull/4027#issuecomment-141582708
The idea of this patch is to give the user the flexibility to provide a max and
min, and also optionally allow multiple executors or not. So in your case you
just need
Github user tnachen commented on the pull request:
https://github.com/apache/spark/pull/8639#issuecomment-141115147
There is a big HTML table at the bottom of this file, can you also add it
to that list?
Github user tnachen commented on the pull request:
https://github.com/apache/spark/pull/8349#issuecomment-140645786
This was only recently merged so it is not yet released, so Mesosphere
DCOS won't be able to support Python yet.
And if you want to provide s3 you just need to give
Github user tnachen commented on a diff in the pull request:
https://github.com/apache/spark/pull/8671#discussion_r39575837
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerBackend.scala
---
@@ -202,55 +207,86 @@ private[spark] class
Github user tnachen commented on a diff in the pull request:
https://github.com/apache/spark/pull/8771#discussion_r39583378
--- Diff:
core/src/test/scala/org/apache/spark/scheduler/cluster/mesos/CoarseMesosSchedulerBackendSuite.scala
---
@@ -184,4 +184,52 @@ class
Github user tnachen commented on a diff in the pull request:
https://github.com/apache/spark/pull/8771#discussion_r39583419
--- Diff:
core/src/test/scala/org/apache/spark/scheduler/cluster/mesos/CoarseMesosSchedulerBackendSuite.scala
---
@@ -184,4 +184,52 @@ class
Github user tnachen commented on a diff in the pull request:
https://github.com/apache/spark/pull/8771#discussion_r39583404
--- Diff:
core/src/test/scala/org/apache/spark/scheduler/cluster/mesos/CoarseMesosSchedulerBackendSuite.scala
---
@@ -184,4 +184,52 @@ class
Github user tnachen commented on a diff in the pull request:
https://github.com/apache/spark/pull/8771#discussion_r39583517
--- Diff:
core/src/test/scala/org/apache/spark/scheduler/cluster/mesos/CoarseMesosSchedulerBackendSuite.scala
---
@@ -184,4 +184,52 @@ class
Github user tnachen commented on the pull request:
https://github.com/apache/spark/pull/8771#issuecomment-140588590
I think this is a good idea, can you do this for fine-grained mode too?
Github user tnachen commented on the pull request:
https://github.com/apache/spark/pull/8349#issuecomment-140469231
@viesti Hi there, this is meant to submit not to Mesos directly, but
to the Spark Mesos dispatcher. You should launch the dispatcher and then change
your --master
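Sketched out, that workflow looks roughly like this (hosts, ports, and the jar path are placeholders, not from this thread):

```
# Start the dispatcher, pointing it at the Mesos master
./sbin/start-mesos-dispatcher.sh --master mesos://zk://zk1:2181/mesos

# Then submit through the dispatcher, not the Mesos master
./bin/spark-submit --deploy-mode cluster \
  --master mesos://dispatcher-host:7077 \
  --class org.apache.spark.examples.SparkPi \
  http://dispatcher-host/path/to/spark-examples.jar
```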
Github user tnachen commented on the pull request:
https://github.com/apache/spark/pull/8358#issuecomment-140448556
@andrewor14 besides the settings check I think it LGTM
Github user tnachen commented on a diff in the pull request:
https://github.com/apache/spark/pull/8358#discussion_r39530770
--- Diff: core/src/main/scala/org/apache/spark/util/Utils.scala ---
@@ -655,15 +655,19 @@ private[spark] object Utils extends Logging {
// created
Github user tnachen commented on a diff in the pull request:
https://github.com/apache/spark/pull/8007#discussion_r39194384
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerBackend.scala
---
@@ -390,7 +390,7 @@ private[spark] class