Github user elyast commented on the pull request:
https://github.com/apache/spark/pull/5063#issuecomment-86113945
@jongyoul looks good to me, thanks :)
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does
Github user elyast commented on the pull request:
https://github.com/apache/spark/pull/5063#issuecomment-85594000
@jongyoul @tnachen imho it's fine if CPU_PER_TASK remains an Integer;
however, if your job is less CPU-intensive, it might be beneficial to optimize
it, although I
Github user elyast commented on the pull request:
https://github.com/apache/spark/pull/5063#issuecomment-85002672
@sryza you can request a fraction of a CPU from Mesos; however, I hadn't
realized that we have the wrong type in this patch, we should change it to
Double instead of Int
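To make the Int-vs-Double point concrete, here is a minimal sketch of a per-executor CPU setting parsed as a Double instead of an Int. The config key and object names below are hypothetical illustrations, not the actual code from this patch:

```scala
// Minimal sketch, assuming the per-executor CPU share is read as a Double.
// The key name "spark.mesos.executor.cpus" and the default are hypothetical,
// chosen only to illustrate the Int -> Double change discussed above.
object CpuPerTask {
  val DefaultCpus = 1.0

  // A Double lets a lightweight job request e.g. 0.5 of a CPU from Mesos,
  // which an Int (with a practical minimum of 1) cannot express.
  def cpusPerExecutor(conf: Map[String, String]): Double =
    conf.get("spark.mesos.executor.cpus").map(_.toDouble).getOrElse(DefaultCpus)
}
```

With an Int, the smallest non-zero request is a full core; with a Double, a less CPU-intensive job can ask Mesos for a fraction of one.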
Github user elyast commented on the pull request:
https://github.com/apache/spark/pull/5063#issuecomment-85336463
@jongyoul I can try to test it out with the CPU allocation set to 0, but I
cannot promise it will work; otherwise it just doesn't make much sense imho, I'd
discuss with project
Github user elyast commented on the pull request:
https://github.com/apache/spark/pull/5063#issuecomment-84733162
@sryza I don't think it actually needs more than a single core; the issue is
that you cannot give less than 1 CPU.
Github user elyast commented on the pull request:
https://github.com/apache/spark/pull/5063#issuecomment-84059146
Hi @sryza @jongyoul,
To give an illustration of this, let's say I have 10 nodes with 64 cores each,
and let's say 10 streaming jobs are running with a 1-minute window (so every
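The illustration above is cut off in the archive, but the arithmetic it sets up can be sketched. Everything beyond the quoted 10 nodes / 64 cores / 10 jobs (in particular, one executor per job per node at a minimum of 1 CPU each) is an assumption for illustration only:

```scala
// Back-of-envelope sketch of the truncated scenario. Only nodes, coresPerNode,
// and jobs come from the comment; the "one executor per job per node, pinned
// at the 1-CPU minimum" assumption is illustrative.
object CoreMath {
  val nodes = 10
  val coresPerNode = 64
  val totalCores = nodes * coresPerNode        // 640 cores in the cluster

  val jobs = 10
  // If each streaming job holds one executor per node, and each executor
  // must reserve at least 1 whole CPU, those cores stay reserved even while
  // the jobs are idle between batch windows:
  val reservedCores = jobs * nodes * 1         // 100 cores held permanently
}
```

This is the kind of waste a fractional (Double) CPU allocation would reduce.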
Github user elyast commented on the pull request:
https://github.com/apache/spark/pull/5063#issuecomment-83225590
We can also consider more descriptive config options, like
spark.mesos.noOfAllocatedCoresPerExecutor ...
Github user elyast commented on a diff in the pull request:
https://github.com/apache/spark/pull/5063#discussion_r26575818
--- Diff: core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosSchedulerBackend.scala ---
@@ -67,6 +67,8 @@ private[spark] class
Github user elyast commented on the pull request:
https://github.com/apache/spark/pull/4170#issuecomment-82104142
cool thanks
Github user elyast commented on the pull request:
https://github.com/apache/spark/pull/4170#issuecomment-79046678
Sure, it's totally fine not to share, but at least it should be possible to
configure the allocation. Allocating 1 CPU per executor may just be too much;
obviously it depends how
Github user elyast commented on the pull request:
https://github.com/apache/spark/pull/4170#issuecomment-78799120
One comment, however: if you run multiple Spark applications, even though
executor-id == slave-id, multiple executors can be started on the same host.
(And every one of them
Github user elyast commented on a diff in the pull request:
https://github.com/apache/spark/pull/4778#discussion_r25488122
--- Diff:
core/src/test/scala/org/apache/spark/deploy/history/HistoryServerSuite.scala ---
@@ -0,0 +1,53 @@
+/*
+ * Licensed to the Apache Software
Github user elyast commented on a diff in the pull request:
https://github.com/apache/spark/pull/4778#discussion_r25488124
--- Diff:
core/src/test/scala/org/apache/spark/deploy/history/HistoryServerSuite.scala ---
@@ -0,0 +1,53 @@
+/*
+ * Licensed to the Apache Software
GitHub user elyast opened a pull request:
https://github.com/apache/spark/pull/4778
SPARK-2168 [Spark core] Use relative URIs for the app links in the History
Server.
As agreed in PR #1160, adding a test to verify that the history server
generates relative links to applications.
You can
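The property the test verifies (app links being relative rather than absolute) can be sketched as a simple predicate. This is an illustrative helper, not the actual HistoryServerSuite code:

```scala
// Hypothetical sketch of the property SPARK-2168 is about: links in the
// History Server page should be relative (no scheme/authority), so the page
// still works when served behind a proxy or a different host name.
object RelativeLinkCheck {
  def isRelative(href: String): Boolean =
    !href.matches("^[a-zA-Z][a-zA-Z0-9+.-]*://.*") &&  // no "http://host..." prefix
    !href.startsWith("//")                             // no protocol-relative URI
}
```

A suite along these lines would fetch the rendered page and assert `isRelative` on each application link it extracts.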
Github user elyast commented on the pull request:
https://github.com/apache/spark/pull/1160#issuecomment-73587979
Fine with me, I will add tests on master with a new PR
Github user elyast closed the pull request at:
https://github.com/apache/spark/pull/1160
Github user elyast commented on the pull request:
https://github.com/apache/spark/pull/1160#issuecomment-68077764
@andrewor14 I think it's been fixed on the master branch, so if you don't want
to release a maintenance release for 1.0.x then I would suggest closing it.
Github user elyast commented on the pull request:
https://github.com/apache/spark/pull/1290#issuecomment-54734035
Hi @avulanov, I ran your NeuralNetworkSuite (from your fork, neuralnetwork
branch); however, it fails randomly. Are you sure you have implemented it
correctly
Github user elyast commented on a diff in the pull request:
https://github.com/apache/spark/pull/1160#discussion_r16685916
--- Diff:
core/src/test/scala/org/apache/spark/deploy/history/HistoryServerSuite.scala ---
@@ -0,0 +1,52 @@
+/*
+ * Licensed to the Apache Software
Github user elyast commented on the pull request:
https://github.com/apache/spark/pull/1160#issuecomment-47294490
I have added a more descriptive title
Github user elyast commented on the pull request:
https://github.com/apache/spark/pull/1160#issuecomment-47294545
So should I open another PR for the master branch?
GitHub user elyast opened a pull request:
https://github.com/apache/spark/pull/1160
SPARK-2168 Spark core
Removing the full URI, leaving only the relative path in the link to the
completed application, plus a unit test
You can merge this pull request into a Git repository by running:
$ git