Github user tgravescs commented on a diff in the pull request:
https://github.com/apache/spark/pull/2350#discussion_r17433682
--- Diff:
yarn/common/src/main/scala/org/apache/spark/deploy/yarn/ClientBase.scala ---
@@ -396,19 +352,27 @@ trait ClientBase extends Logging {
Github user tgravescs commented on a diff in the pull request:
https://github.com/apache/spark/pull/2360#discussion_r17433769
--- Diff:
yarn/common/src/main/scala/org/apache/spark/deploy/yarn/ApplicationMaster.scala
---
@@ -21,12 +21,9 @@ import java.io.IOException
import
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2144#issuecomment-55297241
[QA tests have
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/20161/consoleFull)
for PR 2144 at commit
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/2350#discussion_r17434084
--- Diff:
yarn/alpha/src/main/scala/org/apache/spark/deploy/yarn/Client.scala ---
@@ -45,120 +46,97 @@ class Client(clientArgs: ClientArguments, hadoopConf:
Github user davies commented on the pull request:
https://github.com/apache/spark/pull/2362#issuecomment-55297787
Can you do some benchmark to show the difference?
I doubt that caching the serialized data will be better than caching the
original objects; the former can
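The trade-off being asked about can be sketched outside Spark: a hypothetical micro-comparison, using only the standard library, of holding objects in memory versus holding their serialized bytes. The records and sizes here are invented for illustration; this is not a Spark benchmark.

```python
import pickle
import sys

# Hypothetical stand-ins for cached RDD elements.
data = [(i, float(i) * 0.5) for i in range(10_000)]

# Rough in-memory footprint: the list plus each tuple and its two fields.
object_bytes = sys.getsizeof(data) + sum(
    sys.getsizeof(t) + sys.getsizeof(t[0]) + sys.getsizeof(t[1]) for t in data
)

# Serialized form: more compact, but must be deserialized on every access.
blob = pickle.dumps(data, protocol=pickle.HIGHEST_PROTOCOL)

print(f"objects: ~{object_bytes} B, serialized: {len(blob)} B")
# The round-trip below is the per-access CPU cost a benchmark would measure.
restored = pickle.loads(blob)
```

A real benchmark would time repeated access to both forms, not just compare sizes.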
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/2350#discussion_r17434529
--- Diff:
yarn/common/src/main/scala/org/apache/spark/deploy/yarn/ClientArguments.scala
---
@@ -35,28 +34,57 @@ class ClientArguments(val args:
Github user tgravescs commented on a diff in the pull request:
https://github.com/apache/spark/pull/2350#discussion_r17434531
--- Diff:
yarn/alpha/src/main/scala/org/apache/spark/deploy/yarn/Client.scala ---
@@ -45,120 +46,97 @@ class Client(clientArgs: ClientArguments,
Github user sarutak commented on a diff in the pull request:
https://github.com/apache/spark/pull/2360#discussion_r17434692
--- Diff:
yarn/common/src/main/scala/org/apache/spark/deploy/yarn/ApplicationMaster.scala
---
@@ -21,12 +21,9 @@ import java.io.IOException
import
Github user sarutak closed the pull request at:
https://github.com/apache/spark/pull/2360
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is
GitHub user andrewor14 opened a pull request:
https://github.com/apache/spark/pull/2363
[Spark-3490] Disable SparkUI for tests
We currently open many ephemeral ports during the tests, and as a result we
occasionally can't bind to new ones. This has caused the `DriverSuite` and the
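The ephemeral-port exhaustion this PR describes is easy to picture: every SparkUI instance binds a fresh OS-assigned port. A minimal standard-library sketch of how ephemeral ports are handed out (illustrative only; the PR itself presumably gates UI startup behind a conf flag such as `spark.ui.enabled`):

```python
import socket

def grab_ephemeral_port() -> int:
    """Bind to port 0 so the OS assigns a free ephemeral port, then
    release it. Each live binding consumes one port from a finite
    range, which is how a large test run can exhaust them."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.bind(("127.0.0.1", 0))
        return s.getsockname()[1]

ports = [grab_ephemeral_port() for _ in range(5)]
print(ports)  # a handful of OS-chosen ports
```

Not starting the UI at all in tests sidesteps the problem entirely, which is the approach the PR title suggests.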
Github user tgravescs commented on a diff in the pull request:
https://github.com/apache/spark/pull/2350#discussion_r17434858
--- Diff:
yarn/alpha/src/main/scala/org/apache/spark/deploy/yarn/Client.scala ---
@@ -37,7 +35,10 @@ import org.apache.spark.deploy.SparkHadoopUtil
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/2350#discussion_r17434969
--- Diff:
yarn/common/src/main/scala/org/apache/spark/deploy/yarn/YarnSparkHadoopUtil.scala
---
@@ -142,14 +136,13 @@ object YarnSparkHadoopUtil {
Github user tgravescs commented on a diff in the pull request:
https://github.com/apache/spark/pull/2350#discussion_r17435086
--- Diff:
yarn/common/src/main/scala/org/apache/spark/deploy/yarn/YarnSparkHadoopUtil.scala
---
@@ -84,7 +83,7 @@ class YarnSparkHadoopUtil extends
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2363#issuecomment-55299377
[QA tests have
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/20162/consoleFull)
for PR 2363 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2363#issuecomment-55299526
[QA tests have
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/20162/consoleFull)
for PR 2363 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2356#issuecomment-55299387
[QA tests have
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/20163/consoleFull)
for PR 2356 at commit
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/2350#discussion_r17435311
--- Diff:
yarn/alpha/src/main/scala/org/apache/spark/deploy/yarn/Client.scala ---
@@ -45,120 +46,97 @@ class Client(clientArgs: ClientArguments,
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/2350#discussion_r17435604
--- Diff:
yarn/alpha/src/main/scala/org/apache/spark/deploy/yarn/Client.scala ---
@@ -45,120 +46,97 @@ class Client(clientArgs: ClientArguments,
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/2350#discussion_r17435800
--- Diff:
yarn/common/src/main/scala/org/apache/spark/deploy/yarn/ClientArguments.scala
---
@@ -35,28 +34,57 @@ class ClientArguments(val args:
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/2350#discussion_r17436048
--- Diff:
yarn/common/src/main/scala/org/apache/spark/deploy/yarn/ClientBase.scala ---
@@ -417,41 +381,136 @@ trait ClientBase extends Logging {
Github user liancheng commented on the pull request:
https://github.com/apache/spark/pull/2291#issuecomment-55301417
The last build failure was caused by streaming suites.
But I do need to update the data type parsing logic in Python.
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2363#issuecomment-55302217
[QA tests have
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/20164/consoleFull)
for PR 2363 at commit
Github user tgravescs commented on a diff in the pull request:
https://github.com/apache/spark/pull/2363#discussion_r17436782
--- Diff: core/src/main/scala/org/apache/spark/SparkContext.scala ---
@@ -220,8 +220,14 @@ class SparkContext(config: SparkConf) extends Logging {
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2241#issuecomment-55302989
[QA tests have
started](https://amplab.cs.berkeley.edu/jenkins/job/NewSparkPullRequestBuilder/70/consoleFull)
for PR 2241 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2361#issuecomment-55303201
[QA tests have
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/20158/consoleFull)
for PR 2361 at commit
Github user staple commented on the pull request:
https://github.com/apache/spark/pull/2362#issuecomment-55303441
Hi, I implemented this per discussion here
https://github.com/apache/spark/pull/2347#issuecomment-55181535, assuming I
understood the comment correctly. The context is
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/2350#discussion_r17437304
--- Diff:
yarn/common/src/main/scala/org/apache/spark/deploy/yarn/ClientBase.scala ---
@@ -396,19 +352,27 @@ trait ClientBase extends Logging {
}
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/2350#discussion_r17437744
--- Diff:
yarn/common/src/main/scala/org/apache/spark/deploy/yarn/ClientBase.scala ---
@@ -417,41 +381,136 @@ trait ClientBase extends Logging {
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2338#issuecomment-55304894
[QA tests have
finished](https://amplab.cs.berkeley.edu/jenkins/job/NewSparkPullRequestBuilder/69/consoleFull)
for PR 2338 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2144#issuecomment-55305217
[QA tests have
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/20159/consoleFull)
for PR 2144 at commit
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/2350#discussion_r17438506
--- Diff:
yarn/common/src/main/scala/org/apache/spark/deploy/yarn/ClientBase.scala ---
@@ -598,46 +675,44 @@ object ClientBase extends Logging {
*
Github user marmbrus commented on the pull request:
https://github.com/apache/spark/pull/2327#issuecomment-55306364
I need to look this over still, but want to remove WIP?
Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/2327#discussion_r17438553
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/columnar/ColumnAccessor.scala ---
@@ -51,10 +51,12 @@ private[sql] abstract class
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2144#issuecomment-55306714
[QA tests have
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/20161/consoleFull)
for PR 2144 at commit
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/1592#issuecomment-55306732
@staple The Jenkins pull request builder is in an odd state of flux right
now. I've manually re-triggered your build (I should have self-service retest
this please
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/1592#issuecomment-55306927
[QA tests have
started](https://amplab.cs.berkeley.edu/jenkins/job/NewSparkPullRequestBuilder/71/consoleFull)
for PR 1592 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/1592#issuecomment-55307181
[QA tests have
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/20165/consoleFull)
for PR 1592 at commit
Github user staple commented on the pull request:
https://github.com/apache/spark/pull/1592#issuecomment-55307274
@JoshRosen Great, thanks for your help!
Github user marmbrus commented on a diff in the pull request:
https://github.com/apache/spark/pull/2327#discussion_r17439088
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/HiveQuerySuite.scala
---
@@ -295,8 +295,16 @@ class HiveQuerySuite extends
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/2363#discussion_r17439257
--- Diff: core/src/main/scala/org/apache/spark/SparkContext.scala ---
@@ -220,8 +220,14 @@ class SparkContext(config: SparkConf) extends Logging {
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/2350#discussion_r17439313
--- Diff:
yarn/common/src/main/scala/org/apache/spark/scheduler/cluster/YarnClientSchedulerBackend.scala
---
@@ -36,113 +36,114 @@ private[spark] class
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/2350#discussion_r17439372
--- Diff:
yarn/common/src/main/scala/org/apache/spark/scheduler/cluster/YarnClientSchedulerBackend.scala
---
@@ -36,113 +36,114 @@ private[spark] class
Github user marmbrus commented on the pull request:
https://github.com/apache/spark/pull/2327#issuecomment-55308174
Nice speed ups. I think they might be even more pronounced when there are
multiple threads fighting for the GC.
Minor comments only. Will merge after they are
Github user davies commented on the pull request:
https://github.com/apache/spark/pull/2347#issuecomment-55308192
Is it possible to add the cache for the RDD automatically instead of showing a
warning, if the cache is always helpful?
Github user davies commented on the pull request:
https://github.com/apache/spark/pull/2362#issuecomment-55308253
I think you could pick any algorithm that you think will show the most
difference.
For the repeated warning, maybe it's not hard to make it show only once.
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2356#issuecomment-55308654
[QA tests have
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/20163/consoleFull)
for PR 2356 at commit
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/2350#issuecomment-55308825
Hey all - regarding backwards compatibility. I agree we definitely need to
preserve all of the publicly documented interfaces, including environment
variables etc. And
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2363#issuecomment-55309033
[QA tests have
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/20164/consoleFull)
for PR 2363 at commit
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/1951#issuecomment-55309457
This looks good to me, so I'm going to merge it into master. Thanks!
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2363#issuecomment-55309752
[QA tests have
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/20166/consoleFull)
for PR 2363 at commit
Github user sryza commented on the pull request:
https://github.com/apache/spark/pull/2350#issuecomment-55309683
+1 for making Client private. This should go through SparkSubmit, and, as
Patrick mentioned, I'd be surprised if we haven't already broken code that's
relying on it.
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/2350#discussion_r17440654
--- Diff:
yarn/common/src/main/scala/org/apache/spark/scheduler/cluster/YarnClientSchedulerBackend.scala
---
@@ -36,113 +36,114 @@ private[spark] class
Github user liancheng commented on the pull request:
https://github.com/apache/spark/pull/2352#issuecomment-55310300
@chenghao-intel Actually this issue has bothered us for some time, and it
makes the Maven build on Jenkins fail. But we have never reproduced it locally...
Would you mind
Github user marmbrus commented on the pull request:
https://github.com/apache/spark/pull/2226#issuecomment-55310313
@liancheng has a few more style suggestions then we will merge.
Github user brndnmtthws commented on the pull request:
https://github.com/apache/spark/pull/1358#issuecomment-55310679
Yep, also hitting this same problem. We're running Spark 1.0.2 and Mesos
0.20.0.
From a quick analysis, it looks like a bug in Spark.
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/1951
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/2363#issuecomment-55310770
There are two old JIRAs that seem relevant:
- [SPARK-2100](https://issues.apache.org/jira/browse/SPARK-2100): Allow
users to disable Jetty Spark UI in local
Github user marmbrus commented on the pull request:
https://github.com/apache/spark/pull/1846#issuecomment-55310967
Thanks! Merged to master.
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/2363#issuecomment-55311263
Oops, thanks @JoshRosen. Mine seems to be a duplicate.
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/1846
Github user tgravescs commented on the pull request:
https://github.com/apache/spark/pull/2350#issuecomment-55311722
I'm kind of ok with making the Client class private, but the object needs
to stay public for backwards compatibility through spark-class.
Github user shaneknapp closed the pull request at:
https://github.com/apache/spark/pull/2361
Github user jkbradley commented on the pull request:
https://github.com/apache/spark/pull/1330#issuecomment-55312782
@witgo Sorry, I had not realized that this had not been updated since the
discussions. Just tested it, and it worked for me. LGTM
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/2333#discussion_r17442957
--- Diff: core/src/main/scala/org/apache/spark/ui/env/EnvironmentPage.scala
---
@@ -26,6 +29,23 @@ import org.apache.spark.ui.{UIUtils, WebUIPage}
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/2333#discussion_r17443083
--- Diff: core/src/main/scala/org/apache/spark/ui/env/EnvironmentPage.scala
---
@@ -26,6 +29,23 @@ import org.apache.spark.ui.{UIUtils, WebUIPage}
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/2333#discussion_r17443185
--- Diff: core/src/main/scala/org/apache/spark/ui/exec/ExecutorsPage.scala
---
@@ -44,6 +47,30 @@ private case class ExecutorSummaryInfo(
private[ui]
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/2333#discussion_r17443226
--- Diff:
core/src/main/scala/org/apache/spark/ui/jobs/JobProgressPage.scala ---
@@ -31,6 +36,35 @@ private[ui] class JobProgressPage(parent:
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/2333#discussion_r17443244
--- Diff:
core/src/main/scala/org/apache/spark/ui/jobs/JobProgressPage.scala ---
@@ -31,6 +36,35 @@ private[ui] class JobProgressPage(parent:
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/2333#discussion_r17443265
--- Diff:
core/src/main/scala/org/apache/spark/ui/jobs/JobProgressPage.scala ---
@@ -31,6 +36,35 @@ private[ui] class JobProgressPage(parent:
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/2333#discussion_r17443289
--- Diff: core/src/main/scala/org/apache/spark/ui/jobs/PoolPage.scala ---
@@ -30,6 +35,35 @@ private[ui] class PoolPage(parent: JobProgressTab)
extends
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/2333#discussion_r17443318
--- Diff: core/src/main/scala/org/apache/spark/ui/jobs/StagePage.scala ---
@@ -20,17 +20,76 @@ package org.apache.spark.ui.jobs
import java.util.Date
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/2333#discussion_r17443332
--- Diff: core/src/main/scala/org/apache/spark/ui/jobs/StagePage.scala ---
@@ -20,17 +20,76 @@ package org.apache.spark.ui.jobs
import java.util.Date
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/2333#issuecomment-55316029
Hi @sarutak, thanks for working on this feature. There have been requests for
this on the mailing list and it's good to see it being done. There are a lot
of style
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/2363#issuecomment-55316106
@tdas I made a few changes in the streaming code to get this to work; can
you verify?
Github user chutium commented on the pull request:
https://github.com/apache/spark/pull/1612#issuecomment-55317048
Thanks for the review, I will try to improve it soon. Adding more external
data sources is always helpful; then we can use Spark SQL as a data integration
platform, and
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2241#issuecomment-55317095
[QA tests have
finished](https://amplab.cs.berkeley.edu/jenkins/job/NewSparkPullRequestBuilder/70/consoleFull)
for PR 2241 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2363#issuecomment-55317764
[QA tests have
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/20166/consoleFull)
for PR 2363 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/1592#issuecomment-55321104
[QA tests have
finished](https://amplab.cs.berkeley.edu/jenkins/job/NewSparkPullRequestBuilder/71/consoleFull)
for PR 1592 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/1592#issuecomment-55321504
[QA tests have
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/20165/consoleFull)
for PR 1592 at commit
GitHub user yhuai opened a pull request:
https://github.com/apache/spark/pull/2364
[SPARK-3390][SQL] sqlContext.jsonRDD fails on a complex structure of JSON
array and JSON object nesting
This PR aims to correctly handle JSON arrays in the type of
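The failure mode in SPARK-3390 involves JSON arrays whose object elements do not all carry the same fields, so `jsonRDD` must infer a single schema covering every element. A standard-library sketch of the shape involved and of the key-union an inferencer must compute (the records are invented for illustration; this is not Spark's actual inference code):

```python
import json

# Hypothetical records mixing arrays of objects with nested arrays.
records = [
    '{"a": [{"b": 1}, {"b": 2, "c": [1, 2]}]}',
    '{"a": [{"b": 3, "c": []}]}',
]

def element_keys(rows):
    """Union of keys seen across every object inside every "a" array --
    the set of fields a merged element schema must cover."""
    keys = set()
    for raw in rows:
        for obj in json.loads(raw)["a"]:
            keys.update(obj)
    return sorted(keys)

print(element_keys(records))  # -> ['b', 'c']
```

Real schema inference must also reconcile the *types* of each field (e.g. `c` as an array of integers even when one record shows it empty), which is where nested arrays and objects get tricky.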
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2364#issuecomment-55322663
[QA tests have
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/20167/consoleFull)
for PR 2364 at commit
Github user vanzin commented on the pull request:
https://github.com/apache/spark/pull/2350#issuecomment-55326314
@andrewor14 thanks for working on this; it was next on my "things to
clean up when I find some time" list. :-) Didn't see anything too controversial
aside from what has
Github user vanzin commented on the pull request:
https://github.com/apache/spark/pull/2320#issuecomment-55327730
@huozhanfeng I don't think there's any way to transfer files securely to
workers right now. Perhaps a mode where the launcher / driver uses HDFS to
distribute files
Github user tgravescs commented on a diff in the pull request:
https://github.com/apache/spark/pull/2350#discussion_r17449666
--- Diff:
yarn/common/src/main/scala/org/apache/spark/deploy/yarn/ClientArguments.scala
---
@@ -35,28 +34,57 @@ class ClientArguments(val args:
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/2350#issuecomment-55330568
Thanks @vanzin and @tgravescs for looking at this. I filed a parent
umbrella JIRA at [SPARK-3492](https://issues.apache.org/jira/browse/SPARK-3492)
to group these
Github user tdas commented on the pull request:
https://github.com/apache/spark/pull/2363#issuecomment-55331079
LGTM from the streaming UI point of view. Though I would run the ignored UI
tests locally to make sure we haven't broken anything.
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2363#issuecomment-55331068
[QA tests have
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/20168/consoleFull)
for PR 2363 at commit
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/2338#issuecomment-55333047
LGTM
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/2338#discussion_r17451124
--- Diff: core/src/main/scala/org/apache/spark/executor/Executor.scala ---
@@ -360,7 +360,13 @@ private[spark] class Executor(
if
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/2346#issuecomment-55333468
Yup, LGTM
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/1558#issuecomment-55333925
retest this please
Github user sryza commented on the pull request:
https://github.com/apache/spark/pull/2338#issuecomment-55334340
I don't have any great ideas for how to write a test for it, but this looks
good to me as well.
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/1558#issuecomment-55334577
Hi @tsudukim, how does the user see the incomplete applications? As @vanzin
suggested, the semantics of a history server are that it displays completed
applications
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/1558#issuecomment-55334702
QA tests have started for PR 1558. This patch DID NOT merge cleanly!
View progress:
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/1558#issuecomment-55334841
QA results for PR 1558:
- This patch FAILED unit tests.
For more information see test
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/1558#issuecomment-55334845
Also, looks like this has merge conflicts. It would be great if you could
rebase to master. Thanks!
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/2336#discussion_r17452035
--- Diff:
core/src/main/scala/org/apache/spark/ui/jobs/JobProgressListener.scala ---
@@ -242,7 +242,8 @@ class JobProgressListener(conf: SparkConf)
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/2336#discussion_r17452130
--- Diff: python/pyspark/shuffle.py ---
@@ -68,6 +68,11 @@ def _get_local_dirs(sub):
return [os.path.join(d, python, str(os.getpid()), sub) for d
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/2336#discussion_r17452140
--- Diff: python/pyspark/shuffle.py ---
@@ -486,15 +496,18 @@ def sorted(self, iterator, key=None, reverse=False):
if len(chunk) < batch:
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/918#issuecomment-55335538
Interesting idea. @kalpit have you looked into using accumulators for
custom metrics instead? See #1309.