Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4708#issuecomment-75681861
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4644#issuecomment-75687204
[Test build #27878 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/27878/consoleFull)
for PR 4644 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4738#issuecomment-75687188
[Test build #27877 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/27877/consoleFull)
for PR 4738 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4738#issuecomment-75694188
[Test build #27877 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/27877/consoleFull)
for PR 4738 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4738#issuecomment-75694192
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4644#issuecomment-75694306
[Test build #27876 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/27876/consoleFull)
for PR 4644 at commit
Github user hellertime commented on the pull request:
https://github.com/apache/spark/pull/3881#issuecomment-75695177
OK. The JIRA ticket has been filed, and I have noted the ticket in the title of this PR.
Happy to add the additional comment.
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/3881#issuecomment-75698769
Wow, very thorough review :)
Let me whitelist this PR with Jenkins so that it gets tested when you push
changes.
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/4710
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4644#issuecomment-75682662
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4644#issuecomment-75682655
[Test build #27869 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/27869/consoleFull)
for PR 4644 at commit
Github user nchammas commented on a diff in the pull request:
https://github.com/apache/spark/pull/3881#discussion_r25227849
--- Diff: sbin/spark-daemon.sh ---
@@ -141,24 +151,36 @@ case $option in
rsync -a -e ssh --delete --exclude=.svn --exclude='logs/*'
Github user tdas commented on the pull request:
https://github.com/apache/spark/pull/4723#issuecomment-75698289
Yes, python version of createRDD would be great.
BTW, is it possible to mark these experimental in Python @davies? The
Scala and Java APIs are experimental as of now.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4688#issuecomment-75699287
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/4736#discussion_r25228631
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/SqlParser.scala ---
@@ -40,7 +40,7 @@ class SqlParser extends AbstractSparkSQLParser {
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4688#issuecomment-75699283
[Test build #27879 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/27879/consoleFull)
for PR 4688 at commit
Github user marmbrus commented on the pull request:
https://github.com/apache/spark/pull/4681#issuecomment-75681561
Merged to branch-1.2
@davies can you close this issue?
Github user viirya commented on the pull request:
https://github.com/apache/spark/pull/4706#issuecomment-75689302
@mengxr I see. Thanks.
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4688#issuecomment-75693115
[Test build #27879 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/27879/consoleFull)
for PR 4688 at commit
Github user mccheah commented on a diff in the pull request:
https://github.com/apache/spark/pull/4106#discussion_r25227412
--- Diff: core/src/main/scala/org/apache/spark/rdd/HadoopRDD.scala ---
@@ -193,17 +193,21 @@ class HadoopRDD[K, V](
override def getPartitions:
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4716#issuecomment-75695983
[Test build #27880 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/27880/consoleFull)
for PR 4716 at commit
Github user nchammas commented on a diff in the pull request:
https://github.com/apache/spark/pull/3881#discussion_r25227794
--- Diff: sbin/spark-daemon.sh ---
@@ -141,24 +151,36 @@ case $option in
rsync -a -e ssh --delete --exclude=.svn --exclude='logs/*'
Github user nchammas commented on a diff in the pull request:
https://github.com/apache/spark/pull/3881#discussion_r25227801
--- Diff: sbin/spark-daemon.sh ---
@@ -141,24 +151,36 @@ case $option in
rsync -a -e ssh --delete --exclude=.svn --exclude='logs/*'
Github user nchammas commented on a diff in the pull request:
https://github.com/apache/spark/pull/3881#discussion_r25227789
--- Diff: sbin/spark-daemon.sh ---
@@ -141,24 +151,36 @@ case $option in
rsync -a -e ssh --delete --exclude=.svn --exclude='logs/*'
Github user judynash commented on the pull request:
https://github.com/apache/spark/pull/4644#issuecomment-75699068
Fixed the merge issue from the earlier pushes. The test has passed on the latest
clean push. Addressed the feedback. Thanks, Owen, for reviewing.
Github user JoshRosen commented on a diff in the pull request:
https://github.com/apache/spark/pull/4021#discussion_r25215797
--- Diff: core/src/main/scala/org/apache/spark/Accumulators.scala ---
@@ -320,7 +334,13 @@ private[spark] object Accumulators {
def add(values:
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4644#issuecomment-75670850
[Test build #27869 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/27869/consoleFull)
for PR 4644 at commit
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/4684
Github user davies closed the pull request at:
https://github.com/apache/spark/pull/4681
Github user davies commented on the pull request:
https://github.com/apache/spark/pull/4681#issuecomment-75682235
Thanks!
Github user brkyvz commented on the pull request:
https://github.com/apache/spark/pull/4737#issuecomment-75684685
LGTM. I thought it would be nice to show how people can go back to
`RowMatrix` and call SVD after some operations, but we can keep it simple. I
apologize for missing the
Github user liancheng commented on the pull request:
https://github.com/apache/spark/pull/3881#issuecomment-75686553
Noticed that the indentation of `spark-daemon.sh` is a bit messy. But in
general, we use 2-space indentation in shell scripts. Shall we at least use
2-space indentation
Github user viirya commented on the pull request:
https://github.com/apache/spark/pull/4719#issuecomment-75687550
@marmbrus I agree that that is the purpose of turning off eager analysis. But
how can we do that when we want to debug the queries of these commands and queries
with side
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4737#issuecomment-75690101
[Test build #27873 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/27873/consoleFull)
for PR 4737 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4737#issuecomment-75690108
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user tdas commented on a diff in the pull request:
https://github.com/apache/spark/pull/2765#discussion_r25226058
--- Diff:
streaming/src/test/java/org/apache/spark/streaming/JavaAPISuite.java ---
@@ -1739,7 +1739,11 @@ public Integer call(String s) throws Exception {
Github user ilganeli commented on the pull request:
https://github.com/apache/spark/pull/4708#issuecomment-75691936
What happened with this test? Seems like it failed to fetch from git?
Github user nchammas commented on a diff in the pull request:
https://github.com/apache/spark/pull/3881#discussion_r25227834
--- Diff: sbin/spark-daemon.sh ---
@@ -141,24 +151,36 @@ case $option in
rsync -a -e ssh --delete --exclude=.svn --exclude='logs/*'
Github user nchammas commented on a diff in the pull request:
https://github.com/apache/spark/pull/3881#discussion_r25227854
--- Diff: sbin/spark-daemon.sh ---
@@ -141,24 +151,36 @@ case $option in
rsync -a -e ssh --delete --exclude=.svn --exclude='logs/*'
Github user nchammas commented on the pull request:
https://github.com/apache/spark/pull/3881#issuecomment-75697392
Sorry to spam this PR. Most of the word splitting issues are in code this
PR didn't introduce, but since we're touching it here it's good to fix it up.
Hope it's not
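The word-splitting issues raised in these review comments can be shown with a minimal sketch. This is a hypothetical illustration, not code from `spark-daemon.sh`: in POSIX shell, an unquoted variable expansion whose value contains whitespace is split into multiple arguments, while a quoted expansion stays a single argument.

```shell
# Illustration (not from the PR) of shell word splitting.
dir="spark logs"

# Helper that prints how many arguments it received.
count_args() { echo $#; }

count_args $dir     # unquoted: the value splits into 2 arguments
count_args "$dir"   # quoted: the value is passed as 1 argument
```

This is why reviews of lines like the `rsync` invocation above generally ask for `"$var"` rather than bare `$var`.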
Github user davies commented on the pull request:
https://github.com/apache/spark/pull/4723#issuecomment-75706796
We can mark it as experimental by
```
.. note:: Experimental
```
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4729#issuecomment-75709019
[Test build #27883 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/27883/consoleFull)
for PR 4729 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4712#issuecomment-75709018
[Test build #27884 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/27884/consoleFull)
for PR 4712 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4644#issuecomment-75677342
[Test build #27867 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/27867/consoleFull)
for PR 4644 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4644#issuecomment-75677364
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4735#issuecomment-75681137
[Test build #27868 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/27868/consoleFull)
for PR 4735 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4735#issuecomment-75681145
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4737#issuecomment-75683443
[Test build #27873 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/27873/consoleFull)
for PR 4737 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4736#issuecomment-75683446
[Test build #27874 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/27874/consoleFull)
for PR 4736 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4738#issuecomment-75686338
[Test build #27875 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/27875/consoleFull)
for PR 4738 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4688#issuecomment-75686362
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4688#issuecomment-75686353
[Test build #27872 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/27872/consoleFull)
for PR 4688 at commit
Github user viirya closed the pull request at:
https://github.com/apache/spark/pull/4719
Github user viirya commented on the pull request:
https://github.com/apache/spark/pull/4719#issuecomment-75689024
okay, I got it. Thanks for explaining!
Github user marmbrus commented on the pull request:
https://github.com/apache/spark/pull/4719#issuecomment-75689063
No problem!
Github user nchammas commented on a diff in the pull request:
https://github.com/apache/spark/pull/3881#discussion_r25227618
--- Diff: sbin/spark-daemon.sh ---
@@ -141,24 +151,36 @@ case $option in
rsync -a -e ssh --delete --exclude=.svn --exclude='logs/*'
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/4504#discussion_r25232685
--- Diff: mllib/src/main/scala/org/apache/spark/ml/feature/Tokenizer.scala
---
@@ -39,3 +39,66 @@ class Tokenizer extends UnaryTransformer[String,
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/4504#discussion_r25232690
--- Diff: mllib/src/main/scala/org/apache/spark/ml/feature/Tokenizer.scala
---
@@ -39,3 +39,66 @@ class Tokenizer extends UnaryTransformer[String,
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/4504#discussion_r25232686
--- Diff: mllib/src/main/scala/org/apache/spark/ml/feature/Tokenizer.scala
---
@@ -39,3 +39,66 @@ class Tokenizer extends UnaryTransformer[String,
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/4504#discussion_r25232692
--- Diff: mllib/src/main/scala/org/apache/spark/ml/feature/Tokenizer.scala
---
@@ -39,3 +39,66 @@ class Tokenizer extends UnaryTransformer[String,
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4740#issuecomment-75709407
[Test build #27885 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/27885/consoleFull)
for PR 4740 at commit
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/4504#discussion_r25232704
--- Diff: mllib/src/main/scala/org/apache/spark/ml/feature/Tokenizer.scala
---
@@ -39,3 +39,66 @@ class Tokenizer extends UnaryTransformer[String,
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4741#issuecomment-75711448
[Test build #27886 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/27886/consoleFull)
for PR 4741 at commit
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/4504#discussion_r25232703
--- Diff: mllib/src/main/scala/org/apache/spark/ml/feature/Tokenizer.scala
---
@@ -39,3 +39,66 @@ class Tokenizer extends UnaryTransformer[String,
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4739#issuecomment-75711745
[Test build #27882 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/27882/consoleFull)
for PR 4739 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4739#issuecomment-75711751
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/4504#discussion_r25232681
--- Diff: mllib/src/main/scala/org/apache/spark/ml/feature/Tokenizer.scala
---
@@ -39,3 +39,66 @@ class Tokenizer extends UnaryTransformer[String,
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/4504#discussion_r25232679
--- Diff: mllib/src/main/scala/org/apache/spark/ml/feature/Tokenizer.scala
---
@@ -39,3 +39,66 @@ class Tokenizer extends UnaryTransformer[String,
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/4504#discussion_r25232691
--- Diff: mllib/src/main/scala/org/apache/spark/ml/feature/Tokenizer.scala
---
@@ -39,3 +39,66 @@ class Tokenizer extends UnaryTransformer[String,
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/4504#discussion_r25232693
--- Diff: mllib/src/main/scala/org/apache/spark/ml/feature/Tokenizer.scala
---
@@ -39,3 +39,66 @@ class Tokenizer extends UnaryTransformer[String,
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/4504#discussion_r25232697
--- Diff: mllib/src/main/scala/org/apache/spark/ml/feature/Tokenizer.scala
---
@@ -39,3 +39,66 @@ class Tokenizer extends UnaryTransformer[String,
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/4504#discussion_r25232695
--- Diff: mllib/src/main/scala/org/apache/spark/ml/feature/Tokenizer.scala
---
@@ -39,3 +39,66 @@ class Tokenizer extends UnaryTransformer[String,
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/4504#discussion_r25232699
--- Diff: mllib/src/main/scala/org/apache/spark/ml/feature/Tokenizer.scala
---
@@ -39,3 +39,66 @@ class Tokenizer extends UnaryTransformer[String,
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/4504#discussion_r25232683
--- Diff: mllib/src/main/scala/org/apache/spark/ml/feature/Tokenizer.scala
---
@@ -39,3 +39,66 @@ class Tokenizer extends UnaryTransformer[String,
Github user mengxr commented on the pull request:
https://github.com/apache/spark/pull/4504#issuecomment-75710113
@aborsu985 I made a pass on the code. Besides my inline comments, please
add a unit test. It would be better if you could also add a Java unit test.
Thanks!
GitHub user tdas opened a pull request:
https://github.com/apache/spark/pull/4741
[SPARK-5967] [UI] Correctly clean JobProgressListener.stageIdToActiveJobIds
Patch should be self-explanatory
@pwendell @JoshRosen
You can merge this pull request into a Git repository by running:
Github user tedyu commented on the pull request:
https://github.com/apache/spark/pull/4021#issuecomment-75666737
Not yet.
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/4732
Github user marmbrus commented on the pull request:
https://github.com/apache/spark/pull/4719#issuecomment-75681431
I'm going to disagree here. I think that actions should always be eager.
The ability to turn off eager analysis is really just for developers that
want to see
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4688#issuecomment-75682923
[Test build #27872 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/27872/consoleFull)
for PR 4688 at commit
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/4576#issuecomment-75684490
Ping
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4644#issuecomment-75694313
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user nchammas commented on a diff in the pull request:
https://github.com/apache/spark/pull/3881#discussion_r25227674
--- Diff: sbin/spark-daemon.sh ---
@@ -141,24 +151,36 @@ case $option in
rsync -a -e ssh --delete --exclude=.svn --exclude='logs/*'
Github user nchammas commented on a diff in the pull request:
https://github.com/apache/spark/pull/3881#discussion_r25227686
--- Diff: sbin/spark-daemon.sh ---
@@ -141,24 +151,36 @@ case $option in
rsync -a -e ssh --delete --exclude=.svn --exclude='logs/*'
Github user nchammas commented on the pull request:
https://github.com/apache/spark/pull/3881#issuecomment-75701190
On a separate, less important note, is there an easy way to add the new
command line parameter in a position-independent way?
Right now it looks like
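One common way to make a new flag position-independent in a POSIX shell script is `getopts`. The sketch below is hypothetical: the option letters `-F` (foreground) and `-i` (instance number) are invented for illustration and are not `spark-daemon.sh`'s real interface.

```shell
# Hypothetical sketch of position-independent option parsing with getopts.
parse_daemon_opts() {
  OPTIND=1          # reset between calls so the function is reusable
  foreground=0
  instance=1
  while getopts "Fi:" opt "$@"; do
    case "$opt" in
      F) foreground=1 ;;
      i) instance="$OPTARG" ;;
    esac
  done
  echo "$foreground $instance"
}

parse_daemon_opts -F -i 2   # prints "1 2"
parse_daemon_opts -i 2 -F   # same result: flag order does not matter
```

With `getopts`, options may appear in any order before the positional arguments, which avoids the position-sensitive parsing being discussed.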
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4739#issuecomment-75707484
[Test build #27882 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/27882/consoleFull)
for PR 4739 at commit
Github user mingyukim commented on the pull request:
https://github.com/apache/spark/pull/4420#issuecomment-75658190
That's fine with us. I'm not entirely familiar with the branching model
here, but does that mean this will be merged into branch-1.3 after 1.3.0 is
released, and
Github user florianverhein commented on the pull request:
https://github.com/apache/spark/pull/4583#issuecomment-75659457
Have now launched clusters with and without trailing / too -- looks good.
ping @shivaram
Github user judynash commented on a diff in the pull request:
https://github.com/apache/spark/pull/4644#discussion_r25213651
--- Diff: docs/monitoring.md ---
@@ -176,6 +176,7 @@ Each instance can report to zero or more _sinks_. Sinks
are contained in the
* `JmxSink`:
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/4644#discussion_r25213794
--- Diff: docs/monitoring.md ---
@@ -176,6 +176,7 @@ Each instance can report to zero or more _sinks_. Sinks
are contained in the
* `JmxSink`: Registers
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4733#issuecomment-75661002
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4733#issuecomment-75660988
[Test build #27863 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/27863/consoleFull)
for PR 4733 at commit
Github user brennonyork commented on the pull request:
https://github.com/apache/spark/pull/4733#issuecomment-75661998
@srowen is there any way to re-run this test without the MiMa tests? The
entire point of this PR is to change the public methods even though they
wouldn't affect
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/3590#discussion_r25214428
--- Diff:
yarn/src/main/scala/org/apache/spark/scheduler/cluster/YarnClientSchedulerBackend.scala
---
@@ -78,11 +79,25 @@ private[spark] class
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/4720#issuecomment-75663170
If it won't take too much time, I suppose it would be cleaner to add the
foreground flag and merge that other PR into all of the maintenance branches.
We can always
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4734#issuecomment-75663501
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4734#issuecomment-75663470
[Test build #27864 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/27864/consoleFull)
for PR 4734 at commit
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/3881#issuecomment-75663481
Hi @hellertime,
A really good use-case for this PR has just cropped up at #4720; I think
that the `--foreground` flag will make it easier to write tests for
Github user judynash commented on a diff in the pull request:
https://github.com/apache/spark/pull/4644#discussion_r25215176
--- Diff: docs/monitoring.md ---
@@ -176,6 +176,7 @@ Each instance can report to zero or more _sinks_. Sinks
are contained in the
* `JmxSink`: