GitHub user mengxr opened a pull request:
https://github.com/apache/spark/pull/4390
[SPARK-5604] remove checkpointDir from LDA
`checkpointDir` is a Spark global configuration. Users should set it
outside LDA. This PR also hides some methods under `private[clustering] object
LDA`,
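The change described above can be illustrated with a short, hypothetical sketch (it assumes a Spark dependency on the classpath; the app name and checkpoint path are placeholders): the checkpoint directory is set once on the `SparkContext`, rather than passed to `LDA`.

```scala
// Hedged sketch: set the checkpoint directory globally on the SparkContext
// instead of configuring it per-algorithm. Requires a Spark dependency;
// the checkpoint path below is a placeholder.
import org.apache.spark.{SparkConf, SparkContext}

object CheckpointDirSketch {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("lda-sketch").setMaster("local[*]")
    val sc = new SparkContext(conf)
    // Global setting: algorithms that checkpoint (such as LDA) pick this up
    // from the context rather than taking their own checkpointDir parameter.
    sc.setCheckpointDir("/tmp/spark-checkpoints")
    sc.stop()
  }
}
```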
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4068#issuecomment-73009841
[Test build #26833 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/26833/consoleFull)
for PR 4068 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4389#issuecomment-73010186
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4389#issuecomment-73010179
[Test build #26830 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/26830/consoleFull)
for PR 4389 at commit
Github user liancheng commented on a diff in the pull request:
https://github.com/apache/spark/pull/4356#discussion_r24148995
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/TableReader.scala ---
@@ -207,13 +217,22 @@ class HadoopTableReader(
* If `filterOpt`
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4347#issuecomment-73008387
[Test build #26832 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/26832/consoleFull)
for PR 4347 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4347#issuecomment-73009123
[Test build #26832 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/26832/consoleFull)
for PR 4347 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4347#issuecomment-73009125
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4390#issuecomment-73010318
[Test build #26834 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/26834/consoleFull)
for PR 4390 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4384#issuecomment-73180459
[Test build #26888 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/26888/consoleFull)
for PR 4384 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4384#issuecomment-73180462
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4216#issuecomment-73182193
[Test build #26892 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/26892/consoleFull)
for PR 4216 at commit
Github user ksakellis commented on the pull request:
https://github.com/apache/spark/pull/4067#issuecomment-73182058
Jenkins, retest this please
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4384#issuecomment-73182394
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4416#issuecomment-73182404
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4384#issuecomment-73182388
[Test build #26891 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/26891/consoleFull)
for PR 4384 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4416#issuecomment-73182398
[Test build #26890 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/26890/consoleFull)
for PR 4416 at commit
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/4216#issuecomment-73183285
@andrewor14 okay I think this time you are causing the test failure :)
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4373#issuecomment-73183514
[Test build #26893 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/26893/consoleFull)
for PR 4373 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4373#issuecomment-73183518
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/spark/pull/4067#discussion_r24223212
--- Diff:
core/src/main/scala/org/apache/spark/storage/BlockObjectWriter.scala ---
@@ -193,12 +194,11 @@ private[spark] class DiskBlockObjectWriter(
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/spark/pull/4067#discussion_r24223198
--- Diff: core/src/main/scala/org/apache/spark/executor/TaskMetrics.scala
---
@@ -358,5 +374,12 @@ class ShuffleWriteMetrics extends Serializable {
Github user scwf commented on the pull request:
https://github.com/apache/spark/pull/3249#issuecomment-73187723
LGTM.
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4419#issuecomment-73188654
[Test build #26901 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/26901/consoleFull)
for PR 4419 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4216#issuecomment-73188670
[Test build #26902 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/26902/consoleFull)
for PR 4216 at commit
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/spark/pull/4067#discussion_r24225336
--- Diff: core/src/main/scala/org/apache/spark/executor/TaskMetrics.scala
---
@@ -334,6 +342,14 @@ class ShuffleReadMetrics extends Serializable {
*
Github user viper-kun commented on the pull request:
https://github.com/apache/spark/pull/4418#issuecomment-73190743
@srowen
OK. If it is useful later, we should change it like this:
def hasShutdownDeleteTachyonDir(file: TachyonFile): Boolean = {
  val absolutePath = file.getPath()
  shutdownDeleteTachyonPaths.synchronized { shutdownDeleteTachyonPaths.contains(absolutePath) }
}
Github user mengxr commented on the pull request:
https://github.com/apache/spark/pull/4401#issuecomment-73197096
Did you follow the `docs/README.md`?
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/4407
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4216#issuecomment-73198179
[Test build #26903 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/26903/consoleFull)
for PR 4216 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4216#issuecomment-73198185
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/3637
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4216#issuecomment-73178748
[Test build #26892 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/26892/consoleFull)
for PR 4216 at commit
Github user florianverhein commented on a diff in the pull request:
https://github.com/apache/spark/pull/4385#discussion_r24221789
--- Diff: ec2/spark_ec2.py ---
@@ -145,6 +145,14 @@ def parse_args():
default=DEFAULT_SPARK_GITHUB_REPO,
help=Github repo
Github user chenghao-intel commented on a diff in the pull request:
https://github.com/apache/spark/pull/4289#discussion_r24222130
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/TableReader.scala ---
@@ -315,9 +335,17 @@ private[hive] object HadoopTableReader extends
Github user chenghao-intel commented on a diff in the pull request:
https://github.com/apache/spark/pull/4289#discussion_r24222165
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/TableReader.scala ---
@@ -315,9 +335,17 @@ private[hive] object HadoopTableReader extends
Github user shivaram commented on the pull request:
https://github.com/apache/spark/pull/4385#issuecomment-73185820
Thanks @florianverhein for the change - This is a pretty useful change as I
often modify these variables inline for my experiments.
@nchammas @JoshRosen could
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4415#issuecomment-73185836
[Test build #26898 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/26898/consoleFull)
for PR 4415 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4419#issuecomment-73185880
[Test build #26899 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/26899/consoleFull)
for PR 4419 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4419#issuecomment-73185833
[Test build #26899 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/26899/consoleFull)
for PR 4419 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/3249#issuecomment-73187137
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3249#issuecomment-73187135
[Test build #26897 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/26897/consoleFull)
for PR 3249 at commit
Github user ksakellis commented on a diff in the pull request:
https://github.com/apache/spark/pull/4067#discussion_r24223586
--- Diff:
core/src/main/scala/org/apache/spark/storage/BlockObjectWriter.scala ---
@@ -193,12 +194,11 @@ private[spark] class DiskBlockObjectWriter(
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3486#issuecomment-73185071
[Test build #26894 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/26894/consoleFull)
for PR 3486 at commit
Github user chenghao-intel commented on the pull request:
https://github.com/apache/spark/pull/4382#issuecomment-73186905
@mallman this PR aims exactly to fix the bug you mentioned, and it passed the tests on my local machine. However, I am still figuring out some of the unit
Github user ksakellis commented on a diff in the pull request:
https://github.com/apache/spark/pull/4409#discussion_r24224468
--- Diff:
core/src/main/scala/org/apache/spark/ExecutorAllocationManager.scala ---
@@ -78,7 +78,7 @@ private[spark] class ExecutorAllocationManager(
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/4418#issuecomment-73187803
These do appear unused, at the moment, but what's the need to delete them?
They could plausibly be useful later; it's not completely useless code.
(Normally changes need
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4216#issuecomment-73189047
[Test build #26903 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/26903/consoleFull)
for PR 4216 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4413#issuecomment-73177683
[Test build #26882 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/26882/consoleFull)
for PR 4413 at commit
GitHub user lazyman500 opened a pull request:
https://github.com/apache/spark/pull/4417
[SPARK-5155] [PySpark]
add examples for PySpark
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/lazyman500/spark SPARK-5616
Alternatively
GitHub user viper-kun opened a pull request:
https://github.com/apache/spark/pull/4418
remove unused function
`hasShutdownDeleteTachyonDir(file: TachyonFile)` should use
`shutdownDeleteTachyonPaths` (not `shutdownDeletePaths`) to determine whether it
contains the file. To solve it, delete two
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4410#issuecomment-73178269
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/3486#issuecomment-73178283
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user chenghao-intel commented on a diff in the pull request:
https://github.com/apache/spark/pull/4289#discussion_r24221959
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/TableReader.scala ---
@@ -264,15 +268,31 @@ private[hive] object HadoopTableReader extends
Github user florianverhein commented on a diff in the pull request:
https://github.com/apache/spark/pull/4385#discussion_r24224925
--- Diff: ec2/spark_ec2.py ---
@@ -1007,6 +1022,14 @@ def real_main():
print >> stderr, "ebs-vol-num cannot be greater than 8"
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4067#issuecomment-73196678
[Test build #26906 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/26906/consoleFull)
for PR 4067 at commit
Github user mengxr commented on the pull request:
https://github.com/apache/spark/pull/3637#issuecomment-73198344
LGTM. Merged into master and branch-1.3. Thanks everyone for the
discussion! @jkbradley We can remove mima excludes in another PR.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/3637#issuecomment-73177838
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4416#issuecomment-73177847
[Test build #26890 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/26890/consoleFull)
for PR 4416 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3637#issuecomment-73177831
[Test build #26885 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/26885/consoleFull)
for PR 3637 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4417#issuecomment-73178429
Can one of the admins verify this patch?
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4418#issuecomment-73178427
Can one of the admins verify this patch?
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4419#issuecomment-73181007
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4419#issuecomment-73181005
[Test build #26895 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/26895/consoleFull)
for PR 4419 at commit
Github user deeppradhan commented on the pull request:
https://github.com/apache/spark/pull/3619#issuecomment-73182475
Is this for undirected graphs or directed graphs?
I ran it on directed graphs and my answers do not match.
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4067#issuecomment-73186608
[Test build #26896 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/26896/consoleFull)
for PR 4067 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4067#issuecomment-73186614
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user mingyukim commented on the pull request:
https://github.com/apache/spark/pull/4420#issuecomment-73187536
This is following up @andrewor14's comments on #3656. It makes the
threshold and frequency configurable rather than completely removing them.
Please let me know if I
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4015#issuecomment-73192145
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user mengxr commented on the pull request:
https://github.com/apache/spark/pull/4407#issuecomment-73197053
Merged into master and branch-1.3.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4414#issuecomment-73179443
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4414#issuecomment-73179437
[Test build #26887 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/26887/consoleFull)
for PR 4414 at commit
GitHub user mingyukim opened a pull request:
https://github.com/apache/spark/pull/4420
[SPARK-4808] Configurable spillable memory threshold + sampling rate
In the general case, Spillable's heuristic of checking for memory stress
on every 32nd item after 1000 items are read is
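The heuristic described above can be sketched in plain Scala. The names here (`SpillSampler`, `initialThreshold`, `sampleRate`) are illustrative placeholders, not Spark's actual API; the sketch only shows the counting logic being made configurable.

```scala
// Illustrative sketch of the sampling heuristic described above: after an
// initial warm-up of `initialThreshold` elements, memory is checked on every
// `sampleRate`-th element read. Names are hypothetical, not Spark's API.
class SpillSampler(initialThreshold: Long = 1000L, sampleRate: Long = 32L) {
  private var elementsRead = 0L

  /** Record one element read; return true when memory should be checked. */
  def shouldCheckMemory(): Boolean = {
    elementsRead += 1
    elementsRead > initialThreshold && elementsRead % sampleRate == 0
  }
}
```

With the default values, reading 2000 elements triggers 31 memory checks (at elements 1024, 1056, ..., 1984); making both numbers configurable lets callers trade check frequency against spill responsiveness.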
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/spark/pull/4067#discussion_r24224638
--- Diff: core/src/main/scala/org/apache/spark/executor/TaskMetrics.scala
---
@@ -358,5 +374,12 @@ class ShuffleWriteMetrics extends Serializable {
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/spark/pull/4067#discussion_r24224673
--- Diff: core/src/main/scala/org/apache/spark/ui/jobs/StagePage.scala ---
@@ -472,12 +512,12 @@ private[ui] class StagePage(parent: StagesTab)
extends
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/spark/pull/4067#discussion_r24224655
--- Diff:
core/src/main/scala/org/apache/spark/storage/BlockObjectWriter.scala ---
@@ -193,12 +194,11 @@ private[spark] class DiskBlockObjectWriter(
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/spark/pull/4067#discussion_r24225268
--- Diff:
core/src/main/scala/org/apache/spark/shuffle/hash/BlockStoreShuffleFetcher.scala
---
@@ -25,7 +25,7 @@ import org.apache.spark._
import
Github user Sephiroth-Lin commented on the pull request:
https://github.com/apache/spark/pull/4412#issuecomment-73190453
@srowen we run a process as a service that does not stop. In this service
process we create a SparkContext, run a job, and then stop it, because we
only call
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4415#issuecomment-73190540
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4415#issuecomment-73190533
[Test build #26898 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/26898/consoleFull)
for PR 4415 at commit
Github user rxin commented on the pull request:
https://github.com/apache/spark/pull/4415#issuecomment-73192625
Is it possible to add a unit test?
Github user rxin commented on the pull request:
https://github.com/apache/spark/pull/4415#issuecomment-73192723
(I understand unit test coverage for the optimizer is pretty low, but it
would be great to increase it.)
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4249#issuecomment-73195127
[Test build #26905 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/26905/consoleFull)
for PR 4249 at commit
Github user chenghao-intel commented on the pull request:
https://github.com/apache/spark/pull/4015#issuecomment-73195076
@marmbrus can you review the code for me?
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/4216#issuecomment-73178600
Thanks @JoshRosen, good to know I'm not causing them.
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4067#issuecomment-73182187
[Test build #26896 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/26896/consoleFull)
for PR 4067 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4216#issuecomment-73182199
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user florianverhein commented on a diff in the pull request:
https://github.com/apache/spark/pull/4385#discussion_r24222907
--- Diff: ec2/spark_ec2.py ---
@@ -643,12 +654,14 @@ def setup_cluster(conn, master_nodes, slave_nodes,
opts, deploy_ssh_key):
# NOTE:
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/4216#issuecomment-73184662
I did a quick pass, this is looking good, but there are some comments on
the JIRA worth addressing.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4420#issuecomment-73187302
Can one of the admins verify this patch?
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/4410
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4015#issuecomment-73192132
[Test build #26900 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/26900/consoleFull)
for PR 4015 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4419#issuecomment-73197604
[Test build #26901 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/26901/consoleFull)
for PR 4419 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3249#issuecomment-73182749
[Test build #26897 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/26897/consoleFull)
for PR 3249 at commit
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/spark/pull/4216#discussion_r24222835
--- Diff:
core/src/main/scala/org/apache/spark/deploy/SparkSubmitArguments.scala ---
@@ -56,8 +55,16 @@ private[spark] class SparkSubmitArguments(args:
Github user florianverhein commented on a diff in the pull request:
https://github.com/apache/spark/pull/4385#discussion_r24222849
--- Diff: ec2/spark_ec2.py ---
@@ -643,12 +654,14 @@ def setup_cluster(conn, master_nodes, slave_nodes,
opts, deploy_ssh_key):
# NOTE:
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/3486#issuecomment-73185077
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user florianverhein commented on the pull request:
https://github.com/apache/spark/pull/4385#issuecomment-73185215
Thanks for prompt feedback @nchammas. Much appreciated.
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4015#issuecomment-73186446
[Test build #26900 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/26900/consoleFull)
for PR 4015 at commit
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/4411#issuecomment-73186449
OK, but isn't it more straightforward to at least depend on the real servlet
API artifact? This is just Jetty's copy of it. Maybe it's just fine, or
necessary for a reason I