Github user viper-kun closed the pull request at:
https://github.com/apache/spark/pull/4307
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/4344#issuecomment-72779281
LGTM too
Github user tdas commented on the pull request:
https://github.com/apache/spark/pull/3655#issuecomment-72784543
ok to test.
Github user WangTaoTheTonic commented on the pull request:
https://github.com/apache/spark/pull/4350#issuecomment-72785278
retest this please.
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4350#issuecomment-72785441
[Test build #26713 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/26713/consoleFull)
for PR 4350 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4258#issuecomment-72786916
[Test build #26707 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/26707/consoleFull)
for PR 4258 at commit
Github user maropu commented on the pull request:
https://github.com/apache/spark/pull/4176#issuecomment-72786951
I wonder if stopping the process is the best solution.
If there is only one illegal entry in the last line, we need to retry
loading the whole file,
which is
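The trade-off raised in the comment can be sketched in plain Python. This is only an illustration of the alternative (collect illegal entries and continue, instead of stopping the process); the parser and sample data are hypothetical, not the PR's code:

```python
# Tolerant line loading: keep good entries, report bad ones instead of aborting,
# so a single illegal entry in the last line does not force reloading the file.
def load_lines(lines, parse):
    good, bad = [], []
    for lineno, line in enumerate(lines, 1):
        try:
            good.append(parse(line))
        except ValueError:
            bad.append((lineno, line))  # record the offender and move on
    return good, bad

good, bad = load_lines(["1", "2", "oops", "4"], int)
# good -> [1, 2, 4]; bad -> [(3, "oops")]
```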
Github user kul commented on the pull request:
https://github.com/apache/spark/pull/4243#issuecomment-72786838
@marmbrus Thanks for review!
Rebased against master and squashed into a new commit, renaming
`schemaRDDOperations` to the more aptly named `dataFrameRDDOperations`.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4258#issuecomment-72786922
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
GitHub user scwf opened a pull request:
https://github.com/apache/spark/pull/4354
[SPARK-5583][SQL][WIP] Support unique join in hive context
Support unique join in hive context; the basic idea is to transform unique
join into outer join + filter in spark sql:
FROM UNIQUEJOIN
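The rewrite idea (outer join + filter) can be illustrated on plain Python lists. This is a sketch only: the data, the full-outer-join helper, and the simplified "preserve left keys" filter are hypothetical, not the PR's actual transformation rules:

```python
# Sketch of the "outer join + filter" rewrite idea behind SPARK-5583.
left = [{"k": 1, "v": "a"}, {"k": 2, "v": "b"}]
right = [{"k": 2, "w": "x"}, {"k": 3, "w": "y"}]

def full_outer_join(left, right, key):
    # One output row per key, merging the matching rows from each side.
    keys = {r[key] for r in left} | {r[key] for r in right}
    l_by = {r[key]: r for r in left}
    r_by = {r[key]: r for r in right}
    return [{**l_by.get(k, {}), **r_by.get(k, {}), key: k} for k in sorted(keys)]

joined = full_outer_join(left, right, "k")
# A unique-join variant that only preserves keys from the left side can then
# be expressed as a filter over the outer join:
preserved = [row for row in joined if "v" in row]
```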
GitHub user yhuai opened a pull request:
https://github.com/apache/spark/pull/4355
[SQL] Use HiveContext's sessionState in
HiveMetastoreCatalog.hiveDefaultTableFilePath
`client.getDatabaseCurrent` uses SessionState's local variable which can be
an issue.
You can merge this pull
Github user freeman-lab commented on the pull request:
https://github.com/apache/spark/pull/3803#issuecomment-72793042
Thanks for the detailed look @tdas! Think I addressed both nits.
Github user chenghao-intel commented on the pull request:
https://github.com/apache/spark/pull/4289#issuecomment-7280
In general the change looks reasonable to me; we'd better use the Hive
`ObjectConverter` directly, and some of the code could be cleaner.
Github user tdas commented on the pull request:
https://github.com/apache/spark/pull/3798#issuecomment-72779779
Why did you choose the parameters metadata.broker.list and the
bootstrap.servers as the required kafka params? I looked at the Kafka docs,
and it says that for consumers,
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4258#issuecomment-72780813
[Test build #26707 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/26707/consoleFull)
for PR 4258 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3798#issuecomment-72782334
[Test build #26701 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/26701/consoleFull)
for PR 3798 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/3798#issuecomment-72782343
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/4066#issuecomment-72782219
After some more thought and testing, I don't know if it's safe to ignore
task failures that are due to commits being denied, since doing so risks
infinite rescheduling
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4147#issuecomment-72782236
[Test build #576 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/NewSparkPullRequestBuilder/576/consoleFull)
for PR 4147 at commit
Github user zsxwing commented on a diff in the pull request:
https://github.com/apache/spark/pull/3642#discussion_r24060542
--- Diff:
graphx/src/test/scala/org/apache/spark/graphx/lib/ShortestPathsSuite.scala ---
@@ -40,7 +40,7 @@ class ShortestPathsSuite extends FunSuite with
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4216#issuecomment-72786346
[Test build #26716 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/26716/consoleFull)
for PR 4216 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4216#issuecomment-72786347
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user jkbradley commented on a diff in the pull request:
https://github.com/apache/spark/pull/4233#discussion_r24061275
--- Diff:
mllib/src/test/scala/org/apache/spark/mllib/classification/LogisticRegressionSuite.scala
---
@@ -459,7 +461,41 @@ class LogisticRegressionSuite
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4062#issuecomment-72786283
[Test build #26715 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/26715/consoleFull)
for PR 4062 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4062#issuecomment-72791305
[Test build #26715 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/26715/consoleFull)
for PR 4062 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4216#issuecomment-72791644
[Test build #26717 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/26717/consoleFull)
for PR 4216 at commit
Github user chenghao-intel commented on a diff in the pull request:
https://github.com/apache/spark/pull/4289#discussion_r24057600
--- Diff:
sql/hive/src/main/scala/org/apache/spark/sql/hive/TableReader.scala ---
@@ -315,9 +335,23 @@ private[hive] object HadoopTableReader extends
Github user davies commented on the pull request:
https://github.com/apache/spark/pull/4147#issuecomment-72775916
@kayousterhout done
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3798#issuecomment-72775833
[Test build #26701 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/26701/consoleFull)
for PR 3798 at commit
Github user tdas commented on the pull request:
https://github.com/apache/spark/pull/3798#issuecomment-72777123
Ohh I meant createStream -- createDirectStream. I would have preferred
something like createReceiverLessStream but that's a mouthful. I think direct
is something that comes
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3642#issuecomment-72777210
[Test build #26704 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/26704/consoleFull)
for PR 3642 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3798#issuecomment-72778614
[Test build #26706 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/26706/consoleFull)
for PR 3798 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4351#issuecomment-72778536
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user koeninger commented on the pull request:
https://github.com/apache/spark/pull/3798#issuecomment-72780349
High level consumers connect to ZK.
Simple consumers (which is what this is using) connect to brokers directly
instead. See
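The distinction between the two consumer styles can be summarized as a pair of parameter maps. These are illustrative only — the hostnames are made up, and the keys are the Kafka 0.8-era parameters named in this thread:

```python
# High-level consumers discover brokers through ZooKeeper.
high_level_params = {
    "zookeeper.connect": "zk1:2181",
    "group.id": "my-consumer-group",
}

# Simple consumers (what the direct stream uses) connect to brokers directly.
direct_params = {
    "metadata.broker.list": "broker1:9092,broker2:9092",
}
```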
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4345#issuecomment-72781888
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user maropu commented on the pull request:
https://github.com/apache/spark/pull/1767#issuecomment-72782993
What's the status of this patch?
If it can be merged into master, I'll refactor the code and add unit
tests.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/3798#issuecomment-72784748
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3798#issuecomment-72784745
[Test build #26706 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/26706/consoleFull)
for PR 3798 at commit
Github user tdas commented on the pull request:
https://github.com/apache/spark/pull/3655#issuecomment-72784728
@harishreedharan This begs a higher-level question of whether the write
ahead log (which is probably the component to fail) should have its own retries
independent of the
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4216#issuecomment-72786661
[Test build #26717 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/26717/consoleFull)
for PR 4216 at commit
Github user tdas commented on the pull request:
https://github.com/apache/spark/pull/3798#issuecomment-72787965
I think the simplest solution is to assign zookeeper.connect. But you are
assigning it in KafkaCluster lines 338 - 345. So why is this warning being
thrown?
Github user rxin commented on the pull request:
https://github.com/apache/spark/pull/3642#issuecomment-72788058
Ok I'm going to merge this. Thanks for working on it.
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4350#issuecomment-72789745
[Test build #26713 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/26713/consoleFull)
for PR 4350 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4350#issuecomment-72789756
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/3655#issuecomment-72789875
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user jerryshao commented on the pull request:
https://github.com/apache/spark/pull/3798#issuecomment-72789850
Hi @tdas , should we add a example to show users how to use this new Kafka
API correctly?
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4355#issuecomment-72790535
[Test build #26724 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/26724/consoleFull)
for PR 4355 at commit
Github user tdas commented on the pull request:
https://github.com/apache/spark/pull/3798#issuecomment-72791220
Holy crap! Don't bother about this at all. This can wait. I hope everything
is okay. Take care and all the best!
On Feb 3, 2015 8:45 PM, Cody Koeninger
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4062#issuecomment-72791312
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user rxin commented on the pull request:
https://github.com/apache/spark/pull/4354#issuecomment-72792534
Do you mind adding more inline comment? My worry is just complexity. If
nobody uses this, it's going to be a bunch of code there that for the sake of
supporting a thing in
Github user davies commented on the pull request:
https://github.com/apache/spark/pull/4348#issuecomment-72794234
This is not supported by select() and filter() in Python yet
Github user davies commented on a diff in the pull request:
https://github.com/apache/spark/pull/4348#discussion_r24063817
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/DataFrameImpl.scala
---
@@ -179,10 +179,20 @@ private[sql] class DataFrameImpl protected[sql](
Github user tdas commented on a diff in the pull request:
https://github.com/apache/spark/pull/3803#discussion_r24064149
--- Diff: core/src/main/scala/org/apache/spark/SparkContext.scala ---
@@ -671,7 +674,11 @@ class SparkContext(config: SparkConf) extends Logging
with
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4350#issuecomment-72778202
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4350#issuecomment-72778196
[Test build #26699 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/26699/consoleFull)
for PR 4350 at commit
Github user koeninger commented on the pull request:
https://github.com/apache/spark/pull/3798#issuecomment-72779615
Yeah, there's a weird distinction in Kafka between simple consumers and
high level consumers in that they have a lot of common configuration
parameters, but one
Github user lianhuiwang commented on the pull request:
https://github.com/apache/spark/pull/4258#issuecomment-72780420
Jenkins, retest this please.
Github user tdas commented on the pull request:
https://github.com/apache/spark/pull/4171#issuecomment-72785026
Also, could you update the Python API as well?
Github user tdas commented on the pull request:
https://github.com/apache/spark/pull/4171#issuecomment-72784927
Please add unit tests for this behavior! It should be in
StreamingContextSuite.
GitHub user vanzin opened a pull request:
https://github.com/apache/spark/pull/4352
[SPARK-5582] [history] Ignore empty log directories.
Empty log directories are not useful at the moment, but if one ends
up showing in the log root, it breaks the code that checks for log
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4216#issuecomment-72786281
[Test build #26716 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/26716/consoleFull)
for PR 4216 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4243#issuecomment-72787085
[Test build #26718 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/26718/consoleFull)
for PR 4243 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4348#issuecomment-72788843
[Test build #26723 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/26723/consoleFull)
for PR 4348 at commit
Github user koeninger commented on the pull request:
https://github.com/apache/spark/pull/3798#issuecomment-72790044
The warning is for metadata.broker.list, since it's not expected by the
existing ConsumerConfig (it's used by other config classes)
Couldn't get subclassing
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4066#issuecomment-72790079
[Test build #26712 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/26712/consoleFull)
for PR 4066 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4066#issuecomment-72790083
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4352#issuecomment-72790111
[Test build #26711 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/26711/consoleFull)
for PR 4352 at commit
Github user maropu commented on the pull request:
https://github.com/apache/spark/pull/4273#issuecomment-72790567
How about adding a new configuration, e.g.,
spark.graphx.pregel.checkpoint.interval in SparkConf?
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4348#issuecomment-72796410
[Test build #26723 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/26723/consoleFull)
for PR 4348 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4348#issuecomment-72796417
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
GitHub user davies opened a pull request:
https://github.com/apache/spark/pull/4351
[WIP] [SPARK-5577] Python udf for DataFrame
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/davies/spark python_udf
Alternatively you can
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4066#issuecomment-72780049
[Test build #26700 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/26700/consoleFull)
for PR 4066 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4147#issuecomment-72782830
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4147#issuecomment-72782813
[Test build #26702 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/26702/consoleFull)
for PR 4147 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3642#issuecomment-72783410
[Test build #26704 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/26704/consoleFull)
for PR 3642 at commit
Github user tdas commented on the pull request:
https://github.com/apache/spark/pull/3631#issuecomment-72784310
Aah cool. However 0.8.1 and 0.8.2 have pretty big changes between them, so
let's merge this for the next release. We are already doing a lot of
experimental Kafka stuff in
Github user medale commented on the pull request:
https://github.com/apache/spark/pull/4315#issuecomment-72785613
The problem was that the Spark project hive-exec 0.13.1a depends on
```
<dependency>
  <groupId>org.apache.avro</groupId>
  <artifactId>avro-mapred</artifactId>
```
Github user tdas commented on a diff in the pull request:
https://github.com/apache/spark/pull/3803#discussion_r24061014
--- Diff: core/src/main/scala/org/apache/spark/SparkContext.scala ---
@@ -671,7 +674,11 @@ class SparkContext(config: SparkConf) extends Logging
with
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4354#issuecomment-72788451
[Test build #26721 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/26721/consoleFull)
for PR 4354 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4354#issuecomment-72792001
[Test build #26722 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/26722/consoleFull)
for PR 4354 at commit
Github user freeman-lab commented on a diff in the pull request:
https://github.com/apache/spark/pull/3803#discussion_r24063184
--- Diff:
streaming/src/main/scala/org/apache/spark/streaming/api/java/JavaStreamingContext.scala
---
@@ -210,6 +211,20 @@ class
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4243#issuecomment-72792070
[Test build #26718 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/26718/consoleFull)
for PR 4243 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4243#issuecomment-72792072
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4354#issuecomment-72792006
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user scwf commented on the pull request:
https://github.com/apache/spark/pull/4354#issuecomment-72792978
It seems this is Hive-specific syntax as far as I know...
Github user freeman-lab commented on a diff in the pull request:
https://github.com/apache/spark/pull/3803#discussion_r24063473
--- Diff: core/src/main/scala/org/apache/spark/SparkContext.scala ---
@@ -671,7 +674,11 @@ class SparkContext(config: SparkConf) extends Logging
with
Github user davies commented on a diff in the pull request:
https://github.com/apache/spark/pull/4351#discussion_r24063741
--- Diff: python/pyspark/sql.py ---
@@ -2263,18 +2263,6 @@ def subtract(self, other):
return DataFrame(getattr(self._jdf,
Github user davies commented on a diff in the pull request:
https://github.com/apache/spark/pull/4348#discussion_r24063992
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/DataFrameImpl.scala
---
@@ -179,10 +179,20 @@ private[sql] class DataFrameImpl protected[sql](
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4171#issuecomment-72796748
[Test build #26726 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/26726/consoleFull)
for PR 4171 at commit
Github user tdas commented on the pull request:
https://github.com/apache/spark/pull/4171#issuecomment-72797695
LGTM, will merge when tests pass.
Github user pwendell commented on the pull request:
https://github.com/apache/spark/pull/4344#issuecomment-72798456
Jenkins, test this please.
Github user chenghao-intel commented on a diff in the pull request:
https://github.com/apache/spark/pull/4289#discussion_r24057968
--- Diff:
sql/hive/v0.12.0/src/main/scala/org/apache/spark/sql/hive/Shim12.scala ---
@@ -242,6 +242,11 @@ private[hive] object HiveShim {
}
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4066#issuecomment-72777921
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user kayousterhout commented on the pull request:
https://github.com/apache/spark/pull/4147#issuecomment-72777970
LGTM; I'll merge this as soon as tests pass. @tdas @pwendell this is fine
with me to merge into 1.2 (although I realize it won't make it until 1.2.2);
does that
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4066#issuecomment-72777913
[Test build #26695 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/26695/consoleFull)
for PR 4066 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4068#issuecomment-72778669
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4349#issuecomment-72778764
[Test build #26696 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/26696/consoleFull)
for PR 4349 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4349#issuecomment-72778768
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4068#issuecomment-72778662
[Test build #26697 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/26697/consoleFull)
for PR 4068 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4066#issuecomment-72780055
Test PASSed.
Refer to this link for build results (access rights to CI server needed):