GitHub user liancheng opened a pull request:
https://github.com/apache/spark/pull/5034
[SPARK-6315] [SQL] Also tries the case class string parser while reading
Parquet schema
When writing Parquet files, Spark 1.1.x persists the schema string with the
result of
Github user chenghao-intel commented on the pull request:
https://github.com/apache/spark/pull/4994#issuecomment-81072659
I am not sure why you think it's incorrect; can you add a unit test
showing that it breaks something that previously worked?
---
If your project is set up for it, you
Github user chenghao-intel commented on a diff in the pull request:
https://github.com/apache/spark/pull/5034#discussion_r26448690
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/parquet/newParquet.scala ---
@@ -672,7 +672,11 @@ private[sql] object ParquetRelation2 {
Github user viirya commented on a diff in the pull request:
https://github.com/apache/spark/pull/5017#discussion_r26448827
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/Cast.scala
---
@@ -394,10 +394,16 @@ case class Cast(child: Expression,
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/4361
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/5017#issuecomment-81126709
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user chenghao-intel commented on a diff in the pull request:
https://github.com/apache/spark/pull/5017#discussion_r26448959
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/Cast.scala
---
@@ -394,10 +394,17 @@ case class Cast(child:
Github user liancheng commented on a diff in the pull request:
https://github.com/apache/spark/pull/3895#discussion_r26449025
--- Diff:
sql/hive/v0.13.1/src/main/scala/org/apache/spark/sql/hive/Shim13.scala ---
@@ -297,7 +297,7 @@ private[hive] object HiveShim {
def
Github user viirya commented on the pull request:
https://github.com/apache/spark/pull/4940#issuecomment-81130668
I think it is because the code you refer to accesses the element directly
by array index. If the ordinal is not valid, a runtime exception will be
thrown. But for the
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/5032#issuecomment-80943514
Can one of the admins verify this patch?
Github user liancheng commented on the pull request:
https://github.com/apache/spark/pull/5032#issuecomment-80943719
ok to test
Github user liorchaga commented on the pull request:
https://github.com/apache/spark/pull/4998#issuecomment-80944189
@srowen, the 1.2-to-2.x bridge should work (I will verify this today).
But keep in mind it would require migrating log4j configuration to 2.x. Are we
sure we want to
GitHub user Lewuathe opened a pull request:
https://github.com/apache/spark/pull/5033
[SPARK-6336] LBFGS should document what convergenceTol means
LBFGS uses a convergence tolerance. This value should be documented as an
argument.
You can merge this pull request into a Git
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/5032
Github user CodingCat commented on the pull request:
https://github.com/apache/spark/pull/5035#issuecomment-81018838
Was this submitted by mistake?
Would you mind closing it?
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5029#issuecomment-81130219
[Test build #28629 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28629/consoleFull)
for PR 5029 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/5032#issuecomment-80983327
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
GitHub user fcriscuo opened a pull request:
https://github.com/apache/spark/pull/5035
Branch 1.3
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/apache/spark branch-1.3
Alternatively you can review and apply these changes as
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/5035#issuecomment-81013649
Can one of the admins verify this patch?
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/5034#issuecomment-81052712
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5034#issuecomment-81052654
[Test build #28627 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28627/consoleFull)
for PR 5034 at commit
Github user chenghao-intel commented on the pull request:
https://github.com/apache/spark/pull/4940#issuecomment-81089066
See
https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/rows.scala#L67
Usually we don't the the
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/5033#discussion_r26448937
--- Diff: docs/mllib-optimization.md ---
@@ -203,6 +203,9 @@ regularization, as well as L2 regularizer.
recommended.
* `maxNumIterations` is the
Github user viirya commented on the pull request:
https://github.com/apache/spark/pull/5017#issuecomment-81129086
Unrelated failure. retest it please.
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5032#issuecomment-80983323
[Test build #28624 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28624/consoleFull)
for PR 5032 at commit
Github user chenghao-intel commented on a diff in the pull request:
https://github.com/apache/spark/pull/5017#discussion_r26448335
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/Cast.scala
---
@@ -394,10 +394,16 @@ case class Cast(child:
Github user chenghao-intel commented on a diff in the pull request:
https://github.com/apache/spark/pull/5017#discussion_r26448848
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/Cast.scala
---
@@ -394,10 +394,17 @@ case class Cast(child:
Github user liancheng commented on a diff in the pull request:
https://github.com/apache/spark/pull/5034#discussion_r26449048
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/parquet/newParquet.scala ---
@@ -672,7 +672,11 @@ private[sql] object ParquetRelation2 {
Github user liancheng commented on a diff in the pull request:
https://github.com/apache/spark/pull/5034#discussion_r26449101
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/parquet/newParquet.scala ---
@@ -672,7 +672,11 @@ private[sql] object ParquetRelation2 {
Github user CodingCat commented on the pull request:
https://github.com/apache/spark/pull/2851#issuecomment-81021347
thanks @squito
I updated that again
Github user viirya commented on the pull request:
https://github.com/apache/spark/pull/4994#issuecomment-81108144
Because `updateProjection` is the projection between `child.output` and the
aggregations `updateExpressions`. Its `inputSchema` should be just
`child.output`. It is
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/5035#issuecomment-81118386
Mind closing this PR?
Github user chenghao-intel commented on a diff in the pull request:
https://github.com/apache/spark/pull/5017#discussion_r26448871
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/Cast.scala
---
@@ -394,10 +394,17 @@ case class Cast(child:
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5030#issuecomment-81141091
[Test build #28630 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28630/consoleFull)
for PR 5030 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2851#issuecomment-80998056
[Test build #28626 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28626/consoleFull)
for PR 2851 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5033#issuecomment-81008343
**[Test build #28625 timed
out](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28625/consoleFull)**
for PR 5033 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/5033#issuecomment-81008356
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5034#issuecomment-81008353
[Test build #28627 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28627/consoleFull)
for PR 5034 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/2851#issuecomment-81020599
[Test build #28626 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28626/consoleFull)
for PR 2851 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/2851#issuecomment-81020626
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user chenghao-intel commented on a diff in the pull request:
https://github.com/apache/spark/pull/3895#discussion_r26448485
--- Diff:
sql/hive/v0.13.1/src/main/scala/org/apache/spark/sql/hive/Shim13.scala ---
@@ -297,7 +297,7 @@ private[hive] object HiveShim {
def
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/4361#issuecomment-81112451
You're right, SPARK-3619 was merged into 1.3 too. This should follow as
well. Will do.
Github user viirya commented on the pull request:
https://github.com/apache/spark/pull/5017#issuecomment-81112499
With the new commit.
Conducting the `struct casting` test in `ExpressionEvaluationSuite` 100 times:
before pr: 59.149s
after pr: 47.243s
Conducting
Github user viirya commented on a diff in the pull request:
https://github.com/apache/spark/pull/5017#discussion_r26448902
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/Cast.scala
---
@@ -394,10 +394,17 @@ case class Cast(child: Expression,
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/4998#issuecomment-80938223
@tsliwowicz Have a read above, where I'm describing why this is more than
just a build profile. My principal concern is as I say above: if log4j 1.2 and
2 are mutually
Github user OopsOutOfMemory commented on the pull request:
https://github.com/apache/spark/pull/4530#issuecomment-80892574
OK, closed.
GitHub user dougb opened a pull request:
https://github.com/apache/spark/pull/5031
[SPARK-6207] [YARN] [SQL] Adds delegation tokens for metastore to conf.
Adds the hive2-metastore delegation token to conf when running in secure mode.
Without this change, running on YARN in cluster
Github user OopsOutOfMemory closed the pull request at:
https://github.com/apache/spark/pull/4530
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/5031#issuecomment-80892420
Can one of the admins verify this patch?
Github user viirya commented on the pull request:
https://github.com/apache/spark/pull/5017#issuecomment-80935393
Simple benchmark conducting the `struct casting` test in
`ExpressionEvaluationSuite` 100 times:
before pr: 59.149s
after pr: 53.869s
Conducting 100 the
GitHub user OopsOutOfMemory opened a pull request:
https://github.com/apache/spark/pull/5032
[SPARK-6285][SQL]Remove ParquetTestData in SparkBuild.scala and in README.md
This is a follow-up cleanup PR for #5010.
This will resolve issues when launching `hive/console` like below:
Github user liorchaga commented on the pull request:
https://github.com/apache/spark/pull/4998#issuecomment-80979001
log4j12-api bridge is working properly.
Our executor code is using log4j 1.2.7, and this jar is included in the
jars provided to sparkContext. The spark
Github user tsliwowicz commented on the pull request:
https://github.com/apache/spark/pull/4998#issuecomment-80878911
@srowen We don't know of an option to run side by side with two log4j
versions. It conflicts on both slf4j and log4j classes. In any case, I believe
it won't create a
Github user Lewuathe commented on the pull request:
https://github.com/apache/spark/pull/3636#issuecomment-80917580
@jkbradley I think this patch was updated to use relative convergence
tolerance. Are there any other points we should fix about convergence tolerance?
Github user liancheng commented on the pull request:
https://github.com/apache/spark/pull/3895#issuecomment-80945233
@marmbrus @baishuo My last merging operation happened to fail because of a
network issue, and then I saw Michael's comment. Created a baishuo/spark#2 per
Michael's
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5032#issuecomment-80946011
[Test build #28624 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28624/consoleFull)
for PR 5032 at commit
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/4998#issuecomment-80947530
Yeah that's what I am hoping might work, to remove log4j 1.2 and replace
with log4j 2 + 1.2-to-2 bridge, and also use the slf4j-to-log4j2 bridge, and
update Spark itself
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5033#issuecomment-80970195
[Test build #28625 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28625/consoleFull)
for PR 5033 at commit
Github user OopsOutOfMemory commented on the pull request:
https://github.com/apache/spark/pull/5032#issuecomment-80944151
ok, thanks @liancheng
Github user liancheng commented on the pull request:
https://github.com/apache/spark/pull/5032#issuecomment-80943973
This LGTM. Thanks for fixing this! I can merge it after it passes Jenkins
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/5030#issuecomment-80948174
Yeah, sounds like that changed the API at a binary level. It'll probably
need a new method in Scala too.
Github user MechCoder commented on the pull request:
https://github.com/apache/spark/pull/4906#issuecomment-81177042
But is accessing values from the broadcast variables expensive enough
that placing the lines under `mapPartitions` gives enough benefit?
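The trade-off being debated can be illustrated with a minimal sketch: a value dereferenced once at the top of `mapPartitions` is fetched once per partition rather than once per record. This is not Spark API code; `fetchValue` stands in for a hypothetical `Broadcast.value` call, a plain `Iterator` stands in for a partition, and the counter makes the amortization visible.

```scala
// Illustrative sketch only, not Spark's actual API: `fetchValue` simulates
// dereferencing a broadcast variable, and the counter shows it is read once
// per partition rather than once per element when hoisted out of the loop.
var fetches = 0
def fetchValue(): Map[Int, String] = { fetches += 1; Map(1 -> "a", 2 -> "b") }

def mapPartition(partition: Iterator[Int]): Iterator[String] = {
  val table = fetchValue()   // hoisted: one fetch per partition
  partition.map(table)       // reused for every record
}

val result = mapPartition(Iterator(1, 2, 1)).toList
```

Whether this is a real win depends, as the comment asks, on how expensive the dereference is relative to the per-record work.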
Github user jkbradley commented on the pull request:
https://github.com/apache/spark/pull/3636#issuecomment-81235479
Oh, no, by relative convergence tolerance, we mean testing for:
```
abs(previous_objective - current_objective) < convergenceTol *
abs(initial_objective)
```
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5030#issuecomment-81191825
[Test build #28632 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28632/consoleFull)
for PR 5030 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5030#issuecomment-81216274
[Test build #28632 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28632/consoleFull)
for PR 5030 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/5030#issuecomment-81216293
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4733#issuecomment-81186864
[Test build #28631 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28631/consoleFull)
for PR 4733 at commit
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/5030#issuecomment-81190175
jenkins, retest this please
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4733#issuecomment-81212201
[Test build #28631 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28631/consoleFull)
for PR 4733 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4733#issuecomment-81212223
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5029#issuecomment-81177287
[Test build #28629 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28629/consoleFull)
for PR 5029 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/5029#issuecomment-81177306
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/5030#issuecomment-81189838
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5030#issuecomment-81189793
**[Test build #28630 timed
out](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28630/consoleFull)**
for PR 5030 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4944#issuecomment-81276944
[Test build #28633 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28633/consoleFull)
for PR 4944 at commit
Github user chenghao-intel commented on the pull request:
https://github.com/apache/spark/pull/4940#issuecomment-81307940
If that is the case, does it mean there are bugs in codegen?
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4944#issuecomment-81248275
[Test build #28633 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28633/consoleFull)
for PR 4944 at commit
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/4944#issuecomment-81267408
I've updated this with a new commit that adds another layer of `try-catch`
blocks to handle errors when reading the status code. I also made a slight
change to how we
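The layered error handling described can be sketched as follows; the method name, exception types, and fallback value are illustrative assumptions, not the PR's actual code.

```scala
// Hedged sketch of adding another layer of try-catch around status-code
// parsing; everything here is illustrative, not the PR's actual code.
def readStatusCode(raw: String): Int =
  try {
    try {
      raw.trim.toInt                        // normal path
    } catch {
      case _: NumberFormatException => -1   // inner layer: malformed code
    }
  } catch {
    case _: NullPointerException => -1      // outer layer: missing response
  }
```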
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4944#issuecomment-81276962
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user jerryshao commented on the pull request:
https://github.com/apache/spark/pull/5024#issuecomment-81339589
LGTM. A more general thought, maybe not relevant to this PR: if some
configurations are changed after resubmitting the application, how should
this be handled, choosing the
Github user jerryshao commented on the pull request:
https://github.com/apache/spark/pull/5019#issuecomment-81338075
Hi @vanzin @davies, it seems this change hides `pyspark-shell` inside the
Python code, so basically if we want to add some additional arguments with
Github user sryza commented on a diff in the pull request:
https://github.com/apache/spark/pull/5018#discussion_r26456247
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/cluster/CoarseGrainedSchedulerBackend.scala
---
@@ -371,6 +376,12 @@ class
Github user sryza commented on a diff in the pull request:
https://github.com/apache/spark/pull/5018#discussion_r26456475
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/cluster/CoarseGrainedSchedulerBackend.scala
---
@@ -340,7 +341,11 @@ class
Github user debasish83 commented on the pull request:
https://github.com/apache/spark/pull/5005#issuecomment-81358204
I first compared Breeze NNLS and MLlib NNLS, as it is simpler.
The algorithms are the same as implemented by @coderxiang. I migrated it to
Breeze as it is a local
Github user chenghao-intel commented on a diff in the pull request:
https://github.com/apache/spark/pull/4885#discussion_r26457577
--- Diff:
sql/hive-thriftserver/src/test/scala/org/apache/spark/sql/hive/thriftserver/HiveThriftServer2Suites.scala
---
@@ -245,15 +377,22 @@
Github user chenghao-intel commented on a diff in the pull request:
https://github.com/apache/spark/pull/4885#discussion_r26457610
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/SQLContext.scala ---
@@ -103,9 +105,11 @@ class SQLContext(@transient val sparkContext:
Github user Leolh commented on the pull request:
https://github.com/apache/spark/pull/4898#issuecomment-81372331
I'm sorry that I couldn't pull the latest code from the master branch for
some reason, so I deleted my old repository... If possible, please test the
new pull request
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4885#issuecomment-81376089
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4885#issuecomment-81376063
[Test build #28636 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28636/consoleFull)
for PR 4885 at commit
Github user harishreedharan commented on the pull request:
https://github.com/apache/spark/pull/4964#issuecomment-81383011
@tdas - What do you think about adding a short-circuit for the scenario
when `concurrentJobs == 1`? That would basically combine the current state of
this PR
Github user CodingCat commented on a diff in the pull request:
https://github.com/apache/spark/pull/4491#discussion_r26455671
--- Diff:
core/src/main/scala/org/apache/spark/storage/BlockObjectWriter.scala ---
@@ -123,12 +133,30 @@ private[spark] class DiskBlockObjectWriter(
Github user CodingCat commented on a diff in the pull request:
https://github.com/apache/spark/pull/4491#discussion_r26455889
--- Diff:
core/src/main/scala/org/apache/spark/storage/BlockObjectWriter.scala ---
@@ -123,12 +133,30 @@ private[spark] class DiskBlockObjectWriter(
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5005#issuecomment-81356887
[Test build #28635 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28635/consoleFull)
for PR 5005 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5005#issuecomment-81356884
[Test build #28635 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28635/consoleFull)
for PR 5005 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/5005#issuecomment-81356890
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user chenghao-intel commented on a diff in the pull request:
https://github.com/apache/spark/pull/4885#discussion_r26457655
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/SQLContext.scala ---
@@ -103,9 +105,11 @@ class SQLContext(@transient val sparkContext:
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5005#issuecomment-81377488
[Test build #28637 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28637/consoleFull)
for PR 5005 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5005#issuecomment-81377572
[Test build #28637 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28637/consoleFull)
for PR 5005 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/5005#issuecomment-81377588
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4885#issuecomment-81403533
[Test build #28638 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28638/consoleFull)
for PR 4885 at commit