Github user mengxr commented on the pull request:
https://github.com/apache/spark/pull/3899#issuecomment-68843064
@viirya They are the same analytically but not numerically, for example,
~~~scala
scala> math.log1p(math.exp(1000))
res2: Double = Infinity
~~~
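As an aside, the overflow can be avoided by rearranging the expression before exponentiating; the sketch below (the helper name `log1pExp` is illustrative, though MLlib ships a similar utility) keeps every intermediate value finite:

```scala
// log(1 + e^x) computed without overflow: for large positive x,
// math.exp(x) is Infinity, but x + log(1 + e^-x) stays finite because
// e^-x underflows harmlessly toward 0.
def log1pExp(x: Double): Double =
  if (x > 0) x + math.log1p(math.exp(-x))
  else math.log1p(math.exp(x))
```

With this form, `log1pExp(1000.0)` evaluates to `1000.0` rather than `Infinity`.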
Github user tdas commented on the pull request:
https://github.com/apache/spark/pull/3906#issuecomment-68843161
If you are using window operations, then previous batches' data may need
to be accessed multiple times. If we don't put the data from the WAL back in
memory, the system will have
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3869#issuecomment-68843458
[Test build #25096 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/25096/consoleFull)
for PR 3869 at commit
Github user viirya commented on a diff in the pull request:
https://github.com/apache/spark/pull/3869#discussion_r22514948
--- Diff:
mllib/src/test/scala/org/apache/spark/mllib/linalg/VectorsSuite.scala ---
@@ -175,6 +177,33 @@ class VectorsSuite extends FunSuite {
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/3907#discussion_r22515156
--- Diff: core/src/main/scala/org/apache/spark/rdd/HadoopRDD.scala ---
@@ -26,15 +26,7 @@ import scala.reflect.ClassTag
import
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/3907#discussion_r22515175
--- Diff: core/src/main/scala/org/apache/spark/rdd/HadoopRDD.scala ---
@@ -198,12 +190,19 @@ class HadoopRDD[K, V](
if
GitHub user OopsOutOfMemory opened a pull request:
https://github.com/apache/spark/pull/3909
[SPARK-5009][SQL][Bug Fix] allCaseVersions leads to stack overflow.
Currently, we use the `allCaseVersions` function to match all possible case
versions of the `Keyword` that the user passes in to
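For context on why this can blow up, a minimal sketch of such a case-version generator (illustrative only, not the actual Spark implementation):

```scala
// Every keyword of length n has 2^n upper/lower-case variants, e.g.
// "as" -> "as", "aS", "As", "AS". Recursing over a long keyword is
// where the cost (and the reported stack overflow) comes from.
def allCaseVersions(s: String): Stream[String] =
  if (s.isEmpty) Stream("")
  else allCaseVersions(s.tail).flatMap { rest =>
    Stream(s"${s.head.toLower}$rest", s"${s.head.toUpper}$rest")
  }
```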
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/3909#issuecomment-68845914
Can one of the admins verify this patch?
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your
Github user viirya commented on the pull request:
https://github.com/apache/spark/pull/3899#issuecomment-68846927
OK, I understand now. I was wrong about LogLoss. Will revert it.
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3820#issuecomment-68847248
[Test build #25094 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/25094/consoleFull)
for PR 3820 at commit
Github user scwf commented on a diff in the pull request:
https://github.com/apache/spark/pull/3431#discussion_r22516253
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/sources/ddl.scala ---
@@ -50,6 +50,7 @@ private[sql] class DDLParser extends StandardTokenParsers
with
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/3895#issuecomment-68847210
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3895#issuecomment-68847206
[Test build #25095 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/25095/consoleFull)
for PR 3895 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/3820#issuecomment-68847251
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user viirya commented on the pull request:
https://github.com/apache/spark/pull/3899#issuecomment-68847701
@mengxr @srowen The new commit should make the computation more accurate.
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3899#issuecomment-68847888
[Test build #25097 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/25097/consoleFull)
for PR 3899 at commit
Github user viirya commented on the pull request:
https://github.com/apache/spark/pull/3899#issuecomment-68848217
@mengxr I noticed that you filed an issue in
[SPARK-5101](https://issues.apache.org/jira/browse/SPARK-5101). Do I need to
extract the code in this PR to the place you
Github user jerryshao commented on the pull request:
https://github.com/apache/spark/pull/3906#issuecomment-68848565
I'm not sure if I understand correctly; in WindowedDStream we already
call persist to cache the parent DStream, and internally it will call the RDD's
persist. This will
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3869#issuecomment-68850284
[Test build #25096 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/25096/consoleFull)
for PR 3869 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/3869#issuecomment-68850290
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3820#issuecomment-68839674
[Test build #25094 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/25094/consoleFull)
for PR 3820 at commit
GitHub user zhichao-li opened a pull request:
https://github.com/apache/spark/pull/3908
remove out-of-date statements since assembly is ON by default
From the git history, this behavior was changed a long time ago:
255597:commit 666d93c294458cb056cb590eb11bb6cf979861e5
Github user adrian-wang commented on the pull request:
https://github.com/apache/spark/pull/3820#issuecomment-68839779
Oh sorry, I just checked Impala's configuration and I think it is not the
same as what we have here. I'll change my code to conform to that.
Github user zhichao-li closed the pull request at:
https://github.com/apache/spark/pull/3908
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3898#issuecomment-68840810
[Test build #25091 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/25091/consoleFull)
for PR 3898 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/3898#issuecomment-68840818
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user xiajunluan commented on the pull request:
https://github.com/apache/spark/pull/3906#issuecomment-68840845
Good catch!
Github user viirya commented on a diff in the pull request:
https://github.com/apache/spark/pull/3869#discussion_r22513912
--- Diff:
mllib/src/test/scala/org/apache/spark/mllib/linalg/VectorsSuite.scala ---
@@ -175,6 +177,33 @@ class VectorsSuite extends FunSuite {
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3895#issuecomment-68840820
[Test build #25095 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/25095/consoleFull)
for PR 3895 at commit
Github user baishuo commented on the pull request:
https://github.com/apache/spark/pull/3895#issuecomment-68840957
I modified some code and tested locally.
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/3906#issuecomment-68842492
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3906#issuecomment-68842485
[Test build #25092 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/25092/consoleFull)
for PR 3906 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3899#issuecomment-68842795
[Test build #25093 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/25093/consoleFull)
for PR 3899 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/3899#issuecomment-68842807
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3906#issuecomment-68837031
[Test build #25092 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/25092/consoleFull)
for PR 3906 at commit
Github user tsudukim commented on the pull request:
https://github.com/apache/spark/pull/3467#issuecomment-68837548
resolved conflicts!
Github user jkbradley commented on a diff in the pull request:
https://github.com/apache/spark/pull/3871#discussion_r22512662
--- Diff:
mllib/src/main/scala/org/apache/spark/mllib/stat/impl/MultivariateGaussian.scala
---
@@ -17,23 +17,84 @@
package
GitHub user jeanlyn opened a pull request:
https://github.com/apache/spark/pull/3907
Origin/spark 5068
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/jeanlyn/spark origin/SPARK-5068
Alternatively you can review and apply these
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/spark/pull/3893#discussion_r22512543
--- Diff: core/src/main/scala/org/apache/spark/util/Utils.scala ---
@@ -701,7 +701,7 @@ private[spark] object Utils extends Logging {
}
}
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/3907#issuecomment-68838456
Can one of the admins verify this patch?
GitHub user jerryshao opened a pull request:
https://github.com/apache/spark/pull/3906
[SPARK-4999][Streaming] Change storeInBlockManager to false by default
Currently the WAL-backed block is read out from HDFS and put into BlockManager
with storage level MEMORY_ONLY_SER by default,
Github user viirya commented on the pull request:
https://github.com/apache/spark/pull/3899#issuecomment-68837277
Hi @mengxr, thanks for the comment. I may be wrong, but I think we should
branch based on `sign(label)` instead of `sign(label * margin)`?
Because according to
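To make the sign question concrete, a hedged sketch of the stable evaluation (assuming z = label * margin; the name is illustrative, not the actual MLlib code):

```scala
// log(1 + e^-z), branched on the sign of z so that math.exp never sees
// a large positive argument: for z <= 0, rewrite as -z + log(1 + e^z).
def logisticLoss(z: Double): Double =
  if (z > 0) math.log1p(math.exp(-z))
  else -z + math.log1p(math.exp(z))
```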
Github user pwendell commented on a diff in the pull request:
https://github.com/apache/spark/pull/3893#discussion_r22512468
--- Diff: core/src/main/scala/org/apache/spark/util/Utils.scala ---
@@ -701,7 +701,7 @@ private[spark] object Utils extends Logging {
}
}
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3899#issuecomment-68837409
[Test build #25093 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/25093/consoleFull)
for PR 3899 at commit
Github user tdas commented on the pull request:
https://github.com/apache/spark/pull/3801#issuecomment-68838737
I am merging this. Thanks @JoshRosen for this humongous effort!
Github user jeanlyn commented on the pull request:
https://github.com/apache/spark/pull/3907#issuecomment-68839176
Hi @marmbrus. Any suggestions?
GitHub user jeanlyn reopened a pull request:
https://github.com/apache/spark/pull/3907
[SPARK-5068][SQL] fix bug querying data when path doesn't exist
The issue is described in [SPARK-5068]
(https://issues.apache.org/jira/browse/SPARK-5068) and this PR fixes the same
problem as
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/3801
Github user jeanlyn closed the pull request at:
https://github.com/apache/spark/pull/3907
Github user saucam commented on the pull request:
https://github.com/apache/spark/pull/3888#issuecomment-68853317
Hi Michael,
Thanks for the feedback.
1. Yes, it does not handle correlated queries. It definitely makes more
sense to convert correlated queries to
Github user jeanlyn commented on a diff in the pull request:
https://github.com/apache/spark/pull/3907#discussion_r22518647
--- Diff: core/src/main/scala/org/apache/spark/rdd/HadoopRDD.scala ---
@@ -198,12 +190,19 @@ class HadoopRDD[K, V](
if
Github user jeanlyn commented on a diff in the pull request:
https://github.com/apache/spark/pull/3907#discussion_r22518690
--- Diff: core/src/main/scala/org/apache/spark/rdd/HadoopRDD.scala ---
@@ -26,15 +26,7 @@ import scala.reflect.ClassTag
import
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/3899#issuecomment-68854264
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3899#issuecomment-68854259
[Test build #25097 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/25097/consoleFull)
for PR 3899 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3866#issuecomment-68863199
[Test build #25098 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/25098/consoleFull)
for PR 3866 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3866#issuecomment-68863194
[Test build #25098 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/25098/consoleFull)
for PR 3866 at commit
Github user tgaloppo commented on a diff in the pull request:
https://github.com/apache/spark/pull/3871#discussion_r22521704
--- Diff:
mllib/src/main/scala/org/apache/spark/mllib/stat/impl/MultivariateGaussian.scala
---
@@ -17,23 +17,84 @@
package
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/3866#issuecomment-68863202
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
GitHub user liancheng opened a pull request:
https://github.com/apache/spark/pull/3910
[SPARK-4296][SQL] Trims aliases when resolving and checking aggregate
expressions
This PR is a follow-up of PR #3248. We should not only trim `Alias` around
`GetField` but also all unnamed
Github user OopsOutOfMemory commented on the pull request:
https://github.com/apache/spark/pull/3909#issuecomment-68862582
@chenghao-intel
Any suggestions?
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3866#issuecomment-68863686
[Test build #25099 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/25099/consoleFull)
for PR 3866 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3910#issuecomment-68864117
[Test build #25100 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/25100/consoleFull)
for PR 3910 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3899#issuecomment-68864131
[Test build #25101 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/25101/consoleFull)
for PR 3899 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/3910#issuecomment-68864133
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3910#issuecomment-68864129
[Test build #25100 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/25100/consoleFull)
for PR 3910 at commit
Github user viirya commented on the pull request:
https://github.com/apache/spark/pull/3899#issuecomment-68864189
@mengxr I think I now understand what you meant about branching based on
`sign(label * margin)`. I made some modifications.
Github user rnowling commented on the pull request:
https://github.com/apache/spark/pull/2906#issuecomment-68870971
Thanks @mengxr @freeman-lab! :)
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3866#issuecomment-68871161
[Test build #25099 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/25099/consoleFull)
for PR 3866 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/3866#issuecomment-68871171
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3899#issuecomment-68871848
[Test build #25101 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/25101/consoleFull)
for PR 3899 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/3899#issuecomment-68871860
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user watermen closed the pull request at:
https://github.com/apache/spark/pull/3237
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3890#issuecomment-68867190
[Test build #25102 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/25102/consoleFull)
for PR 3890 at commit
GitHub user Lewuathe opened a pull request:
https://github.com/apache/spark/pull/3911
[SPARK-5019] Update GMM API to use MultivariateGaussian
GMM should have a public accessor for the `MultivariateGaussian` model list.
With this API, Gaussian parameters can be obtained through
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3911#issuecomment-68868859
[Test build #25103 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/25103/consoleFull)
for PR 3911 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/3911#issuecomment-68868966
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3911#issuecomment-68868964
[Test build #25103 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/25103/consoleFull)
for PR 3911 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3911#issuecomment-68869950
[Test build #25104 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/25104/consoleFull)
for PR 3911 at commit
Github user OopsOutOfMemory commented on a diff in the pull request:
https://github.com/apache/spark/pull/3431#discussion_r22525682
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/sources/ddl.scala ---
@@ -67,15 +68,30 @@ private[sql] class DDLParser extends
Github user OopsOutOfMemory commented on a diff in the pull request:
https://github.com/apache/spark/pull/3431#discussion_r22525893
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/sources/ddl.scala ---
@@ -83,10 +99,104 @@ private[sql] class DDLParser extends
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/3911#issuecomment-68879451
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3911#issuecomment-68879439
[Test build #25104 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/25104/consoleFull)
for PR 3911 at commit
Github user OopsOutOfMemory commented on a diff in the pull request:
https://github.com/apache/spark/pull/3431#discussion_r22526157
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/sources/ddl.scala ---
@@ -83,10 +99,104 @@ private[sql] class DDLParser extends
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/3890#issuecomment-68875919
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3890#issuecomment-68875907
[Test build #25102 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/25102/consoleFull)
for PR 3890 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/3912#issuecomment-68916422
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user tgaloppo commented on a diff in the pull request:
https://github.com/apache/spark/pull/3871#discussion_r22545409
--- Diff:
mllib/src/main/scala/org/apache/spark/mllib/stat/impl/MultivariateGaussian.scala
---
@@ -17,23 +17,84 @@
package
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/3651#issuecomment-68924192
Meh, let's just merge this in for now. The appending behavior won't be
confusing in Jenkins (since we clean) and I can only imagine this being
annoying for developers
Github user yhuai commented on a diff in the pull request:
https://github.com/apache/spark/pull/3431#discussion_r22548615
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/sources/ddl.scala ---
@@ -50,6 +50,7 @@ private[sql] class DDLParser extends StandardTokenParsers
with
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/3158#issuecomment-68928259
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3158#issuecomment-68928244
[Test build #25108 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/25108/consoleFull)
for PR 3158 at commit
Github user jkbradley commented on a diff in the pull request:
https://github.com/apache/spark/pull/3871#discussion_r22542741
--- Diff:
mllib/src/main/scala/org/apache/spark/mllib/stat/impl/MultivariateGaussian.scala
---
@@ -17,23 +17,84 @@
package
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3158#issuecomment-68917772
[Test build #25108 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/25108/consoleFull)
for PR 3158 at commit
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/3651#issuecomment-68924468
Merged to `master` (1.3.0). I'll handle the backport cherry-picks in a
little bit.
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/3651
Github user aarondav commented on a diff in the pull request:
https://github.com/apache/spark/pull/3795#discussion_r22548022
--- Diff:
core/src/main/scala/org/apache/spark/rdd/OrderedRDDFunctions.scala ---
@@ -72,6 +72,8 @@ class OrderedRDDFunctions[K : Ordering : ClassTag,
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3638#issuecomment-68924990
[Test build #25107 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/25107/consoleFull)
for PR 3638 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/3638#issuecomment-68925001
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3912#issuecomment-68916402
[Test build #25106 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/25106/consoleFull)
for PR 3912 at commit
GitHub user sryza opened a pull request:
https://github.com/apache/spark/pull/3913
SPARK-5112. Expose SizeEstimator as a developer api
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/sryza/spark sandy-spark-5112
Alternatively