GitHub user tarfaa opened a pull request:
https://github.com/apache/spark/pull/5051
[docs][minor] Fixed sample code in SQLContext scaladoc
Fixes an error in the code sample of the `implicits` object in `SQLContext`.
You can merge this pull request into a Git repository by running:
$
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5052#issuecomment-81935826
[Test build #28670 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28670/consoleFull)
for PR 5052 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3074#issuecomment-81968390
[Test build #28679 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28679/consoleFull)
for PR 3074 at commit
Github user sryza commented on a diff in the pull request:
https://github.com/apache/spark/pull/5029#discussion_r26535444
--- Diff:
external/kafka/src/test/scala/org/apache/spark/streaming/kafka/ReliableKafkaStreamSuite.scala
---
@@ -68,10 +67,7 @@ class ReliableKafkaStreamSuite
Github user tnachen commented on the pull request:
https://github.com/apache/spark/pull/4960#issuecomment-81975220
@realoptimal you did indeed find a problem with roles. I only verified that the
framework registered with the right role and that tasks launched,
but didn't try it
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/5029#discussion_r26535552
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/InsertIntoHiveTableSuite.scala
---
@@ -136,6 +135,7 @@ class InsertIntoHiveTableSuite extends
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/5029#discussion_r26535518
--- Diff:
external/kafka/src/test/scala/org/apache/spark/streaming/kafka/ReliableKafkaStreamSuite.scala
---
@@ -68,10 +67,7 @@ class ReliableKafkaStreamSuite
Github user mengxr commented on the pull request:
https://github.com/apache/spark/pull/216#issuecomment-81983258
@LIDIAgroup Sorry that I don't have enough bandwidth to review this PR.
Since there are unresolved performance issues, do you mind closing this PR for
now? I recommend
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/5029#discussion_r26536957
--- Diff:
sql/hive/src/test/scala/org/apache/spark/sql/hive/InsertIntoHiveTableSuite.scala
---
@@ -136,6 +135,7 @@ class InsertIntoHiveTableSuite extends
Github user jerryshao commented on the pull request:
https://github.com/apache/spark/pull/5019#issuecomment-81983845
OK, got it, thanks a lot.
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have
Github user chenghao-intel closed the pull request at:
https://github.com/apache/spark/pull/4382
---
Github user squito commented on the pull request:
https://github.com/apache/spark/pull/5052#issuecomment-82005866
@JoshRosen I couldn't think of anything, but to be honest I didn't really
rack my brain too hard since it's just a developer util. I'm open to any
suggestions ...
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/5055#issuecomment-82028025
Can one of the admins verify this patch?
---
GitHub user tanyinyan opened a pull request:
https://github.com/apache/spark/pull/5055
[MLLib]SPARK-6348:Enable useFeatureScaling in SVMWithSGD
Set useFeatureScaling to true in SVMWithSGD; the problem is described in JIRA
(https://issues.apache.org/jira/browse/SPARK-6348)
You can merge
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4964#issuecomment-82034249
[Test build #28687 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28687/consoleFull)
for PR 4964 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4964#issuecomment-82034264
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4964#issuecomment-82033606
[Test build #28687 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28687/consoleFull)
for PR 4964 at commit
Github user debasish83 commented on a diff in the pull request:
https://github.com/apache/spark/pull/3098#discussion_r26545110
--- Diff:
examples/src/main/scala/org/apache/spark/examples/mllib/MovieLensALS.scala ---
@@ -18,14 +18,14 @@
package org.apache.spark.examples.mllib
Github user baishuo commented on the pull request:
https://github.com/apache/spark/pull/3895#issuecomment-82051247
I have modified the title of this PR @marmbrus @liancheng
---
GitHub user vanzin opened a pull request:
https://github.com/apache/spark/pull/5057
[SPARK-6372] [core] Propagate --conf to child processes.
And add unit test.
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/vanzin/spark
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4961#issuecomment-82051582
[Test build #28689 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28689/consoleFull)
for PR 4961 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3895#issuecomment-82051855
[Test build #28690 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28690/consoleFull)
for PR 3895 at commit
Github user watermen commented on the pull request:
https://github.com/apache/spark/pull/5045#issuecomment-82056296
@yhuai This patch supports two syntaxes. One is also supported by
HiveContext.
```
GROUP BY expression list WITH ROLLUP
GROUP BY expression list WITH CUBE
```
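For readers unfamiliar with these clauses: `WITH ROLLUP` and `WITH CUBE` are shorthand for grouping sets. The following Python sketch shows only the expansion semantics (illustrative, not Spark's or Hive's implementation):

```python
from itertools import combinations

def rollup_grouping_sets(cols):
    # ROLLUP over (a, b) groups by every prefix: (a, b), (a,), ()
    return [tuple(cols[:i]) for i in range(len(cols), -1, -1)]

def cube_grouping_sets(cols):
    # CUBE over (a, b) groups by every subset: (a, b), (a,), (b,), ()
    sets = []
    for r in range(len(cols), -1, -1):
        sets.extend(combinations(cols, r))
    return sets

print(rollup_grouping_sets(["a", "b"]))  # [('a', 'b'), ('a',), ()]
```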
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5056#issuecomment-82076232
[Test build #28686 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28686/consoleFull)
for PR 5056 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/5056#issuecomment-82076300
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user swkimme commented on the pull request:
https://github.com/apache/spark/pull/5046#issuecomment-82075586
@rxin
I tried to add a simple test like
```
test("collecting objects of class defined in repl - shuffling") {
  val output =
```
Github user ksakellis commented on a diff in the pull request:
https://github.com/apache/spark/pull/5018#discussion_r26529056
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/cluster/CoarseGrainedSchedulerBackend.scala
---
@@ -340,7 +341,11 @@ class
Github user sryza commented on the pull request:
https://github.com/apache/spark/pull/5051#issuecomment-81953680
Mind including [SQL] in the title so that this gets properly sorted?
Thanks!
---
Github user tdas commented on a diff in the pull request:
https://github.com/apache/spark/pull/4964#discussion_r26530402
--- Diff:
streaming/src/main/scala/org/apache/spark/streaming/scheduler/JobGenerator.scala
---
@@ -254,39 +271,97 @@ class JobGenerator(jobScheduler:
Github user realoptimal commented on the pull request:
https://github.com/apache/spark/pull/4960#issuecomment-81959279
Also, if slave resources are all of the default type, i.e. `*`, the framework
should still be able to use those resources even with spark.mesos.role != *
---
Github user vanzin commented on the pull request:
https://github.com/apache/spark/pull/5018#issuecomment-81959272
That method looks correct given the scaladoc describing it.
Note that user code has two ways of affecting that method:
`SparkContext.requestExecutors`, which
Github user vanzin commented on the pull request:
https://github.com/apache/spark/pull/5018#issuecomment-81964335
(Just for completeness: `SparkContext` actually doesn't directly affect the
bookkeeping in `ExecutorAllocationManager`, which can be seen as a separate
issue. Meaning my
Github user sryza commented on the pull request:
https://github.com/apache/spark/pull/5029#issuecomment-81966491
What's the advantage of a parent directory created with `createTempDir`
when we're already using `File.createTempFile`?
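The trade-off being discussed can be illustrated in Python terms (a sketch of the cleanup argument, not the Scala code under review): files created under a dedicated temp directory can all be removed with one recursive delete, whereas standalone temp files must each be tracked and removed.

```python
import os
import shutil
import tempfile

# Create files under one temp *directory*, so a single recursive
# delete cleans everything up at the end of the test.
work_dir = tempfile.mkdtemp(prefix="spark-test-")
for name in ("a.txt", "b.txt"):
    with open(os.path.join(work_dir, name), "w") as f:
        f.write("data")

assert len(os.listdir(work_dir)) == 2
shutil.rmtree(work_dir)  # one call removes all temp files
```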
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/3074#issuecomment-81968401
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3074#issuecomment-81968372
[Test build #28679 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28679/consoleFull)
for PR 3074 at commit
Github user sryza commented on a diff in the pull request:
https://github.com/apache/spark/pull/5029#discussion_r26535353
--- Diff: core/src/test/scala/org/apache/spark/util/UtilsSuite.scala ---
@@ -370,7 +369,7 @@ class UtilsSuite extends FunSuite with
ResetSystemProperties {
Github user vanzin commented on the pull request:
https://github.com/apache/spark/pull/5029#issuecomment-81978100
LGTM (btw you're my hero).
---
Github user rxin commented on the pull request:
https://github.com/apache/spark/pull/4634#issuecomment-81977973
@mccheah can you make the couple minor changes I suggested? Other than
that, this change lgtm.
---
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/5019
---
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/5052#issuecomment-81989542
Is there an easy way to add a regression test for this?
---
Github user sryza commented on the pull request:
https://github.com/apache/spark/pull/5018#issuecomment-81999174
I have two main concerns about this patch.
The first is that I think the logic in `CoarseGrainedSchedulerBackend` and
`ExecutorAllocationManager` is
Github user jkbradley commented on the pull request:
https://github.com/apache/spark/pull/4906#issuecomment-8238
It's hard to state a hard cutoff for task size, but the Spark programming
guide recommends that tasks larger than about 20 KB are probably worth optimizing
[by
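One rough way to apply that ~20 KB guideline is to check the serialized size of the data a task captures. A hedged Python sketch (the dictionary and threshold here are illustrative, not a Spark API):

```python
import pickle

# A large object captured by a task closure; if its serialized size
# exceeds the ~20 KB guideline, consider broadcasting it instead of
# shipping it with every task.
big_table = {i: i * i for i in range(5000)}
serialized = pickle.dumps(big_table)
print(len(serialized) > 20 * 1024)  # True -> candidate for broadcast
```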
Github user chenghao-intel commented on the pull request:
https://github.com/apache/spark/pull/4382#issuecomment-82003000
Closing it since #4885 has been merged.
---
Github user yinxusen commented on the pull request:
https://github.com/apache/spark/pull/5049#issuecomment-82011218
@mengxr Don't we need extra unit tests? Are doctests good enough?
---
Github user swkimme commented on the pull request:
https://github.com/apache/spark/pull/5046#issuecomment-82017037
@rxin
Sure, I'll try to add some test.
---
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4087#issuecomment-82025636
[Test build #28685 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28685/consoleFull)
for PR 4087 at commit
Github user baishuo commented on the pull request:
https://github.com/apache/spark/pull/3895#issuecomment-82048390
thank you @liancheng, I have studied baishuo/spark#2, and I think that
is good :) @marmbrus
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4961#issuecomment-82054252
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4961#issuecomment-82054204
[Test build #28683 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28683/consoleFull)
for PR 4961 at commit
Github user debasish83 commented on the pull request:
https://github.com/apache/spark/pull/1290#issuecomment-82057716
@avulanov could you please point me to a stable branch that I can
experiment with... I am focused on collaborative filtering and have implemented
various matrix
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/5043#discussion_r26545744
--- Diff: core/src/main/scala/org/apache/spark/scheduler/DAGScheduler.scala
---
@@ -188,14 +188,13 @@ class DAGScheduler(
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4964#issuecomment-82060498
[Test build #28693 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28693/consoleFull)
for PR 4964 at commit
Github user zsxwing commented on the pull request:
https://github.com/apache/spark/pull/4588#issuecomment-82060820
I added a non-blocking method `def asyncSetupEndpointRefByUrl(url: String):
Future[RpcEndpointRef]` so that people can retrieve `RpcEndpointRef` in the
message loop.
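The idea of a non-blocking variant that returns a future can be sketched in Python with `concurrent.futures` (the registry class and method names below are illustrative stand-ins, not Spark's RPC API):

```python
from concurrent.futures import Future, ThreadPoolExecutor

class EndpointRegistry:
    """Hypothetical stand-in for an RPC environment."""

    def __init__(self):
        self._endpoints = {}
        self._pool = ThreadPoolExecutor(max_workers=2)

    def register(self, url, ref):
        self._endpoints[url] = ref

    def setup_endpoint_ref_by_url(self, url):
        # Blocking lookup: unsafe to call from inside a message loop.
        return self._endpoints[url]

    def async_setup_endpoint_ref_by_url(self, url) -> Future:
        # Non-blocking variant: returns a Future immediately, so the
        # caller's message loop is never blocked on the lookup.
        return self._pool.submit(self._endpoints.__getitem__, url)

registry = EndpointRegistry()
registry.register("spark://driver", "driver-ref")
fut = registry.async_setup_endpoint_ref_by_url("spark://driver")
print(fut.result())  # prints "driver-ref"
```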
GitHub user lazyman500 opened a pull request:
https://github.com/apache/spark/pull/5059
[Spark-5068][SQL]Fix bug query data when path doesn't exist for HiveContext
This PR follows up on PRs #3907, #3891, and #4356.
Following @marmbrus's and @liancheng's comments, I tried using fs.globStatus
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4087#issuecomment-82072739
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/3895#issuecomment-82080400
[Test build #28690 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28690/consoleFull)
for PR 3895 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/3895#issuecomment-82080453
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/5049#discussion_r26547652
--- Diff: python/pyspark/mllib/common.py ---
@@ -70,8 +70,8 @@ def _py2java(sc, obj):
obj = _to_java_object_rdd(obj)
elif isinstance(obj,
Github user mengxr commented on the pull request:
https://github.com/apache/spark/pull/5049#issuecomment-82079679
Not necessary: doctests serve as both examples and unit tests.
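For context, a doctest embeds runnable examples in a docstring, so the same text documents and tests the function. A minimal sketch with a hypothetical helper:

```python
def scale(vec, k):
    """Multiply every element of ``vec`` by ``k``.

    The examples below double as unit tests under ``doctest``:

    >>> scale([1, 2, 3], 2)
    [2, 4, 6]
    >>> scale([], 5)
    []
    """
    return [x * k for x in vec]

if __name__ == "__main__":
    import doctest
    doctest.testmod()  # executes the docstring examples as tests
```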
---
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5046#issuecomment-82079122
[Test build #28695 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28695/consoleFull)
for PR 5046 at commit
Github user jkbradley commented on a diff in the pull request:
https://github.com/apache/spark/pull/4906#discussion_r26539505
--- Diff:
mllib/src/main/scala/org/apache/spark/mllib/tree/model/treeEnsembleModels.scala
---
@@ -108,6 +110,58 @@ class GradientBoostedTreesModel(
Github user jkbradley commented on a diff in the pull request:
https://github.com/apache/spark/pull/4906#discussion_r26539550
--- Diff:
mllib/src/main/scala/org/apache/spark/mllib/tree/model/treeEnsembleModels.scala
---
@@ -108,6 +110,58 @@ class GradientBoostedTreesModel(
Github user jkbradley commented on a diff in the pull request:
https://github.com/apache/spark/pull/3636#discussion_r26540822
--- Diff:
mllib/src/main/scala/org/apache/spark/mllib/optimization/GradientDescent.scala
---
@@ -219,4 +265,39 @@ object GradientDescent extends Logging {
Github user jkbradley commented on a diff in the pull request:
https://github.com/apache/spark/pull/3636#discussion_r26540817
--- Diff:
mllib/src/main/scala/org/apache/spark/mllib/optimization/GradientDescent.scala
---
@@ -219,4 +265,39 @@ object GradientDescent extends Logging {
Github user jkbradley commented on the pull request:
https://github.com/apache/spark/pull/3636#issuecomment-82005205
With this change, we should probably constrain convergenceTol to be in [0,
1]. Could you please add that to the doc and add a check in setConvergenceTol?
Also,
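The suggested validation might look like the following Python sketch (the class and setter are illustrative; MLlib's actual implementation is Scala):

```python
class GradientDescentParams:
    """Illustrative parameter holder, not MLlib's actual class."""

    def __init__(self):
        self._convergence_tol = 0.001

    def set_convergence_tol(self, tol):
        # Constrain convergenceTol to [0, 1], as suggested in the review.
        if not 0.0 <= tol <= 1.0:
            raise ValueError(f"convergenceTol must be in [0, 1], got {tol}")
        self._convergence_tol = tol
        return self
```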
Github user jkbradley commented on a diff in the pull request:
https://github.com/apache/spark/pull/3636#discussion_r26540819
--- Diff:
mllib/src/main/scala/org/apache/spark/mllib/optimization/GradientDescent.scala
---
@@ -219,4 +265,39 @@ object GradientDescent extends Logging {
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4961#issuecomment-82015600
[Test build #28683 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28683/consoleFull)
for PR 4961 at commit
Github user leahmcguire commented on a diff in the pull request:
https://github.com/apache/spark/pull/4087#discussion_r26542594
--- Diff:
mllib/src/main/scala/org/apache/spark/mllib/classification/NaiveBayes.scala ---
@@ -156,9 +181,14 @@ object NaiveBayesModel extends
GitHub user vanzin opened a pull request:
https://github.com/apache/spark/pull/5056
[SPARK-6371] [build] Update version to 1.4.0-SNAPSHOT.
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/vanzin/spark SPARK-6371
Alternatively
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/5043#discussion_r26544667
--- Diff: core/src/main/scala/org/apache/spark/scheduler/DAGScheduler.scala
---
@@ -188,14 +188,13 @@ class DAGScheduler(
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4588#issuecomment-82058962
[Test build #28692 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28692/consoleFull)
for PR 4588 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5057#issuecomment-82083653
[Test build #28691 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28691/consoleFull)
for PR 5057 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/5057#issuecomment-82083697
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5046#issuecomment-82082655
[Test build #28696 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28696/consoleFull)
for PR 5046 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5036#issuecomment-82004900
[Test build #28682 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28682/consoleFull)
for PR 5036 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/5036#issuecomment-82004907
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user vanzin commented on the pull request:
https://github.com/apache/spark/pull/5018#issuecomment-82024746
+1 to this code being confusing and overly complicated. There are 3 places
tracking executor state (ExecutorAllocationManager,
CoarseGrainedSchedulerBackend and
Github user jongyoul commented on the pull request:
https://github.com/apache/spark/pull/4170#issuecomment-82031855
@tnachen @elyast I made a new issue about configuring mesos executor cores.
https://issues.apache.org/jira/browse/SPARK-6350
---
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4964#issuecomment-82041518
[Test build #28688 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28688/consoleFull)
for PR 4964 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5057#issuecomment-82054615
[Test build #28691 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28691/consoleFull)
for PR 5057 at commit
Github user baishuo commented on the pull request:
https://github.com/apache/spark/pull/3895#issuecomment-82054358
@marmbrus no problem, let me resolve the conflicts :)
---
Github user cloud-fan commented on a diff in the pull request:
https://github.com/apache/spark/pull/5043#discussion_r26546179
--- Diff: core/src/main/scala/org/apache/spark/scheduler/DAGScheduler.scala
---
@@ -188,14 +188,13 @@ class DAGScheduler(
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/5058#issuecomment-82062795
[Test build #28694 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28694/consoleFull)
for PR 5058 at commit
Github user chenghao-intel commented on a diff in the pull request:
https://github.com/apache/spark/pull/5045#discussion_r26546512
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/CheckAnalysis.scala
---
@@ -102,4 +102,8 @@ class CheckAnalysis {
Github user chenghao-intel commented on a diff in the pull request:
https://github.com/apache/spark/pull/5045#discussion_r26546785
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/logical/basicOperators.scala
---
@@ -179,6 +179,7 @@ case class Expand(
Github user chenghao-intel commented on a diff in the pull request:
https://github.com/apache/spark/pull/5045#discussion_r26546771
--- Diff:
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/plans/logical/basicOperators.scala
---
@@ -179,6 +179,7 @@ case class Expand(
Github user debasish83 commented on the pull request:
https://github.com/apache/spark/pull/1290#issuecomment-82070543
Also, how is https://github.com/apache/spark/pull/3222 different? For the
autoencoder, I'm confused about which one is the better starting point...
---
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/5059#issuecomment-82072538
Can one of the admins verify this patch?
---
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4961#issuecomment-82081257
[Test build #28689 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28689/consoleFull)
for PR 4961 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4964#issuecomment-82081843
[Test build #28693 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28693/consoleFull)
for PR 4964 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4961#issuecomment-82081327
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4964#issuecomment-82081893
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user yinxusen commented on a diff in the pull request:
https://github.com/apache/spark/pull/5049#discussion_r26542049
--- Diff: python/pyspark/mllib/common.py ---
@@ -70,8 +70,8 @@ def _py2java(sc, obj):
obj = _to_java_object_rdd(obj)
elif
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4961#issuecomment-82017408
[Test build #28684 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28684/consoleFull)
for PR 4961 at commit
Github user leahmcguire commented on a diff in the pull request:
https://github.com/apache/spark/pull/4087#discussion_r26543828
--- Diff:
mllib/src/main/scala/org/apache/spark/mllib/classification/NaiveBayes.scala ---
@@ -35,26 +39,30 @@ import org.apache.spark.sql.{DataFrame,
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/5052#issuecomment-82028679
Doh, I looked at this too quickly and somehow mixed this up with one of the
user classpath first options. This looks good to me, too.
---
Github user chenghao-intel commented on the pull request:
https://github.com/apache/spark/pull/4885#issuecomment-82034566
Thank you very much @liancheng, I will create another PR for the
requirements that we discussed above, and also the minor issues.
---
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4961#issuecomment-82039773
[Test build #28684 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28684/consoleFull)
for PR 4961 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4961#issuecomment-82039843
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user jerryshao commented on the pull request:
https://github.com/apache/spark/pull/4961#issuecomment-82047655
Jenkins, retest this please.
---