Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4586#issuecomment-76902644
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4586#issuecomment-76902638
[Test build #28225 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28225/consoleFull)
for PR 4586 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4869#issuecomment-76903032
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4869#issuecomment-76903025
[Test build #28224 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28224/consoleFull)
for PR 4869 at commit
Github user watermen commented on the pull request:
https://github.com/apache/spark/pull/4586#issuecomment-76903606
@adrian-wang I tried `if (proc.isInstanceOf[Driver] ||
proc.isInstanceOf[SetProcessor] || proc.isInstanceOf[AddResourceProcessor])`
as you suggested before, but the exception
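The quoted check dispatches on the concrete processor subtype before deciding whether Spark should handle the command itself. A minimal sketch of that pattern, using hypothetical stand-in classes rather than Hive's real `CommandProcessor` hierarchy:

```java
// Stand-in marker types; in Hive these would be org.apache.hadoop.hive.ql.Driver
// and the processors from org.apache.hadoop.hive.ql.processors.
interface CommandProcessor {}
class Driver implements CommandProcessor {}
class SetProcessor implements CommandProcessor {}
class AddResourceProcessor implements CommandProcessor {}

public class ProcessorDispatch {
    // Java equivalent of the quoted Scala:
    // proc.isInstanceOf[Driver] || proc.isInstanceOf[SetProcessor] || proc.isInstanceOf[AddResourceProcessor]
    static boolean handledBySpark(CommandProcessor proc) {
        return proc instanceof Driver
            || proc instanceof SetProcessor
            || proc instanceof AddResourceProcessor;
    }

    public static void main(String[] args) {
        System.out.println(handledBySpark(new Driver()));              // true
        System.out.println(handledBySpark(new CommandProcessor() {})); // false
    }
}
```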
Github user adrian-wang commented on the pull request:
https://github.com/apache/spark/pull/4586#issuecomment-76904201
@watermen Can you try again, with this code?
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your
Github user viirya commented on the pull request:
https://github.com/apache/spark/pull/4870#issuecomment-76902839
CC @marmbrus.
GitHub user viirya opened a pull request:
https://github.com/apache/spark/pull/4870
[SPARK-6134][SQL] Fix wrong datatype for casting FloatType and default
LongType value in defaultPrimitive
In `CodeGenerator`, the casting on `FloatType` should use `FloatType`
instead of
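The class of bug SPARK-6134 describes is a code generator emitting a literal or cast for the wrong type, e.g. a double default where a float is needed. A hedged sketch of a `defaultPrimitive`-style mapping (not Spark's actual `CodeGenerator` code; the type names and `-1` defaults are illustrative):

```java
public class DefaultPrimitive {
    // Maps a data type name to the Java literal a code generator would emit
    // as its default. Note the suffixes: a FloatType default must be a float
    // literal ("f") and a LongType default a long literal ("L"); emitting the
    // DoubleType or IntegerType form instead is exactly the SPARK-6134 bug class.
    static String defaultLiteral(String dataType) {
        switch (dataType) {
            case "FloatType":   return "-1.0f";
            case "DoubleType":  return "-1.0";
            case "LongType":    return "-1L";
            case "IntegerType": return "-1";
            default:            return "null";
        }
    }

    public static void main(String[] args) {
        System.out.println(defaultLiteral("FloatType")); // -1.0f
        System.out.println(defaultLiteral("LongType"));  // -1L
    }
}
```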
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4870#issuecomment-76903145
[Test build #28226 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28226/consoleFull)
for PR 4870 at commit
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/4620#issuecomment-76908737
@Sephiroth-Lin you will have to close it yourself
Github user watermen commented on the pull request:
https://github.com/apache/spark/pull/4586#issuecomment-76908627
@adrian-wang I think `sparkContext.addJar` is the same as `bin/spark-sql --jars
xxx.jar`, but the former fails and the latter succeeds.
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/4848#discussion_r25671346
--- Diff: core/src/main/scala/org/apache/spark/deploy/master/Master.scala
---
@@ -774,7 +778,7 @@ private[spark] class Master(
case fnf:
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4848#issuecomment-76909978
[Test build #28227 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28227/consoleFull)
for PR 4848 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4872#issuecomment-76912787
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4872#issuecomment-76914295
[Test build #28230 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28230/consoleFull)
for PR 4872 at commit
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/4872#issuecomment-76916058
Isn't it a little extreme to remove the tests? What about just excluding
the Guava dep so that 14.0 is used? It may just work.
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/2765#issuecomment-76939929
This is getting better. I am still concerned about the MiMa excludes. There
should be no API changes, so, there should be no excludes. Those are still in
this PR.
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/2765#discussion_r25684896
--- Diff:
streaming/src/main/scala/org/apache/spark/streaming/api/java/JavaStreamingContext.scala
---
@@ -293,6 +302,34 @@ class JavaStreamingContext(val
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4875#issuecomment-76964290
Can one of the admins verify this patch?
GitHub user matuskik opened a pull request:
https://github.com/apache/spark/pull/4875
[SPARK-6139] [Streaming] Allow pre-populate sliding window with initial ...
Each new computed window in WindowedDStream checks for empty slots and
fills them with the initial list of supplied
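The idea in SPARK-6139 is that a sliding window which is not yet full gets its empty leading slots padded from a supplied initial list. A minimal sketch over plain lists (the method name and signature are hypothetical, not the `WindowedDStream` API):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class PrePopulatedWindow {
    // Returns the current window of at most `windowSize` elements: the most
    // recent events, with any empty leading slots filled from the tail of
    // the supplied initial list.
    static <T> List<T> window(List<T> initial, List<T> events, int windowSize) {
        List<T> out = new ArrayList<>();
        int missing = windowSize - Math.min(windowSize, events.size());
        // Fill the empty slots from the end of the initial list.
        int start = Math.max(0, initial.size() - missing);
        out.addAll(initial.subList(start, initial.size()));
        // Then append the most recent events that fit in the window.
        int from = Math.max(0, events.size() - windowSize);
        out.addAll(events.subList(from, events.size()));
        return out;
    }

    public static void main(String[] args) {
        // Only one event arrived, so two slots come from the initial list.
        System.out.println(window(Arrays.asList(1, 2, 3), Arrays.asList(4), 3)); // [2, 3, 4]
    }
}
```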
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4876#issuecomment-76969712
[Test build #28234 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28234/consoleFull)
for PR 4876 at commit
GitHub user srowen opened a pull request:
https://github.com/apache/spark/pull/4876
SPARK-5143 [BUILD] [WIP] spark-network-yarn 2.11 depends on
spark-network-shuffle 2.10
Update `scala.binary.version` prop in POM when switching between Scala
2.10/2.11
@ScrapCodes for
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/4206#issuecomment-76961879
At this point I'd say let's just remove the class if it's not used.
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4872#issuecomment-76911443
[Test build #28229 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28229/consoleFull)
for PR 4872 at commit
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/4055#issuecomment-76916713
@cloud-fan yeah I hear you although this method is defined in the
superclass, not just made up for this PR.
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/4869#issuecomment-76909031
I just confirmed locally that this fix is effective. I ran the
`JavaAPISuite` 100 times back-to-back on my local machine. *Before* this patch
the fail count was 5
Github user liyezhang556520 commented on a diff in the pull request:
https://github.com/apache/spark/pull/4848#discussion_r25671520
--- Diff: core/src/main/scala/org/apache/spark/deploy/master/Master.scala
---
@@ -774,7 +778,7 @@ private[spark] class Master(
case fnf:
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4871#issuecomment-76910010
[Test build #28228 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28228/consoleFull)
for PR 4871 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4848#issuecomment-76921128
[Test build #28227 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28227/consoleFull)
for PR 4848 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4871#issuecomment-76921197
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4848#issuecomment-76921141
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4871#issuecomment-76921184
[Test build #28228 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28228/consoleFull)
for PR 4871 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4874#issuecomment-76973737
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
GitHub user squito opened a pull request:
https://github.com/apache/spark/pull/4877
[SPARK-5949] HighlyCompressedMapStatus needs more classes registered w/ kryo
https://issues.apache.org/jira/browse/SPARK-5949
You can merge this pull request into a Git repository by running:
$
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4874#issuecomment-76973717
[Test build #28233 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28233/consoleFull)
for PR 4874 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4877#issuecomment-76977216
[Test build #28235 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28235/consoleFull)
for PR 4877 at commit
Github user twinkle-sachdeva commented on a diff in the pull request:
https://github.com/apache/spark/pull/4845#discussion_r25685088
--- Diff: core/src/main/scala/org/apache/spark/SparkContext.scala ---
@@ -370,6 +370,7 @@ class SparkContext(config: SparkConf) extends Logging
with
Github user o-mdr commented on the pull request:
https://github.com/apache/spark/pull/4871#issuecomment-76942656
`def stop() { SparkContext.SPARK_CONTEXT_CONSTRUCTOR_LOCK.synchronized {`
This line should have prevented two threads from going into the inner `{...}`
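The pattern under discussion is an idempotent `stop()`: both the check of the `stopped` flag and the cleanup must happen under the same lock, so cleanup runs exactly once even under concurrent calls. A hedged Java sketch (hypothetical names, not `SparkContext`'s actual code):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class IdempotentStop {
    // Mirrors the static SPARK_CONTEXT_CONSTRUCTOR_LOCK in the quoted snippet.
    private static final Object CONSTRUCTOR_LOCK = new Object();
    private boolean stopped = false;
    final AtomicInteger cleanups = new AtomicInteger(); // stand-in for real cleanup

    public void stop() {
        synchronized (CONSTRUCTOR_LOCK) {
            if (stopped) return;      // check and set under the SAME lock,
            stopped = true;           // so no second thread sees stopped == false
            cleanups.incrementAndGet();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        IdempotentStop ctx = new IdempotentStop();
        Thread t1 = new Thread(ctx::stop);
        Thread t2 = new Thread(ctx::stop);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(ctx.cleanups.get()); // prints 1
    }
}
```

Reading the flag outside the lock, as the later comment in this thread notes, can still observe stale state; only accesses inside the `synchronized` block are ordered.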
GitHub user srowen opened a pull request:
https://github.com/apache/spark/pull/4873
SPARK-4044 [CORE] Thriftserver fails to start when JAVA_HOME points to JRE
instead of JDK
So, I think it would be a step too far to tell people they have to run
Spark with a JDK instead of a JRE.
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4654#issuecomment-76941161
[Test build #28231 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28231/consoleFull)
for PR 4654 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4873#issuecomment-76944665
[Test build #28232 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28232/consoleFull)
for PR 4873 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4876#issuecomment-76987849
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4876#issuecomment-76987840
[Test build #28234 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28234/consoleFull)
for PR 4876 at commit
Github user sryza commented on the pull request:
https://github.com/apache/spark/pull/4634#issuecomment-76991315
Yeah, the argument is that it's never needed.
Github user MechCoder commented on the pull request:
https://github.com/apache/spark/pull/4819#issuecomment-76990366
Ouch. I just realised what you meant. Scratch my previous couple of
comments. :/
Github user MechCoder commented on the pull request:
https://github.com/apache/spark/pull/4819#issuecomment-77003845
@jkbradley Just one quick clarification, please.
When you say `evaluateEachIteration` should return an Array of Doubles, do
you mean that each element
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/4873#discussion_r25714873
--- Diff: bin/compute-classpath.sh ---
@@ -121,10 +124,15 @@ datanucleus_jars=$(find $datanucleus_dir
2>/dev/null | grep datanucleus-.*\\
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/4845#discussion_r25717369
--- Diff:
core/src/main/scala/org/apache/spark/deploy/history/HistoryPage.scala ---
@@ -34,18 +37,31 @@ private[spark] class HistoryPage(parent:
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4879#issuecomment-77016196
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/4877#discussion_r25722025
--- Diff:
core/src/test/scala/org/apache/spark/serializer/KryoSerializerSuite.scala ---
@@ -242,6 +244,15 @@ class KryoSerializerSuite extends FunSuite with
Github user davies commented on the pull request:
https://github.com/apache/spark/pull/4863#issuecomment-77008830
LGTM
GitHub user ravipesala opened a pull request:
https://github.com/apache/spark/pull/4878
[SPARK-5920][CORE] BufferedInputStream is added at required places
BufferedInputStream and BufferedOutputStream are added at the required places.
You can merge this pull request into a Git repository
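The change this PR proposes is the standard wrapping pattern: a raw `FileInputStream` issues one syscall per `read()`, while a `BufferedInputStream` serves reads from an in-memory buffer refilled in large chunks. A self-contained sketch (the helper name is illustrative, not from the PR):

```java
import java.io.BufferedInputStream;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;

public class BufferedRead {
    // Reads a file byte-by-byte; thanks to BufferedInputStream each read()
    // hits the buffer rather than the OS.
    static String readAll(Path p) throws IOException {
        StringBuilder sb = new StringBuilder();
        try (InputStream in = new BufferedInputStream(new FileInputStream(p.toFile()))) {
            int b;
            while ((b = in.read()) != -1) sb.append((char) b);
        }
        return sb.toString();
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("buffered-demo", ".txt");
        Files.write(tmp, "hello buffered io".getBytes("UTF-8"));
        System.out.println(readAll(tmp)); // hello buffered io
        Files.delete(tmp);
    }
}
```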
Github user aarondav commented on a diff in the pull request:
https://github.com/apache/spark/pull/4859#discussion_r25719272
--- Diff:
sql/core/src/main/scala/org/apache/spark/sql/jdbc/JDBCRelation.scala ---
@@ -111,17 +113,20 @@ private[sql] class DefaultSource extends
Github user aarondav commented on the pull request:
https://github.com/apache/spark/pull/4859#issuecomment-77020949
I think it would also be reasonable to add to the SQLContext#jdbc() methods
a parameter for `options: Map[String, String]` like was done for
createExternalTable. This
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/4869#issuecomment-77005785
In this case, if the cleaner is in the middle of cleaning a broadcast,
for instance, it will do so through SparkEnv.get.blockManager, which could be
one that belongs
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/4869#issuecomment-77006078
@andrewor14 Do you think that there's any risk of a cleanup task hanging
indefinitely and thus preventing the SparkContext from being stopped? That's
the only problem
Github user davies commented on a diff in the pull request:
https://github.com/apache/spark/pull/4863#discussion_r25714389
--- Diff:
mllib/src/main/scala/org/apache/spark/mllib/evaluation/BinaryClassificationMetrics.scala
---
@@ -53,6 +54,13 @@ class BinaryClassificationMetrics(
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/4874#discussion_r25715262
--- Diff: docs/building-spark.md ---
@@ -9,6 +9,10 @@ redirect_from: building-with-maven.html
Building Spark using Maven requires Maven 3.0.4 or
Github user jkbradley commented on the pull request:
https://github.com/apache/spark/pull/4206#issuecomment-76997091
@Lewuathe Would you have time to close this and do a new PR to remove it?
If not, I could make it part of my PR for
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4863#issuecomment-77010112
[Test build #28237 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28237/consoleFull)
for PR 4863 at commit
Github user nishkamravi2 commented on the pull request:
https://github.com/apache/spark/pull/4836#issuecomment-77018231
This number has gone up from 5-8% to 7-10% for the same set of workloads
with recent versions of Spark/YARN (no idea why). Bumping the default up seems
like the
Github user coderxiang commented on the pull request:
https://github.com/apache/spark/pull/4879#issuecomment-77021387
Good job!
Do we have enough time to catch the release, especially if there are some
incompatible APIs?
In my case, the coefficients differ but the performance
Github user haoyuan commented on the pull request:
https://github.com/apache/spark/pull/4867#issuecomment-77006434
@aarondav @pwendell
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/4873#discussion_r25714556
--- Diff: bin/compute-classpath.sh ---
@@ -121,10 +124,15 @@ datanucleus_jars=$(find $datanucleus_dir
2>/dev/null | grep datanucleus-.*\\
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4878#issuecomment-77009074
[Test build #28236 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28236/consoleFull)
for PR 4878 at commit
GitHub user dbtsai opened a pull request:
https://github.com/apache/spark/pull/4879
[SPARK-6141][MLlib] Upgrade Breeze from 0.10 to 0.11 to fix convergence bug
LBFGS and OWLQN in Breeze 0.10 has convergence check bug.
This is fixed in 0.11, see the description in Breeze project
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/4845#discussion_r25718529
--- Diff:
core/src/main/scala/org/apache/spark/deploy/history/HistoryPage.scala ---
@@ -34,18 +37,31 @@ private[spark] class HistoryPage(parent:
Github user sryza commented on the pull request:
https://github.com/apache/spark/pull/4836#issuecomment-77018973
Bumping the default to 10% seems reasonable to me as well.
Github user nishkamravi2 commented on the pull request:
https://github.com/apache/spark/pull/4219#issuecomment-77022278
@zsxwing @andrewor14 I'm noticing a significant performance regression
with this commit (SPARK-6142). Commenting out finalize recovers performance (as
expected).
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/4873#discussion_r25713616
--- Diff: bin/compute-classpath.sh ---
@@ -121,10 +124,15 @@ datanucleus_jars=$(find $datanucleus_dir
2>/dev/null | grep datanucleus-.*\\
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4879#issuecomment-77014949
[Test build #28238 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28238/consoleFull)
for PR 4879 at commit
Github user mengxr commented on a diff in the pull request:
https://github.com/apache/spark/pull/4863#discussion_r25713371
--- Diff:
mllib/src/main/scala/org/apache/spark/mllib/evaluation/BinaryClassificationMetrics.scala
---
@@ -53,6 +54,13 @@ class BinaryClassificationMetrics(
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4879#issuecomment-77016190
[Test build #28238 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28238/consoleFull)
for PR 4879 at commit
Github user srowen commented on a diff in the pull request:
https://github.com/apache/spark/pull/4878#discussion_r25714850
--- Diff:
core/src/main/scala/org/apache/spark/deploy/master/FileSystemPersistenceEngine.scala
---
@@ -58,7 +58,7 @@ private[spark] class
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4877#issuecomment-77023758
[Test build #28239 has
started](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28239/consoleFull)
for PR 4877 at commit
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4863#issuecomment-77025548
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/4877#discussion_r25722448
--- Diff:
core/src/test/scala/org/apache/spark/serializer/KryoSerializerSuite.scala ---
@@ -23,8 +23,10 @@ import scala.reflect.ClassTag
import
Github user rxin commented on a diff in the pull request:
https://github.com/apache/spark/pull/4877#discussion_r25724861
--- Diff:
core/src/test/scala/org/apache/spark/serializer/KryoSerializerSuite.scala ---
@@ -242,6 +244,24 @@ class KryoSerializerSuite extends FunSuite with
Github user jkbradley commented on a diff in the pull request:
https://github.com/apache/spark/pull/4087#discussion_r25727352
--- Diff:
mllib/src/test/scala/org/apache/spark/mllib/classification/NaiveBayesSuite.scala
---
@@ -85,19 +92,66 @@ class NaiveBayesSuite extends FunSuite
Github user jkbradley commented on a diff in the pull request:
https://github.com/apache/spark/pull/4087#discussion_r25727333
--- Diff:
mllib/src/main/scala/org/apache/spark/mllib/classification/NaiveBayes.scala ---
@@ -35,26 +52,27 @@ import org.apache.spark.sql.{DataFrame,
Github user squito commented on a diff in the pull request:
https://github.com/apache/spark/pull/4877#discussion_r25727630
--- Diff:
core/src/test/scala/org/apache/spark/serializer/KryoSerializerSuite.scala ---
@@ -23,8 +23,10 @@ import scala.reflect.ClassTag
import
Github user jkbradley commented on a diff in the pull request:
https://github.com/apache/spark/pull/4087#discussion_r25727349
--- Diff:
mllib/src/test/scala/org/apache/spark/mllib/classification/NaiveBayesSuite.scala
---
@@ -85,19 +92,66 @@ class NaiveBayesSuite extends FunSuite
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/4874#discussion_r25727437
--- Diff: docs/building-spark.md ---
@@ -9,6 +9,10 @@ redirect_from: building-with-maven.html
Building Spark using Maven requires Maven 3.0.4 or
Github user srowen commented on the pull request:
https://github.com/apache/spark/pull/4871#issuecomment-77047210
It makes _some_ tiny difference; my observation was that the `stopped`
flag's state can be observed outside this `synchronized` method. So its
interleaving with other
Github user jkbradley commented on the pull request:
https://github.com/apache/spark/pull/4087#issuecomment-77047237
I'm trying to see if there's a better solution for the NaiveBayesModelType
which will permit the same API in both Scala and Java. I'll update soon.
Github user squito commented on a diff in the pull request:
https://github.com/apache/spark/pull/4877#discussion_r25724532
--- Diff:
core/src/test/scala/org/apache/spark/serializer/KryoSerializerSuite.scala ---
@@ -23,8 +23,10 @@ import scala.reflect.ClassTag
import
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/4874#issuecomment-77041640
Yeah, LGTM. I'm surprised this is not already documented. Merging into
master and 1.3. There were some conflicts to merge this into older branches, so
feel free to
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4877#issuecomment-77044700
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user marmbrus commented on the pull request:
https://github.com/apache/spark/pull/4872#issuecomment-77045250
+1 to moving these to integration tests, especially if they are causing
build problems. Can you make sure there is a JIRA somewhere (where?) to make
sure we don't
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4877#issuecomment-77044686
[Test build #28241 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28241/consoleFull)
for PR 4877 at commit
Github user SparkQA commented on the pull request:
https://github.com/apache/spark/pull/4863#issuecomment-77025532
[Test build #28237 has
finished](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/28237/consoleFull)
for PR 4863 at commit
Github user vanzin commented on a diff in the pull request:
https://github.com/apache/spark/pull/4877#discussion_r25722404
--- Diff:
core/src/test/scala/org/apache/spark/serializer/KryoSerializerSuite.scala ---
@@ -242,6 +244,15 @@ class KryoSerializerSuite extends FunSuite with
Github user andrewor14 commented on the pull request:
https://github.com/apache/spark/pull/4219#issuecomment-77033874
Thanks for reporting this. I will just revert this one altogether and
consider a different alternative after the release. This bug has been an issue
since 1.0 and is
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4877#issuecomment-77035408
Test FAILed.
Refer to this link for build results (access rights to CI server needed):
Github user asfgit closed the pull request at:
https://github.com/apache/spark/pull/4874
Github user AmplabJenkins commented on the pull request:
https://github.com/apache/spark/pull/4877#issuecomment-77045639
Test PASSed.
Refer to this link for build results (access rights to CI server needed):
Github user andrewor14 commented on a diff in the pull request:
https://github.com/apache/spark/pull/4844#discussion_r25732503
--- Diff: core/src/main/scala/org/apache/spark/deploy/master/Master.scala
---
@@ -837,7 +840,8 @@ private[spark] class Master(
driver.state =
Github user rxin commented on the pull request:
https://github.com/apache/spark/pull/4877#issuecomment-77032050
I'd really like to automate the style checker so it can catch more stuff
like this... :)
Github user JoshRosen commented on the pull request:
https://github.com/apache/spark/pull/4869#issuecomment-77035463
There aren't great alternatives here because the root problem is that we
have a bunch of global shared state, so it's kind of hard to avoid
synchronization here
Github user dbtsai commented on the pull request:
https://github.com/apache/spark/pull/4879#issuecomment-77044998
@coderxiang Breeze seems to have accidentally removed the public constructor of
CSCMatrix, and we have a PR to Breeze to address it. Let's see if we can make
it.