Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/646#issuecomment-42763279
A better solution: [PR 730](https://github.com/apache/spark/pull/730)
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub
GitHub user witgo opened a pull request:
https://github.com/apache/spark/pull/688
The org.datanucleus:* should not be packaged into spark-assembly-*.jar
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/witgo/spark SPARK-1644
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/332#issuecomment-42771828
@pwendell @marmbrus @rxin
typesafehub/scala-logging#4 has been resolved by typesafehub/scala-logging#15
The PR should now be able to merge into master.
Github user witgo commented on a diff in the pull request:
https://github.com/apache/spark/pull/332#discussion_r12512415
--- Diff: core/src/main/scala/org/apache/spark/Logging.scala ---
@@ -116,7 +121,8 @@ trait Logging {
val log4jInitialized
Github user witgo commented on a diff in the pull request:
https://github.com/apache/spark/pull/332#discussion_r12512463
--- Diff: project/SparkBuild.scala ---
@@ -317,6 +317,7 @@ object SparkBuild extends Build {
val excludeFastutil = ExclusionRule(organization
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/590#issuecomment-42792564
@pwendell
The big changes have been removed.
The PR can now be merged into master and branch-1.0.
---
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/590#issuecomment-42809856
@srowen
It has been removed.
---
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/590#issuecomment-42809563
@srowen
In some cases, `commons-lang` is pulled in at multiple versions.
`fairscheduler.xml` and `hive-site.xml` should be ignored.
---
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/590#issuecomment-42811589
@srowen
I will submit a new Pull Request to solve this problem.
---
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/590#issuecomment-42809975
```
[INFO] | +- org.apache.hadoop:hadoop-client:jar:1.0.4:compile
[INFO] | | \- org.apache.hadoop:hadoop-core:jar:1.0.4:compile
[INFO
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/754#issuecomment-42911672
@srowen do you have time to review the code?
---
Github user witgo closed the pull request at:
https://github.com/apache/spark/pull/677
---
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/694#issuecomment-43038842
@mateiz
What do you think of [this
demo](https://github.com/witgo/spark/compare/SPARK-1712_new3)?
---
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/694#issuecomment-43044976
@mateiz
Unit tests have been added.
---
Github user witgo closed the pull request at:
https://github.com/apache/spark/pull/694
---
Github user witgo commented on a diff in the pull request:
https://github.com/apache/spark/pull/677#discussion_r12366698
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/TaskSetManager.scala ---
@@ -414,6 +415,14 @@ private[spark] class TaskSetManager(
// we
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/694#issuecomment-43046219
@mateiz
The problem seems to be in the current master branch;
the local tests pass.
---
GitHub user witgo reopened a pull request:
https://github.com/apache/spark/pull/694
[SPARK-1712]: TaskDescription instance being too big causes Spark to hang
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/witgo/spark SPARK
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/694#issuecomment-43064571
I do not know what causes the error:
org.apache.spark.SparkException: Job aborted due to stage failure: Task
2.0:0 failed 1 times, most recent failure: Exception
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/694#issuecomment-42975919
There is [another
solution](https://github.com/witgo/spark/compare/SPARK-1712_new3)
---
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/773#issuecomment-43106294
log:
```
[info] ReplSuite:
[info] - propagation of local properties (4 seconds, 979 milliseconds)
[info] - simple foreach with accumulator (4 seconds, 150
GitHub user witgo opened a pull request:
https://github.com/apache/spark/pull/694
SPARK-1712: TaskDescription instance being too big causes Spark to hang
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/witgo/spark SPARK-1712_new
Github user witgo commented on a diff in the pull request:
https://github.com/apache/spark/pull/776#discussion_r12675655
--- Diff:
core/src/main/scala/org/apache/spark/rdd/ParallelCollectionRDD.scala ---
@@ -128,18 +137,17 @@ private object ParallelCollectionRDD
GitHub user witgo opened a pull request:
https://github.com/apache/spark/pull/773
Fix: sbt test throws java.lang.OutOfMemoryError: PermGen space
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/witgo/spark sbt_javaOptions
GitHub user witgo opened a pull request:
https://github.com/apache/spark/pull/786
Improve maven plugin configuration
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/witgo/spark maven_plugin
Alternatively you can review
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/713#issuecomment-43298969
@pwendell
The code style issues have been fixed.
---
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/760#issuecomment-43309863
In your code:
`sc.parallelize(1L to 2L,4).zip(sc.parallelize(11 to 12,4)).collect`
returns
`Array[(Long, Int)] = Array((1,11), (2,12))`
This is the right
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/794#issuecomment-43288330
Related work: #713
---
Github user witgo closed the pull request at:
https://github.com/apache/spark/pull/590
---
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/760#issuecomment-43394756
@kanzhang
```
scala> sc.parallelize((1D to 2D).by(0.2),4).collect
res0: Array[Double] = Array(1.0, 1.2, 1.6, 1.8)
```
```
scala> sc.parallelize
Github user witgo commented on a diff in the pull request:
https://github.com/apache/spark/pull/694#discussion_r12765545
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/cluster/CoarseGrainedSchedulerBackend.scala
---
@@ -140,8 +141,29 @@ class
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/760#issuecomment-43396698
@kanzhang
In any case, we should fix the following code:
`slices += r.take(sliceSize).asInstanceOf[Seq[T]]`.
---
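For context, the fix under discussion is to derive each slice from absolute start/end indices rather than repeatedly taking elements from the range. A minimal sketch of that idea in Python (illustrative only; names here are hypothetical, not Spark's `ParallelCollectionRDD` code):

```python
def slice_collection(seq, num_slices):
    """Split seq into num_slices contiguous slices using absolute
    start/end indices, so every slice is computed the same way and
    concatenating the slices always reproduces the original sequence."""
    if num_slices < 1:
        raise ValueError("num_slices must be positive")
    n = len(seq)
    return [seq[i * n // num_slices:(i + 1) * n // num_slices]
            for i in range(num_slices)]

print(slice_collection(list(range(10)), 4))
# [[0, 1], [2, 3, 4], [5, 6], [7, 8, 9]]
```

Because each boundary is computed independently from `i`, no element is dropped or duplicated regardless of how the underlying sequence generates its values.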
GitHub user witgo opened a pull request:
https://github.com/apache/spark/pull/811
Convert spark.cleaner.ttl.* to lowercase
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/witgo/spark MetadataCleanerType
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/760#issuecomment-43433310
A simple solution
```scala
object ParallelCollectionRDD {
/**
* Slice a collection into numSlices sub-collections. One extra thing we
do here
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/760#issuecomment-43461175
@kanzhang
```
scala> val d=(1D to 2D).by(0.2)
d: scala.collection.immutable.NumericRange[Double] = NumericRange(1.0, 1.2,
1.4, 1.5999
GitHub user witgo opened a pull request:
https://github.com/apache/spark/pull/820
[WIP][SPARK-1875]: NoClassDefFoundError: StringUtils when building against Hadoop 1
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/witgo
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/820#issuecomment-43465158
@pwendell
Do you have time to review the code?
---
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/820#issuecomment-43465677
Oh, I'm sorry.
---
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/820#issuecomment-43468949
@mateiz
This problem only occurs in spark-assembly_2.10 and will not affect user
testing.
```
[INFO] --- maven-dependency-plugin:2.8:tree (default-cli
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/820#issuecomment-43470508
@mateiz
`spark-examples_2.10` matches the situation you describe
```
[INFO]
[INFO] --- maven-dependency-plugin:2.8:tree (default-cli) @
spark
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/820#issuecomment-43471608
@srowen
I agree with what @mateiz said. We should not exclude commons-lang.
---
GitHub user witgo opened a pull request:
https://github.com/apache/spark/pull/824
SPARK-1875: NoClassDefFoundError: StringUtils when building against Hadoop 1
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/witgo/spark
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/824#issuecomment-43475257
@srowen
Is the following code in line with your thoughts?
https://github.com/witgo/spark/compare/SPARK-1875_new
---
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/824#issuecomment-43476568
It has been modified.
---
GitHub user witgo opened a pull request:
https://github.com/apache/spark/pull/828
[WIP]Improve ALS resource usage
Currently, in the ALS algorithm, RDDs cannot be cleaned up.
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/witgo/spark
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/760#issuecomment-43579042
I mean that different methods on `NumericRange[Double]` produce different results,
so we should just guarantee that the `slice` method returns consistent results.
---
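The inconsistency being discussed comes from ordinary binary floating point. A small illustration in Python (not the Scala `NumericRange` code) of how accumulating a 0.2 step drifts away from the exact decimals:

```python
# Build 1.0, 1.2, 1.4, ... by repeated addition, the way a stepped
# range might. Binary floating point cannot represent 0.2 exactly,
# so the accumulated values drift from the decimals one expects.
vals = []
x = 1.0
while x <= 2.0:
    vals.append(x)
    x += 0.2

print(vals)
# vals[3] prints as 1.5999999999999999 rather than 1.6 -- the same
# drift visible in the NumericRange output quoted earlier.
```

Any API that derives its elements by repeated accumulation (as opposed to multiplying the step by an index each time) can therefore disagree with one that does not, which is why pinning down one consistent `slice` behavior matters.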
Github user witgo closed the pull request at:
https://github.com/apache/spark/pull/820
---
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/828#issuecomment-43581291
@tdas CheckpointRDD is not properly cleaned.
---
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/828#issuecomment-43581755
@mateiz Why must the checkpoint data be written to the file system?
---
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/828#issuecomment-43583620
@mateiz It is not necessary to write it to the file system. After all,
no other RDD is reading it. I think the checkpoint data should be put
into the BlockManager, so
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/828#issuecomment-43589674
[The
code](https://github.com/witgo/spark/commit/6d7f2408a40bf4bb2889bf66fa61bced782cdefc#diff-2b593e0b4bd6eddab37f04968baa826c)
will make the checkpoint directory larger
Github user witgo closed the pull request at:
https://github.com/apache/spark/pull/811
---
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/828#issuecomment-43608181
@mateiz @mengxr
I added a new RDD operation, `cachePoint`.
---
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/828#issuecomment-43656940
Another [solution](https://github.com/witgo/spark/compare/cachePoint).
---
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/828#issuecomment-43790944
@mateiz, @mengxr
I am using [the code](https://github.com/witgo/spark/compare/cachePoint) to
test ALS.
A brief description of the test:
| Item
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/828#issuecomment-43840745
@tdas
You're right: the code breaks the fault-tolerance properties of RDDs.
The ideal solution is automatic cleanup and rebuilding of shuffle data.
---
GitHub user witgo opened a pull request:
https://github.com/apache/spark/pull/855
Automatically clean up checkpoint data
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/witgo/spark cleanup_checkpoint_date
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/855#issuecomment-43969051
@tdas
Should it be optional, with the default off?
---
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/855#issuecomment-43971489
@mridulm @tdas
The code has been updated.
Now automatic cleanup of checkpoint data is optional.
---
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/828#issuecomment-44122991
I am using [the
code](https://github.com/witgo/spark/compare/cleanup_checkpoint_date_als) to
test ALS.
A brief description of the test:
| Item | Description
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/889#issuecomment-44235434
spark-hive = commons-codec 1.4
spark-sql = commons-codec 1.5
```
[INFO]
[INFO
GitHub user witgo opened a pull request:
https://github.com/apache/spark/pull/894
[SPARK-1930] Containers running beyond memory limits were killed
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/witgo/spark SPARK-1930
Github user witgo commented on a diff in the pull request:
https://github.com/apache/spark/pull/894#discussion_r13131123
--- Diff:
yarn/stable/src/main/scala/org/apache/spark/deploy/yarn/YarnAllocationHandler.scala
---
@@ -90,6 +90,12 @@ private[yarn] class YarnAllocationHandler
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/894#issuecomment-44424182
I agree with @sryza. It would be better for Spark to handle these automatically.
Of course, we can still allow users to manually specify a special value.
---
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/907#issuecomment-44487759
@colorant
This is a big change. Can you explain the reason for it?
---
Github user witgo commented on a diff in the pull request:
https://github.com/apache/spark/pull/655#discussion_r13227671
--- Diff:
yarn/stable/src/main/scala/org/apache/spark/deploy/yarn/YarnAllocationHandler.scala
---
@@ -105,278 +96,222 @@ private[yarn] class
GitHub user witgo opened a pull request:
https://github.com/apache/spark/pull/921
In some cases, YARN does not automatically restart the container
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/witgo/spark allocateExecutors
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/921#issuecomment-44719589
@sryza
When `yarnAllocator.getNumExecutorsFailed` returns a value greater than
zero,
`yarnAllocator.getNumExecutorsRunning < args.numExecutors` is true forever
Github user witgo commented on a diff in the pull request:
https://github.com/apache/spark/pull/894#discussion_r13259895
--- Diff:
yarn/alpha/src/main/scala/org/apache/spark/deploy/yarn/ExecutorLauncher.scala
---
@@ -92,21 +92,22 @@ class ExecutorLauncher(args
GitHub user witgo opened a pull request:
https://github.com/apache/spark/pull/929
Improve ALS algorithm resource usage
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/witgo/spark improve_als
Github user witgo closed the pull request at:
https://github.com/apache/spark/pull/828
---
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/828#issuecomment-44742037
This solution is not perfect, so I am temporarily closing this. See the new #929.
---
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/894#issuecomment-4410
@mridulm
Is the following code in line with your thoughts?
https://github.com/witgo/spark/compare/SPARK-1930_different
---
GitHub user witgo opened a pull request:
https://github.com/apache/spark/pull/940
Update breeze to version 0.8.1
`breeze 0.8.1` depends on `scala-logging-slf4j 2.1.1`; the relevant code
is in #332.
You can merge this pull request into a Git repository by running:
$ git pull
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/940#issuecomment-44911857
@markhamstra, `breeze 0.7` does not support `scala 2.11`.
---
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/940#issuecomment-44965251
I think `breeze` has no big changes from `0.7` to `0.8.1`. Of course, this
conclusion has not been verified by a lot of testing.
---
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/952#issuecomment-44983923
@CrazyJvm
I think we should also modify
[SparkSubmitArguments.scala#L99](https://github.com/apache/spark/blob/master/core/src/main/scala/org/apache/spark/deploy
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/952#issuecomment-44985234
[SparkSubmitArguments.scala#L127](https://github.com/apache/spark/blob/master/core/src/main/scala/org/apache/spark/deploy/SparkSubmitArguments.scala#L127)
We can
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/952#issuecomment-45043437
```scala
// Global defaults. These should be keep to minimum to avoid confusing
behavior.
master = Option(master).getOrElse
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/929#issuecomment-45069637
@mengxr
By calling `RDD.checkpoint`, `ContextCleaner` can clean up the
shuffle data and reduce disk usage,
just as described in the table below
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/929#issuecomment-45070672
As for the above data: one iteration writes `160G` of shuffle data, so three
iterations will occupy `480G` of disk.
---
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/929#issuecomment-45071297
@mengxr
Since I only have three test servers, I need more time to test your ideas.
---
GitHub user witgo opened a pull request:
https://github.com/apache/spark/pull/969
[WIP] In yarn.ClientBase spark.yarn.dist.* do not work
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/witgo/spark yarn_ClientBase
Github user witgo commented on a diff in the pull request:
https://github.com/apache/spark/pull/921#discussion_r13425322
--- Diff:
yarn/stable/src/main/scala/org/apache/spark/deploy/yarn/ExecutorLauncher.scala
---
@@ -204,9 +204,17 @@ class ExecutorLauncher(args
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/969#issuecomment-45224988
The Spark configuration in `conf/spark-defaults.conf`:
```
spark.yarn.dist.archives /toona/conf
spark.executor.extraClassPath ./conf
Github user witgo commented on a diff in the pull request:
https://github.com/apache/spark/pull/894#discussion_r13473687
--- Diff:
yarn/common/src/main/scala/org/apache/spark/deploy/yarn/ClientBase.scala ---
@@ -65,6 +65,18 @@ trait ClientBase extends Logging {
val
Github user witgo commented on a diff in the pull request:
https://github.com/apache/spark/pull/921#discussion_r13474143
--- Diff:
yarn/stable/src/main/scala/org/apache/spark/deploy/yarn/ApplicationMaster.scala
---
@@ -252,16 +252,12 @@ class ApplicationMaster(args
Github user witgo closed the pull request at:
https://github.com/apache/spark/pull/379
---
GitHub user witgo opened a pull request:
https://github.com/apache/spark/pull/991
[WIP][SPARK-1477]: Add the lifecycle interface
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/witgo/spark SPARK-1477
GitHub user witgo opened a pull request:
https://github.com/apache/spark/pull/1022
SPARK-1719: spark.executor.extraLibraryPath isn't applied on yarn
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/witgo/spark SPARK-1719
Github user witgo closed the pull request at:
https://github.com/apache/spark/pull/1877
---
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/3050#issuecomment-65749267
@JoshRosen The code has been updated.
---
Github user witgo commented on a diff in the pull request:
https://github.com/apache/spark/pull/3619#discussion_r21377222
--- Diff: graphx/src/main/scala/org/apache/spark/graphx/Pregel.scala ---
@@ -139,6 +146,14 @@ object Pregel extends Logging {
// get to send messages
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/2631#issuecomment-65880798
@ankurdave
I have removed the Spark-core-related modifications.
---
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/2631#issuecomment-65887724
Jenkins, retest this please.
---
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/3050#issuecomment-66392860
OK, I'll try. But
[CoarseMesosSchedulerBackend.scala#L156](https://github.com/apache/spark/blob/master/core/src/main/scala/org/apache/spark/scheduler/cluster/mesos
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/3050#issuecomment-66399472
In my local test, it works.
---
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/3051#issuecomment-66557136
I'm sorry, I forgot to update this PR. The code has been updated.
---
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/3051#issuecomment-66563972
That seems to be an unrelated test failure.
---
GitHub user witgo opened a pull request:
https://github.com/apache/spark/pull/3677
[SPARK-4526][MLLIB] Gradient should add a batch computing interface.
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/witgo/spark SPARK-4526