Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/2326#issuecomment-55064478
Jenkins, retest this please.
---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/1983#issuecomment-55094784
@allwefantasy
My test corpus is `196558` documents and `7897767` words.
The number of iterations is `100`.
How many words in total do your 240,000 documents have?
You can post
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/1983#issuecomment-55095890
@allwefantasy
I think the code here, `Document(parts(0).toInt,(0 until
wordInfo.value.size).map(k => values.getOrElse(k, 0)).toArray)`,
is a bit problematic..
It should
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/1983#issuecomment-55096559
@srowen I will try to translate the comments into English
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/2346#issuecomment-55132817
The relevant PR: #991
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/1330#issuecomment-55210927
The code has been updated.
Github user witgo commented on a diff in the pull request:
https://github.com/apache/spark/pull/1330#discussion_r17401380
--- Diff: pom.xml ---
@@ -839,7 +839,6 @@
<arg>-unchecked</arg>
<arg>-deprecation</arg>
<arg>-feature</arg>
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/1983#issuecomment-55223673
@allwefantasy Spark can adjust the number of tasks an executor runs concurrently.
If you want each executor to be able to run 17 tasks at the same time,
you can, in `conf/spark
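The truncated advice above presumably points at Spark's configuration files. As a hedged sketch (property names are the standard Spark ones, values purely illustrative, not from this thread): concurrent tasks per executor is roughly `spark.executor.cores / spark.task.cpus`.

```properties
# conf/spark-defaults.conf -- illustrative values
spark.executor.cores  17
spark.task.cpus       1
```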
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/1330#issuecomment-55236678
No postfix warnings in 179ba61.
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/1983#issuecomment-55280269
@allwefantasy The current code creates too many TopicModel instances
during the iterative computation.
I am now trying to fix this.
Thanks for your feedback.
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/1482#issuecomment-61363413
Jenkins, retest this please.
GitHub user witgo opened a pull request:
https://github.com/apache/spark/pull/3050
Spark shell class path is not correctly set if
spark.driver.extraClassPath is set in defaults.conf
You can merge this pull request into a Git repository by running:
$ git pull https
GitHub user witgo opened a pull request:
https://github.com/apache/spark/pull/3051
[SPARK-4161] Spark shell class path is not correctly set if
spark.driver.extraClassPath is set in defaults.conf (branch-1.1 backport)
You can merge this pull request into a Git repository
GitHub user witgo opened a pull request:
https://github.com/apache/spark/pull/3069
[Minor] Minor bug fixes in bin/run-example
`./sbt/sbt clean assembly` =>
`examples/target/scala-2.10/spark-examples_2-10-1.2.0-SNAPSHOT-hadoop1.0.4.jar`
You can merge this pull request
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/3069#issuecomment-61449081
The file generated by `mvn package` looks like
`spark-examples-1.2.0-SNAPSHOT-*`.
The file generated by `./sbt/sbt clean assembly` looks like
`spark-examples_2-10-1.2.0
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/3069#issuecomment-61463811
The current solution is simple to implement, and other sources already use
it, e.g.:
[compute-classpath.cmd#L52](https://github.com/apache/spark/blob/master/bin
Github user witgo commented on a diff in the pull request:
https://github.com/apache/spark/pull/3082#discussion_r19935139
--- Diff: make-distribution.sh ---
@@ -181,6 +181,9 @@ echo Spark $VERSION$GITREVSTRING built for Hadoop
$SPARK_HADOOP_VERSION $DI
# Copy jars
cp
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/1290#issuecomment-62085010
We should use matrices to compute the forward propagation and back
propagation; see
http://deeplearning.stanford.edu/wiki/index.php/Neural_Network_Vectorization
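As a sketch of what the vectorized formulation means (plain Scala arrays stand in for a matrix library; this is not the PR's code):

```scala
object ForwardPass {
  // Logistic activation applied element-wise.
  def sigmoid(z: Double): Double = 1.0 / (1.0 + math.exp(-z))

  // One layer's forward pass for a whole mini-batch at once:
  // A = sigmoid(W * X + b), where W is (out x in) and X is (in x batch).
  def forward(w: Array[Array[Double]],
              x: Array[Array[Double]],
              b: Array[Double]): Array[Array[Double]] = {
    val in = x.length
    val batch = x(0).length
    Array.tabulate(w.length, batch) { (i, j) =>
      var z = b(i)
      var k = 0
      while (k < in) { z += w(i)(k) * x(k)(j); k += 1 }
      sigmoid(z)
    }
  }
}
```

Packing all the batch examples into the columns of X is what turns the per-example loop into a single matrix product.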
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/1290#issuecomment-62089562
We cannot use the existing Gradient classes; the whole iterative process
should be completed as matrix calculations. Moreover, we can borrow the ALS
algorithm's design, cut
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/1290#issuecomment-62254979
I agree with what @debasish83 said. We should find a suitable solution for
distributed storage of the weight matrix.
GitHub user witgo opened a pull request:
https://github.com/apache/spark/pull/3222
[WIP][SPARK-4251][MLLIB]Add Restricted Boltzmann machine(RBM) algorithm to
MLlib
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/witgo/spark rbm
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/3159#issuecomment-62700413
@pwendell @ScrapCodes
This patch has a bug:
`./make-distribution.sh -Dhadoop.version=2.3.0-cdh5.0.1
-Dyarn.version=2.3.0-cdh5.0.1 -Phadoop-2.3 -Pyarn
GitHub user witgo opened a pull request:
https://github.com/apache/spark/pull/3228
[HOTFIX] Fix Maven build missing some classes
The bug was caused by #3159
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/witgo/spark
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/3228#issuecomment-62714530
cc @pwendell @ScrapCodes @srowen
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/3228#issuecomment-62849650
How about the following?
```xml
<profile>
  <id>scala-2.10</id>
  <activation>
    <property>
      <name>scala.version</name>
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/3228#issuecomment-62851150
Yes, it seems to work. It seems that the user must explicitly set
`scala.version`.
Github user witgo closed the pull request at:
https://github.com/apache/spark/pull/3228
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/3222#issuecomment-63030582
Jenkins, retest this please.
GitHub user witgo opened a pull request:
https://github.com/apache/spark/pull/3281
[SPARK-4422][MLLIB]In some cases, Vectors.fromBreeze get wrong results.
cc @mengxr
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/witgo/spark
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/3222#issuecomment-63172769
Sorry, this patch is still a work in progress; I will add annotations and
documentation later.
BTW, my English is poor; we can communicate by email. This is more
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/3277#issuecomment-63203941
[package.scala#L47](https://github.com/apache/spark/blob/master/core/src/main/scala/org/apache/spark/package.scala#L47)
should be modified
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/3222#issuecomment-63222980
Now the neural-net model is stored in a matrix. The model can support a
1000 * 500 * 100 three-layer neural network and a 10 * 1000 two-layer neural
network
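A quick arithmetic check on those sizes (my own sketch, not code from the PR): a fully-connected 1000-500-100 network stores 1000x500 + 500x100 = 550,000 weights, which indeed fits in a single modest matrix-backed model.

```scala
object ParamCount {
  // Number of weights in a fully-connected network with the given layer sizes.
  def weights(layers: Seq[Int]): Int =
    layers.sliding(2).map { case Seq(a, b) => a * b }.sum

  // One bias per non-input unit.
  def biases(layers: Seq[Int]): Int = layers.tail.sum
}
```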
Github user witgo commented on a diff in the pull request:
https://github.com/apache/spark/pull/3222#discussion_r20410641
--- Diff:
mllib/src/main/scala/org/apache/spark/mllib/neuralNetwork/DBN.scala ---
@@ -0,0 +1,67 @@
+/*
+ * Licensed to the Apache Software Foundation
Github user witgo commented on a diff in the pull request:
https://github.com/apache/spark/pull/3222#discussion_r20575084
--- Diff:
mllib/src/main/scala/org/apache/spark/mllib/neuralNetwork/DBN.scala ---
@@ -0,0 +1,67 @@
+/*
+ * Licensed to the Apache Software Foundation
Github user witgo commented on a diff in the pull request:
https://github.com/apache/spark/pull/1290#discussion_r20589805
--- Diff:
mllib/src/main/scala/org/apache/spark/mllib/ann/ArtificialNeuralNetwork.scala
---
@@ -0,0 +1,528 @@
+/*
+ * Licensed to the Apache Software
GitHub user witgo opened a pull request:
https://github.com/apache/spark/pull/3399
[SPARK-4526][MLLIB] GradientDescent gets a wrong gradient value according to
the gradient formula.
This is caused by the miniBatchSize parameter: the number of elements
`RDD.sample` returns is not fixed.
cc
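The claim that `RDD.sample` does not return a fixed number of elements can be illustrated without Spark (a hypothetical simulation of per-element Bernoulli sampling, which is how non-replacement sampling behaves):

```scala
import scala.util.Random

object SampleSizeSketch {
  // Each element is kept independently with probability `fraction`,
  // so the sample size varies around n * fraction rather than being fixed.
  def sampleCount(n: Int, fraction: Double, seed: Long): Int = {
    val rng = new Random(seed)
    (0 until n).count(_ => rng.nextDouble() < fraction)
  }
}
```

Dividing the summed gradient by the expected size (n * fraction) instead of the actual sample count would therefore bias the averaged gradient, which appears to be the issue described above.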
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/3399#issuecomment-63934659
AmplabJenkins retest this please.
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/3399#issuecomment-63941966
@mengxr I'm not sure. In my test of #3222, the convergence rate of SGD was
less than expected; it may be affected by this issue.
Github user witgo commented on a diff in the pull request:
https://github.com/apache/spark/pull/3399#discussion_r20754059
--- Diff:
mllib/src/main/scala/org/apache/spark/mllib/optimization/GradientDescent.scala
---
@@ -185,25 +184,29 @@ object GradientDescent extends Logging
Github user witgo commented on a diff in the pull request:
https://github.com/apache/spark/pull/3222#discussion_r20754174
--- Diff:
mllib/src/main/scala/org/apache/spark/mllib/neuralNetwork/StackedRBM.scala ---
@@ -0,0 +1,149 @@
+/*
+ * Licensed to the Apache Software
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/3399#issuecomment-64381732
@mengxr The title has been updated.
Github user witgo commented on a diff in the pull request:
https://github.com/apache/spark/pull/480#discussion_r11933015
--- Diff: pom.xml ---
@@ -506,7 +508,45 @@
<dependency>
  <groupId>org.apache.avro</groupId>
  <artifactId>avro</artifactId>
Github user witgo commented on a diff in the pull request:
https://github.com/apache/spark/pull/480#discussion_r11933105
--- Diff: pom.xml ---
@@ -793,6 +833,17 @@
</build>
<profiles>
+<!-- SPARK-1121: Adds an explicit dependency on Avro to work around
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/502#issuecomment-41237973
@berngp @pwendell ,
Can we delete `yarn.version` and use only `hadoop.version`? Would that
cause any problems?
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/502#issuecomment-41239453
@berngp
Most people use the same version of HDFS and YARN.
We could do it like this:
```xml
<hadoop.version>1.0.4</hadoop.version>
<yarn.version>
GitHub user witgo opened a pull request:
https://github.com/apache/spark/pull/547
Fix SPARK-1609: Executor fails to start when Command.extraJavaOptions
contains multiple Java options
You can merge this pull request into a Git repository by running:
$ git pull https
Github user witgo closed the pull request at:
https://github.com/apache/spark/pull/510
Github user witgo commented on a diff in the pull request:
https://github.com/apache/spark/pull/547#discussion_r12023196
--- Diff:
core/src/main/scala/org/apache/spark/deploy/worker/CommandUtils.scala ---
@@ -48,7 +48,13 @@ object CommandUtils extends Logging {
def
GitHub user witgo opened a pull request:
https://github.com/apache/spark/pull/569
Fix SPARK-1629: Spark should inline use of commons-lang
`SystemUtils.IS_OS_WINDOWS`
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/witgo
Github user witgo commented on a diff in the pull request:
https://github.com/apache/spark/pull/569#discussion_r12027354
--- Diff: core/src/main/scala/org/apache/spark/util/Utils.scala ---
@@ -1056,4 +1055,11 @@ private[spark] object Utils extends Logging {
def
GitHub user witgo opened a pull request:
https://github.com/apache/spark/pull/581
Improvements to spark-submit usage
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/witgo/spark SPARK-1659
Alternatively you can review and apply
Github user witgo commented on a diff in the pull request:
https://github.com/apache/spark/pull/423#discussion_r12080480
--- Diff: core/src/main/scala/org/apache/spark/api/java/JavaRDDLike.scala
---
@@ -263,6 +263,26 @@ trait JavaRDDLike[T, This <: JavaRDDLike[T, This]]
extends
Github user witgo commented on a diff in the pull request:
https://github.com/apache/spark/pull/423#discussion_r12081268
--- Diff: core/src/main/scala/org/apache/spark/api/java/JavaRDDLike.scala
---
@@ -263,6 +263,26 @@ trait JavaRDDLike[T, This <: JavaRDDLike[T, This]]
extends
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/480#issuecomment-41646101
Cool!
Github user witgo commented on a diff in the pull request:
https://github.com/apache/spark/pull/423#discussion_r12081885
--- Diff: core/src/main/scala/org/apache/spark/api/java/JavaRDDLike.scala
---
@@ -263,6 +263,26 @@ trait JavaRDDLike[T, This <: JavaRDDLike[T, This]]
extends
Github user witgo commented on a diff in the pull request:
https://github.com/apache/spark/pull/423#discussion_r12082137
--- Diff: core/src/main/scala/org/apache/spark/api/java/JavaRDDLike.scala
---
@@ -263,6 +263,26 @@ trait JavaRDDLike[T, This <: JavaRDDLike[T, This]]
extends
GitHub user witgo opened a pull request:
https://github.com/apache/spark/pull/590
Improved build configuration II
@berngp
I merged your code into this PR.
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/witgo/spark
GitHub user witgo opened a pull request:
https://github.com/apache/spark/pull/598
[WIP] Improved build configuration III
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/witgo/spark sql-pom
Alternatively you can review and apply
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/598#issuecomment-41810943
@pwendell
Now I have a very radical idea: remove sbt support. What problems would
that cause?
Github user witgo commented on a diff in the pull request:
https://github.com/apache/spark/pull/590#discussion_r12181447
--- Diff: project/SparkBuild.scala ---
@@ -55,7 +55,7 @@ object SparkBuild extends Build {
val SCALAC_JVM_VERSION = "jvm-1.6"
val JAVAC_JVM_VERSION
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/590#issuecomment-41890856
@pwendell
I have removed the Travis changes.
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/610#issuecomment-41911017
There is another solution #598
GitHub user witgo opened a pull request:
https://github.com/apache/spark/pull/611
SPARK-1695: java8-tests compiler error: package com.google.common.collections
does not exist
You can merge this pull request into a Git repository by running:
$ git pull https://github.com
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/590#issuecomment-42094359
@tgravescs
I tested many times; these all pass:
`mvn clean package -DskipTests -Pyarn-alpha -Dhadoop.version=0.23.7
-Phadoop-0.23`
`mvn clean package
GitHub user witgo opened a pull request:
https://github.com/apache/spark/pull/624
SPARK-1699: Make Python relatively independent from the core, as a subproject
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/witgo/spark python-api
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/590#issuecomment-42098647
@pwendell How about [this
solution](https://github.com/witgo/spark/commit/0ed124dc0e453a0a59d3c387651be970859a9a0a)?
It only excludes the servlet-api 2.5 dependency
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/625#issuecomment-42102176
[The PR 590](https://github.com/apache/spark/pull/590) contains relevant
changes
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/590#issuecomment-42102979
Hi @pwendell, @srowen
All the changes are very small, and [this
solution](https://github.com/witgo/spark/commit/0ed124dc0e453a0a59d3c387651be970859a9a0a)
only works
GitHub user witgo opened a pull request:
https://github.com/apache/spark/pull/626
The default version of yarn is equal to the hadoop version
This is a part of [PR 590](https://github.com/apache/spark/pull/590)
You can merge this pull request into a Git repository by running
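One way "the default version of yarn is equal to the hadoop version" could be expressed in `pom.xml` (a sketch using a Maven property reference; the values are illustrative, not this PR's actual diff):

```xml
<properties>
  <hadoop.version>1.0.4</hadoop.version>
  <!-- Defaults to hadoop.version; override with -Dyarn.version=... -->
  <yarn.version>${hadoop.version}</yarn.version>
</properties>
```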
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/624#issuecomment-42107623
The branch is wrong; temporarily closing this.
Github user witgo closed the pull request at:
https://github.com/apache/spark/pull/624
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/468#issuecomment-42109604
@srowen Not everyone uses the same version of HDFS and YARN.
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/468#issuecomment-42110042
@srowen Related discussion in [PR
502](https://github.com/apache/spark/pull/502).
@berngp Can you explain the reason for not using the same version of HDFS
and YARN
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/590#issuecomment-42120300
@pwendell
I did not notice this; it has been modified.
Github user witgo commented on a diff in the pull request:
https://github.com/apache/spark/pull/626#discussion_r12259307
--- Diff: pom.xml ---
@@ -558,65 +560,8 @@
<artifactId>jets3t</artifactId>
<version>0.7.1</version>
</dependency>
Github user witgo commented on a diff in the pull request:
https://github.com/apache/spark/pull/626#discussion_r12259340
--- Diff: pom.xml ---
@@ -558,65 +560,8 @@
<artifactId>jets3t</artifactId>
<version>0.7.1</version>
</dependency>
Github user witgo commented on a diff in the pull request:
https://github.com/apache/spark/pull/626#discussion_r12259611
--- Diff: pom.xml ---
@@ -558,65 +560,8 @@
<artifactId>jets3t</artifactId>
<version>0.7.1</version>
</dependency>
Github user witgo commented on a diff in the pull request:
https://github.com/apache/spark/pull/626#discussion_r12259715
--- Diff: pom.xml ---
@@ -558,65 +560,8 @@
<artifactId>jets3t</artifactId>
<version>0.7.1</version>
</dependency>
GitHub user witgo opened a pull request:
https://github.com/apache/spark/pull/628
SPARK-1693: Most of the tests throw a java.lang.SecurityException when
spark built for hadoop 2.3.0, 2.4.0
You can merge this pull request into a Git repository by running:
$ git pull
Github user witgo commented on a diff in the pull request:
https://github.com/apache/spark/pull/629#discussion_r12261160
--- Diff: core/pom.xml ---
@@ -38,12 +38,6 @@
<dependency>
  <groupId>net.java.dev.jets3t</groupId>
  <artifactId>jets3t</artifactId>
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/629#issuecomment-42132419
Looks good to me.
GitHub user witgo opened a pull request:
https://github.com/apache/spark/pull/631
SPARK-1699: Make Python relatively independent, becoming a subproject
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/witgo/spark SPARK-1699
GitHub user witgo opened a pull request:
https://github.com/apache/spark/pull/646
Add missing description to spark-env.sh.template
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/witgo/spark spark_env
Alternatively you can
GitHub user witgo opened a pull request:
https://github.com/apache/spark/pull/665
SPARK-1734: spark-submit throws an exception: Exception in thread main
java.lang.ClassNotFoundException:
org.apache.spark.broadcast.TorrentBroadcastFactory
You can merge this pull request
Github user witgo closed the pull request at:
https://github.com/apache/spark/pull/631
Github user witgo commented on a diff in the pull request:
https://github.com/apache/spark/pull/677#discussion_r12364638
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/TaskSetManager.scala ---
@@ -414,6 +415,14 @@ private[spark] class TaskSetManager(
// we
Github user witgo commented on a diff in the pull request:
https://github.com/apache/spark/pull/677#discussion_r12363925
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/TaskSetManager.scala ---
@@ -414,6 +415,14 @@ private[spark] class TaskSetManager(
// we
Github user witgo closed the pull request at:
https://github.com/apache/spark/pull/414
Github user witgo commented on a diff in the pull request:
https://github.com/apache/spark/pull/677#discussion_r12364078
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/TaskSetManager.scala ---
@@ -414,6 +415,14 @@ private[spark] class TaskSetManager(
// we
GitHub user witgo opened a pull request:
https://github.com/apache/spark/pull/677
SPARK-1712: TaskDescription instance is too big causes Spark to hang
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/witgo/spark SPARK-1712
GitHub user witgo opened a pull request:
https://github.com/apache/spark/pull/713
[WIP] update scalatest to version 2.1.5
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/witgo/spark scalatest
Alternatively you can review
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/677#issuecomment-42442064
@pwendell
How about [this
solution](https://github.com/witgo/spark/compare/SPARK-1712_new)?
GitHub user witgo opened a pull request:
https://github.com/apache/spark/pull/712
fix building spark with maven documentation
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/witgo/spark building-with-maven
Alternatively you can
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/714#issuecomment-42729492
【SPARK-1779】 = [SPARK-1779]
Github user witgo commented on the pull request:
https://github.com/apache/spark/pull/688#issuecomment-42730561
@pwendell
It has been updated.
Github user witgo commented on a diff in the pull request:
https://github.com/apache/spark/pull/646#discussion_r12507330
--- Diff: conf/spark-env.sh.template ---
@@ -38,6 +38,7 @@
# - SPARK_WORKER_INSTANCES, to set the number of worker processes per node
GitHub user witgo opened a pull request:
https://github.com/apache/spark/pull/728
Remove outdated runtime information about Scala home
You can merge this pull request into a Git repository by running:
$ git pull https://github.com/witgo/spark scala_home
Alternatively you can
Github user witgo commented on a diff in the pull request:
https://github.com/apache/spark/pull/646#discussion_r12507598
--- Diff: conf/spark-env.sh.template ---
@@ -38,6 +38,7 @@
# - SPARK_WORKER_INSTANCES, to set the number of worker processes per node
Github user witgo closed the pull request at:
https://github.com/apache/spark/pull/646
Github user witgo commented on a diff in the pull request:
https://github.com/apache/spark/pull/646#discussion_r12507619
--- Diff: conf/spark-env.sh.template ---
@@ -38,6 +38,7 @@
# - SPARK_WORKER_INSTANCES, to set the number of worker processes per node