Github user skyluc commented on the pull request:
https://github.com/apache/spark/pull/12613#issuecomment-215080708
Added comments, updated checks for cases when running in client mode,
removed sending back the hostname to the executor.
---
Github user skyluc commented on the pull request:
https://github.com/apache/spark/pull/12613#issuecomment-214336966
Are the test failures real, or due to flaky tests?
I tried to reproduce the failures locally, but `core/test` passes most of
the time.
---
GitHub user skyluc opened a pull request:
https://github.com/apache/spark/pull/12613
[SPARK-14849][CORE] Always set an address for the executor
As specified in
[SPARK-14849](https://issues.apache.org/jira/browse/SPARK-14849), the `address`
in the `NettyRpcEnv` for the executor
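For context, a minimal sketch of the idea (names are hypothetical, not the actual patch): make the RpcEnv fall back to a usable address when no server socket is bound, as happens for executors in client mode.

```scala
// Hypothetical sketch of the idea behind SPARK-14849, not the real patch:
// never leave the RpcEnv without an address. When no server socket was
// bound, fall back to an explicitly advertised host and port.
case class RpcAddress(host: String, port: Int)

class RpcEnvSketch(bound: Option[RpcAddress], advertised: RpcAddress) {
  // Prefer the bound server address, but always return something usable.
  def address: RpcAddress = bound.getOrElse(advertised)
}
```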
Github user skyluc commented on the pull request:
https://github.com/apache/spark/pull/11047#issuecomment-179741301
@andrewor14 yes, dynamic allocation works fine, but
`spark.dynamicAllocation.initialExecutors` is not used at start-up.
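For illustration, the configuration being discussed (values are arbitrary): with dynamic allocation enabled, the scheduler is expected to start with the configured initial number of executors instead of ramping up from the minimum.

```scala
import org.apache.spark.SparkConf

// Illustrative settings (arbitrary values): the issue is that on Mesos
// `initialExecutors` was ignored at start-up.
val conf = new SparkConf()
  .set("spark.dynamicAllocation.enabled", "true")
  .set("spark.dynamicAllocation.minExecutors", "1")
  .set("spark.dynamicAllocation.initialExecutors", "4")
```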
---
Github user skyluc commented on a diff in the pull request:
https://github.com/apache/spark/pull/11047#discussion_r51846148
--- Diff: docs/running-on-mesos.md ---
@@ -246,15 +246,15 @@ In either case, HDFS runs separately from Hadoop
MapReduce, without being schedu
GitHub user skyluc opened a pull request:
https://github.com/apache/spark/pull/11047
[SPARK-13002][Mesos] Send initial request of executors for dyn allocation
Fix for [SPARK-13002](https://issues.apache.org/jira/browse/SPARK-13002)
about the initial number of executors when running
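As a self-contained sketch of the derivation (this mirrors how Spark resolves the initial count from the related settings; it is not the patch itself):

```scala
// Minimal sketch: the initial executor count is the maximum of the
// configured minimum, the configured initial value, and any fixed
// instance count. Missing settings default to zero.
def initialExecutors(conf: Map[String, String]): Int = {
  def get(key: String): Int = conf.get(key).map(_.toInt).getOrElse(0)
  Seq(
    get("spark.dynamicAllocation.minExecutors"),
    get("spark.dynamicAllocation.initialExecutors"),
    get("spark.executor.instances")
  ).max
}
```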
Github user skyluc commented on the pull request:
https://github.com/apache/spark/pull/10921#issuecomment-175755038
Tested with spark-shell, with spark-submit in client and cluster mode, and
embedded in an application.
It worked: the exception is thrown but doesn't kill the application
Github user skyluc commented on a diff in the pull request:
https://github.com/apache/spark/pull/10701#discussion_r49733021
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/MesosClusterScheduler.scala
---
@@ -318,7 +319,7 @@ private[spark] class
Github user skyluc commented on the pull request:
https://github.com/apache/spark/pull/10740#issuecomment-171337018
@srowen, I removed the extra line.
---
GitHub user skyluc opened a pull request:
https://github.com/apache/spark/pull/10740
[SPARK-12805][Mesos] Fixes documentation on Mesos run modes
The default run mode has changed, but the documentation didn't fully reflect the
change.
Github user skyluc commented on the pull request:
https://github.com/apache/spark/pull/10329#issuecomment-165459198
Fixed the 'style', added a comment, and switched to `filterKeys`.
---
Github user skyluc commented on the pull request:
https://github.com/apache/spark/pull/10332#issuecomment-165204770
Code LGTM. Unfortunately, I won't be able to try it for a couple of hours.
---
GitHub user skyluc opened a pull request:
https://github.com/apache/spark/pull/10329
[SPARK-12345] [CORE] Do not send SPARK_HOME through Spark submit REST
interface
It is usually an invalid location on the remote machine executing the job.
It is picked up by the Mesos support
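A minimal sketch of the filtering (an earlier comment on this PR mentions switching to `filterKeys`); the variable name is illustrative:

```scala
// Illustrative only: drop SPARK_HOME from the environment variables that
// are forwarded through the REST submission request, since the local path
// is usually invalid on the remote machine running the job.
val forwardedEnv = sys.env.filterKeys(_ != "SPARK_HOME")
```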
Github user skyluc commented on a diff in the pull request:
https://github.com/apache/spark/pull/8433#discussion_r38743996
--- Diff: core/src/main/scala/org/apache/spark/api/python/PythonRDD.scala
---
@@ -24,6 +24,7 @@ import java.util.{Collections, ArrayList => JArrayList,
L
Github user skyluc commented on the pull request:
https://github.com/apache/spark/pull/8433#issuecomment-137010371
Updated the changes for `@transient`. The annotation has been removed where
it was not needed, and the parameter has been changed to `@transient private val`
where needed
Github user skyluc commented on the pull request:
https://github.com/apache/spark/pull/8433#issuecomment-135661207
It indicates which of the elements generated for the class parameter the
annotation applies to.
There are four possible elements generated for a class parameter
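For readers following along, a short illustration of the expansion being described (not code from the PR):

```scala
// A class parameter can expand into up to four elements: the constructor
// parameter itself, a field, a getter (for a `val`), and a setter (for a
// `var`). Writing `@transient private val` guarantees a field exists and
// puts the annotation on it, so Java serialization skips that field.
class Holder(@transient private val scratch: StringBuilder) extends Serializable
```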
Github user skyluc commented on a diff in the pull request:
https://github.com/apache/spark/pull/8433#discussion_r37965919
--- Diff:
core/src/main/scala/org/apache/spark/util/logging/RollingFileAppender.scala ---
@@ -78,33 +78,31 @@ private[spark] class RollingFileAppender
Github user skyluc commented on a diff in the pull request:
https://github.com/apache/spark/pull/8433#discussion_r37972066
--- Diff: core/src/main/scala/org/apache/spark/Accumulators.scala ---
@@ -47,7 +48,7 @@ import org.apache.spark.util.Utils
* @tparam T partial data
GitHub user skyluc opened a pull request:
https://github.com/apache/spark/pull/8433
[SPARK-10227] fatal warnings with sbt on Scala 2.11
The bulk of the changes concern the `@transient` annotation on class parameters.
Often the compiler doesn't generate a field for these parameters, so
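A hedged illustration of the warning in question (class names are made up): with fatal warnings on, annotating a parameter that never becomes a field fails the build.

```scala
// With `-Xlint` and fatal warnings, the first class can fail the build on
// Scala 2.11 with "no valid targets for annotation", because no field is
// generated for an otherwise-unused constructor parameter. The second
// form forces a field, giving the annotation something to attach to.
class Risky(@transient config: java.util.Properties)
class Safe(@transient private val config: java.util.Properties) extends Serializable
```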
Github user skyluc commented on the pull request:
https://github.com/apache/spark/pull/5966#issuecomment-99787306
Verified that the patch fixes the compilation problem.
A few comments:
* The method `withScope` with 3 parameters seems to be used only in the
tests
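For context, the general shape of a `withScope`-style helper (purely illustrative; Spark's actual signatures differ):

```scala
// Hypothetical illustration of the scoping pattern: run a body inside a
// named scope and restore the previous scope afterwards, even on failure.
object Scopes {
  private val current = new ThreadLocal[Option[String]] {
    override def initialValue: Option[String] = None
  }

  def withScope[T](name: String)(body: => T): T = {
    val previous = current.get()
    current.set(Some(name))
    try body
    finally current.set(previous)
  }
}
```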