Github user witgo commented on a diff in the pull request:
https://github.com/apache/spark/pull/15505#discussion_r96106896
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/cluster/CoarseGrainedSchedulerBackend.scala
---
@@ -244,32 +245,45 @@ class
Github user witgo commented on a diff in the pull request:
https://github.com/apache/spark/pull/15505#discussion_r96108161
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/cluster/CoarseGrainedSchedulerBackend.scala
---
@@ -244,32 +245,45 @@ class
Github user witgo commented on a diff in the pull request:
https://github.com/apache/spark/pull/15505#discussion_r95492638
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/TaskDescription.scala ---
@@ -52,7 +55,36 @@ private[spark] class TaskDescription(
val
Github user witgo commented on a diff in the pull request:
https://github.com/apache/spark/pull/15505#discussion_r95298151
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/TaskDescription.scala ---
@@ -52,7 +55,36 @@ private[spark] class TaskDescription(
val
Github user witgo commented on a diff in the pull request:
https://github.com/apache/spark/pull/15505#discussion_r95933009
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/TaskDescription.scala ---
@@ -52,7 +55,43 @@ private[spark] class TaskDescription(
val
Github user witgo commented on a diff in the pull request:
https://github.com/apache/spark/pull/15505#discussion_r95753220
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/TaskDescription.scala ---
@@ -52,7 +55,43 @@ private[spark] class TaskDescription(
val
Github user witgo commented on a diff in the pull request:
https://github.com/apache/spark/pull/15505#discussion_r95304052
--- Diff: core/src/main/scala/org/apache/spark/util/Utils.scala ---
@@ -149,7 +149,12 @@ private[spark] object Utils extends Logging
Github user witgo commented on the issue:
https://github.com/apache/spark/pull/15505
@squito In local mode, performance is relatively less important; we only need to guarantee that there is no performance regression there.
---
If your project is set up for it, you can reply
Github user witgo commented on a diff in the pull request:
https://github.com/apache/spark/pull/15505#discussion_r95305125
--- Diff:
core/src/test/scala/org/apache/spark/scheduler/TaskSetManagerSuite.scala ---
@@ -592,47 +579,6 @@ class TaskSetManagerSuite extends SparkFunSuite
Github user witgo commented on the issue:
https://github.com/apache/spark/pull/15505
@kayousterhout I agree with you and will do as you suggest.
@squito That's a good idea and worth a try. We can write a prototype to
verify it.
---
Github user witgo commented on the issue:
https://github.com/apache/spark/pull/15505
@kayousterhout
Okay, I'll do the code revision this weekend.
---
Github user witgo commented on the issue:
https://github.com/apache/spark/pull/15505
@kayousterhout OK, Done
---
Github user witgo commented on a diff in the pull request:
https://github.com/apache/spark/pull/15505#discussion_r93558564
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/local/LocalSchedulerBackend.scala
---
@@ -59,6 +62,12 @@ private[spark] class LocalEndpoint
Github user witgo commented on a diff in the pull request:
https://github.com/apache/spark/pull/15505#discussion_r93558472
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/TaskDescription.scala ---
@@ -17,27 +17,179 @@
package org.apache.spark.scheduler
Github user witgo commented on a diff in the pull request:
https://github.com/apache/spark/pull/15505#discussion_r93558527
--- Diff:
core/src/main/scala/org/apache/spark/scheduler/TaskDescription.scala ---
@@ -17,27 +17,179 @@
package org.apache.spark.scheduler
Github user witgo commented on a diff in the pull request:
https://github.com/apache/spark/pull/15505#discussion_r93558450
--- Diff: core/src/main/scala/org/apache/spark/scheduler/DAGScheduler.scala
---
@@ -993,6 +993,12 @@ class DAGScheduler
Github user witgo commented on the issue:
https://github.com/apache/spark/pull/15505
@kayousterhout @squito
I think Kay's approach is a good idea.
We can first merge #16053 and the SPARK-18890-related code (including the
multi-threaded serialization of TaskDescription) to stay
Github user witgo commented on a diff in the pull request:
https://github.com/apache/spark/pull/15505#discussion_r93558401
--- Diff: core/src/main/scala/org/apache/spark/executor/Executor.scala ---
@@ -136,14 +136,10 @@ private[spark] class Executor(
startDriverHeartbeater
Github user witgo commented on a diff in the pull request:
https://github.com/apache/spark/pull/15505#discussion_r93558431
--- Diff: core/src/main/scala/org/apache/spark/executor/Executor.scala ---
@@ -232,6 +225,13 @@ private[spark] class Executor
Github user witgo commented on a diff in the pull request:
https://github.com/apache/spark/pull/17329#discussion_r107841105
--- Diff:
common/network-common/src/main/java/org/apache/spark/network/buffer/FileSegmentManagedBuffer.java
---
@@ -37,13 +37,24 @@
* A {@link
Github user witgo commented on a diff in the pull request:
https://github.com/apache/spark/pull/17329#discussion_r108027203
--- Diff:
common/network-common/src/main/java/org/apache/spark/network/buffer/FileSegmentManagedBuffer.java
---
@@ -37,13 +37,24 @@
* A {@link
Github user witgo commented on a diff in the pull request:
https://github.com/apache/spark/pull/17329#discussion_r108032378
--- Diff:
common/network-common/src/main/java/org/apache/spark/network/buffer/FileSegmentManagedBuffer.java
---
@@ -37,13 +37,24 @@
* A {@link
Github user witgo closed the pull request at:
https://github.com/apache/spark/pull/17329
---
GitHub user witgo opened a pull request:
https://github.com/apache/spark/pull/17480
SPARK-20079: Re-registration of AM hangs Spark cluster in yarn-client mode.
When tasks need to be scheduled, `ExecutorAllocationManager`
instances do not reset the `initializing` field
Github user witgo commented on a diff in the pull request:
https://github.com/apache/spark/pull/17329#discussion_r107706851
--- Diff:
common/network-common/src/main/java/org/apache/spark/network/buffer/FileSegmentManagedBuffer.java
---
@@ -37,13 +37,24 @@
* A {@link
Github user witgo commented on a diff in the pull request:
https://github.com/apache/spark/pull/17329#discussion_r107572297
--- Diff:
common/network-common/src/main/java/org/apache/spark/network/buffer/FileSegmentManagedBuffer.java
---
@@ -37,13 +37,24 @@
* A {@link
Github user witgo commented on a diff in the pull request:
https://github.com/apache/spark/pull/17329#discussion_r108049460
--- Diff:
common/network-common/src/main/java/org/apache/spark/network/buffer/FileSegmentManagedBuffer.java
---
@@ -37,13 +37,24 @@
* A {@link
Github user witgo commented on a diff in the pull request:
https://github.com/apache/spark/pull/17329#discussion_r106781598
--- Diff:
common/network-common/src/main/java/org/apache/spark/network/buffer/FileSegmentManagedBuffer.java
---
@@ -37,13 +37,24 @@
* A {@link
Github user witgo commented on the issue:
https://github.com/apache/spark/pull/17329
```java
public class HadoopConfigProvider extends ConfigProvider {
  private final Configuration conf;

  public HadoopConfigProvider(Configuration conf) {
    this.conf = conf;
  }
GitHub user witgo opened a pull request:
https://github.com/apache/spark/pull/17329
[SPARK-19991] FileSegmentManagedBuffer performance improvement
## What changes were proposed in this pull request?
When we do
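The PR body is truncated here, but for context: `FileSegmentManagedBuffer` wraps a region of a file described by a (file, offset, length) triple and exposes it as a buffer. The sketch below is a hypothetical, simplified stand-in (the class name `FileSegmentReader` and its shape are illustrative assumptions, not Spark's actual implementation) showing what reading such a segment into a `ByteBuffer` involves:

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

// Hypothetical, simplified illustration of a file-segment buffer: it exposes
// a (file, offset, length) region of a file as a ByteBuffer. This is NOT the
// actual FileSegmentManagedBuffer implementation from Spark.
public class FileSegmentReader {
  private final Path file;
  private final long offset;
  private final long length;

  public FileSegmentReader(Path file, long offset, long length) {
    this.file = file;
    this.offset = offset;
    this.length = length;
  }

  // Read exactly `length` bytes starting at `offset` into a heap buffer.
  public ByteBuffer toByteBuffer() throws IOException {
    try (FileChannel channel = FileChannel.open(file, StandardOpenOption.READ)) {
      ByteBuffer buf = ByteBuffer.allocate((int) length);
      channel.position(offset);
      while (buf.hasRemaining()) {
        if (channel.read(buf) < 0) {
          throw new IOException("Unexpected EOF reading segment of " + file);
        }
      }
      buf.flip();
      return buf;
    }
  }
}
```

Opening the channel per read and copying into a heap buffer, as above, is exactly the kind of per-call cost a performance change in this area would target (e.g. via memory-mapping or caching configuration lookups).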
Github user witgo commented on a diff in the pull request:
https://github.com/apache/spark/pull/17480#discussion_r109575470
--- Diff:
core/src/main/scala/org/apache/spark/ExecutorAllocationManager.scala ---
@@ -249,7 +249,9 @@ private[spark] class ExecutorAllocationManager
Github user witgo commented on a diff in the pull request:
https://github.com/apache/spark/pull/17480#discussion_r110796578
--- Diff:
core/src/main/scala/org/apache/spark/ExecutorAllocationManager.scala ---
@@ -249,7 +249,14 @@ private[spark] class ExecutorAllocationManager
Github user witgo commented on the issue:
https://github.com/apache/spark/pull/17480
@jerryshao Yes.
---
Github user witgo commented on a diff in the pull request:
https://github.com/apache/spark/pull/17480#discussion_r53390
--- Diff:
core/src/main/scala/org/apache/spark/ExecutorAllocationManager.scala ---
@@ -249,7 +249,14 @@ private[spark] class ExecutorAllocationManager
Github user witgo commented on a diff in the pull request:
https://github.com/apache/spark/pull/17480#discussion_r110361779
--- Diff:
core/src/main/scala/org/apache/spark/ExecutorAllocationManager.scala ---
@@ -249,7 +249,14 @@ private[spark] class ExecutorAllocationManager
Github user witgo commented on a diff in the pull request:
https://github.com/apache/spark/pull/17480#discussion_r110804557
--- Diff:
core/src/main/scala/org/apache/spark/ExecutorAllocationManager.scala ---
@@ -249,7 +249,14 @@ private[spark] class ExecutorAllocationManager
Github user witgo commented on the issue:
https://github.com/apache/spark/pull/17480
OK, I will do the work over the weekend.
---
Github user witgo commented on a diff in the pull request:
https://github.com/apache/spark/pull/17480#discussion_r112825043
--- Diff:
core/src/main/scala/org/apache/spark/ExecutorAllocationManager.scala ---
@@ -249,7 +249,6 @@ private[spark] class ExecutorAllocationManager
Github user witgo closed the pull request at:
https://github.com/apache/spark/pull/17116
---
Github user witgo commented on the issue:
https://github.com/apache/spark/pull/17139
@kayousterhout The test report has been updated.
---
Github user witgo closed the pull request at:
https://github.com/apache/spark/pull/15505
---
Github user witgo commented on the issue:
https://github.com/apache/spark/pull/15505
Yes, multi-threaded task serialization code may give better
performance; let me close this PR.
---
Github user witgo commented on the issue:
https://github.com/apache/spark/pull/15505
[SPARK-18890_20170303](https://github.com/witgo/spark/commits/SPARK-18890_20170303)'s
code is older, but the test case running time is 5.2 s
---
Github user witgo commented on the issue:
https://github.com/apache/spark/pull/17139
ping @kayousterhout @squito
---
Github user witgo commented on the issue:
https://github.com/apache/spark/pull/17139
Added the multi-threaded code for serializing `TaskDescription`.
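The idea of serializing task descriptions off the scheduler's critical path can be sketched as below. This is an illustrative assumption, not the actual Spark patch: `serializeAll` and the use of Java serialization are stand-ins for whatever serializer the real code uses; the point is fanning the per-task work out to a fixed thread pool while preserving task order.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Sketch only: serialize many task payloads in parallel on a fixed thread
// pool instead of one-by-one on the scheduler thread. Not Spark's code.
public class ParallelSerializer {
  static byte[] serialize(Serializable obj) throws IOException {
    ByteArrayOutputStream bos = new ByteArrayOutputStream();
    try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
      oos.writeObject(obj);
    }
    return bos.toByteArray();
  }

  static List<byte[]> serializeAll(List<? extends Serializable> tasks, int threads)
      throws InterruptedException, ExecutionException {
    ExecutorService pool = Executors.newFixedThreadPool(threads);
    try {
      List<Future<byte[]>> futures = new ArrayList<>();
      for (Serializable t : tasks) {
        futures.add(pool.submit(() -> serialize(t))); // fan out per task
      }
      List<byte[]> out = new ArrayList<>();
      for (Future<byte[]> f : futures) {
        out.add(f.get()); // collect in submission order
      }
      return out;
    } finally {
      pool.shutdown();
    }
  }
}
```

Collecting futures in submission order keeps the serialized results aligned with the task list, so callers can still dispatch by index.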
---
Github user witgo commented on the issue:
https://github.com/apache/spark/pull/17567
LGTM.
Are there any performance test reports?
---
Github user witgo commented on the issue:
https://github.com/apache/spark/pull/17567
OK, I see.
---
Github user witgo commented on the issue:
https://github.com/apache/spark/pull/17480
The `ExecutorAllocationManager.reset` method is called when the AM
re-registers, which sets the `ExecutorAllocationManager.initializing` field to `true`. While
this field is `true`, the driver does not start a new
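The failure mode described here can be reduced to a toy model (illustrative only, not Spark's `ExecutorAllocationManager`): a flag that suppresses executor requests, and a `reset()` that re-arms the flag without regard to the existing task backlog, so pending work no longer triggers any requests.

```java
// Toy model of the described bug, NOT Spark's actual code: while
// `initializing` is true, no new executors are requested, even if
// tasks are already pending -- so the application hangs.
public class AllocationManager {
  private boolean initializing = true;
  private int pendingTasks = 0;

  public void onTasksSubmitted(int n) {
    pendingTasks += n;
    initializing = false; // normal path: scheduling activity clears the flag
  }

  // Called when the AM re-registers. Setting `initializing` back to true
  // without rechecking the existing backlog is what causes the hang.
  public void reset() {
    initializing = true;
  }

  public int executorsToRequest() {
    if (initializing) {
      return 0; // flag set: the driver requests no new executors
    }
    return pendingTasks;
  }
}
```

In this model, submitting 4 tasks makes `executorsToRequest()` return 4, but after `reset()` it returns 0 even though the 4 tasks are still pending, mirroring the hang after AM re-registration.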
Github user witgo commented on the issue:
https://github.com/apache/spark/pull/17480
@vanzin
Sorry, I do not understand what you mean. Will you submit a new PR with your
own ideas? If so, I will close this PR.
---
Github user witgo closed the pull request at:
https://github.com/apache/spark/pull/17882
---
Github user witgo commented on the issue:
https://github.com/apache/spark/pull/17882
I'm very sorry, I haven't taken the time to update this code recently.
@vanzin, thank you for your work. I'll close this PR.
---
Github user witgo commented on the issue:
https://github.com/apache/spark/pull/14995
I did not do much testing, but I think it can be used in a production
environment.
The URL:
https://github.com/witgo/spark/tree/SPARK-6235_Address_various_2G_limits
---
Github user witgo commented on the issue:
https://github.com/apache/spark/pull/18008
@JoshRosen, what's the tool in your screenshot?
---
Github user witgo commented on the issue:
https://github.com/apache/spark/pull/18008
@JoshRosen I see, thank you.
---
Github user witgo commented on the issue:
https://github.com/apache/spark/pull/17139
@jiangxb1987, yes. Do you have any questions?
---
Github user witgo commented on the issue:
https://github.com/apache/spark/pull/17882
@jerryshao @vanzin
Would you take some time to review this PR?
---
Github user witgo commented on a diff in the pull request:
https://github.com/apache/spark/pull/17882#discussion_r120652302
--- Diff:
resource-managers/yarn/src/main/scala/org/apache/spark/scheduler/cluster/YarnSchedulerBackend.scala
---
@@ -68,6 +68,8 @@ private[spark] abstract
Github user witgo commented on the issue:
https://github.com/apache/spark/pull/17882
@vanzin Done.
---
Github user witgo commented on a diff in the pull request:
https://github.com/apache/spark/pull/17882#discussion_r120644588
--- Diff:
resource-managers/yarn/src/main/scala/org/apache/spark/scheduler/cluster/YarnSchedulerBackend.scala
---
@@ -176,16 +179,6 @@ private[spark
Github user witgo commented on a diff in the pull request:
https://github.com/apache/spark/pull/17882#discussion_r121972913
--- Diff:
resource-managers/yarn/src/main/scala/org/apache/spark/scheduler/cluster/YarnSchedulerBackend.scala
---
@@ -176,16 +179,6 @@ private[spark
GitHub user witgo opened a pull request:
https://github.com/apache/spark/pull/17882
[WIP][SPARK-20079][try 2][yarn] Re-registration of AM hangs Spark cluster
in yarn-client mode.
See #17480
You can merge this pull request into a Git repository by running:
$ git pull https
Github user witgo commented on a diff in the pull request:
https://github.com/apache/spark/pull/21346#discussion_r195284967
--- Diff:
common/network-common/src/main/java/org/apache/spark/network/client/TransportClient.java
---
@@ -220,30 +196,91 @@ public long sendRpc(ByteBuffer
Github user witgo commented on a diff in the pull request:
https://github.com/apache/spark/pull/21346#discussion_r195287202
--- Diff:
common/network-common/src/main/java/org/apache/spark/network/protocol/UploadStream.java
---
@@ -0,0 +1,107 @@
+/*
+ * Licensed
Github user witgo commented on a diff in the pull request:
https://github.com/apache/spark/pull/21451#discussion_r191628993
--- Diff:
common/network-common/src/main/java/org/apache/spark/network/server/RpcHandler.java
---
@@ -38,15 +38,24 @@
*
* This method
Github user witgo commented on a diff in the pull request:
https://github.com/apache/spark/pull/21451#discussion_r192279111
--- Diff:
common/network-common/src/main/java/org/apache/spark/network/server/RpcHandler.java
---
@@ -38,15 +38,24 @@
*
* This method
Github user witgo commented on the issue:
https://github.com/apache/spark/pull/14658
Spark 2.2 has fixed this issue.
---